Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter



Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter

Amir Abboud∗   Fabrizio Grandoni†   Virginia Vassilevska Williams‡

∗ Stanford University, [email protected].
† IDSIA, University of Lugano, [email protected]. Partially supported by the ERC Starting Grant NEWNET 279352.
‡ Stanford University, [email protected]. Supported by NSF Grant CCF-1417238, BSF Grant BSF:2012338 and a Stanford SOE Hoover Fellowship.

Abstract

Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks, etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w.

The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node, take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e. algorithms with running time Õ(n^{3−δ}) for some constant δ > 0.¹

We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. We show that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e. that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP requires essentially cubic time.

¹ The Õ notation suppresses poly-logarithmic factors in n and M.

1 Introduction

Identifying the importance of nodes in networks is a major goal in the analysis of social networks (e.g., citation networks, recommendation networks, or friendship circles), biological systems (e.g., protein interaction networks), computer networks (e.g., the Internet or peer-to-peer networks), transportation networks (e.g., public transportation or road networks), etc. A variety of graph theoretic notions of node importance have been proposed, among the most relevant ones: betweenness centrality [20], graph centrality [29], closeness centrality [47], and reach centrality [28].

The graph centrality of a node w is the inverse of its maximum distance to any other node. The closeness centrality of w is the inverse of the total distance of w to all the other nodes. The reach centrality of w is the maximum distance between w and the closest endpoint of any s-t shortest path passing through w. Informally, the betweenness centrality of w measures the fraction of shortest paths having w as an intermediate node.

In this paper we study four fundamental graph centrality computational problems associated with the mentioned centrality measures. Let G = (V, E) be an n-node and m-edge (directed or undirected) graph, with integer edge weights w : E → {0, ..., M} for some M ≥ 1.² Let d_G(s, t) denote the distance from node s to node t, and let us use d(s, t) instead when G is clear from the context. Let also σ_{s,t} be the number of distinct shortest paths from s to t, and σ_{s,t}(b) be the number of such paths that use node b as an intermediate node.

² Though we focus here on non-negative weights, our results can be extended to the case of directed graphs with possibly negative weights and no negative cycles.

• The Radius problem is to compute R* := min_{r*∈V} max_{v∈V} d(r*, v) (the radius of the graph).

• The Median problem is to compute M* := min_{m*∈V} Σ_{v∈V} d(m*, v).

• The Reach Centrality problem (for a given node b) is to compute

  RC(b) = max { min{d(s, b), d(b, t)} : s, t ∈ V, d(s, t) = d(s, b) + d(b, t) }.

• The Betweenness Centrality problem (for a given node b) is to compute

  BC(b) := Σ_{s,t ∈ V−{b}, s≠t} σ_{s,t}(b) / σ_{s,t}.

All of these notions are related in one way or another to shortest paths. In particular, we can solve the first three problems by running an algorithm for the classical All-Pairs Shortest Paths problem (APSP) on the underlying graph and doing a negligible amount of post-processing. The same holds for Betweenness Centrality by assuming that shortest paths are unique: we will next make this assumption unless differently stated.³ Using the best known algorithms for APSP [55], this leads to a slightly subcubic (by an n^{o(1)} factor) running time for the considered problems, and no faster algorithm is known.
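To make the "negligible amount of post-processing" concrete, here is a minimal sketch (mine, not the paper's; the function names and the distance-matrix representation are assumptions) showing how all four quantities can be read off a precomputed all-pairs distance matrix, for instance one produced by a roughly cubic-time APSP algorithm such as Floyd-Warshall. Betweenness Centrality is computed under the unique-shortest-paths assumption just made, under which σ_{s,t}(b)/σ_{s,t} equals 1 exactly when d(s, b) + d(b, t) = d(s, t) and 0 otherwise.

    # Sketch only: centrality values from an APSP distance matrix.
    # d is an n x n matrix (list of lists) with d[u][v] = dist(u, v),
    # using float('inf') for unreachable pairs.
    INF = float("inf")

    def radius(d):
        # R* = min over candidate centers r of the eccentricity max_v d(r, v)
        return min(max(row) for row in d)

    def median(d):
        # M* = min over candidate medians m of the total distance sum_v d(m, v)
        return min(sum(row) for row in d)

    def reach_centrality(d, b):
        # RC(b) = max of min(d(s, b), d(b, t)) over pairs (s, t)
        # with d(s, t) = d(s, b) + d(b, t)
        n = len(d)
        best = 0
        for s in range(n):
            for t in range(n):
                if d[s][t] < INF and d[s][b] + d[b][t] == d[s][t]:
                    best = max(best, min(d[s][b], d[b][t]))
        return best

    def betweenness_centrality_unique(d, b):
        # With unique shortest paths, b is an intermediate node of the shortest
        # s-t path exactly when d(s, b) + d(b, t) = d(s, t), so BC(b) is a count.
        n = len(d)
        return sum(1 for s in range(n) for t in range(n)
                   if s != b and t != b and s != t
                   and d[s][t] < INF and d[s][b] + d[b][t] == d[s][t])

Each routine runs in O(n²) time once the distance matrix is available, which is why the entire difficulty of these problems is concentrated in the APSP computation itself.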
Each of these problems, however, only asks for the computation of a single number. It is natural to ask: is solving APSP necessary? Could it be that these problems admit much more efficient solutions? In particular, do they admit a truly subcubic⁴ algorithm?

Besides the fundamental interest in understanding the relations between such basic computational problems (can Radius be solved truly faster than APSP?), these questions are well motivated from a practical viewpoint. As evidence of the need for faster algorithms for the mentioned centrality problems, we remark that some papers presenting algorithms for Betweenness Centrality [6] and Median [30] have received more than a thousand citations each.

1.1 Approach. In this paper we address these questions with an approach which can be viewed as a refinement of NP-completeness. The approach strives to prove, via combinatorial reductions, that improving on a given upper bound for a computational problem B would yield breakthrough algorithms for many other famous and well-studied problems. At a high level, the idea is to consider a given prototypical problem A for which the fastest known algorithm has running time Õ(n^c) (here n is a size parameter). Then we show that an Õ(n^{c−ε})-time algorithm for a second problem B, for some constant ε > 0, would imply an Õ(n^{c−δ})-time algorithm for problem A for some other constant δ > 0. This can be used as evidence that an Õ(n^{c−ε})-time algorithm for problem B is unlikely to exist (or at least very hard to find). For c = 3 a reduction of the above kind is called a subcubic reduction [53] from A to B. We say that two problems A and B are equivalent under subcubic reductions if there exists a subcubic reduction from A to B and from B to A. In other terms, a truly subcubic algorithm for one problem implies a truly subcubic algorithm for the other and vice versa.

Vassilevska Williams and Williams [53] introduced this approach to the realm of Graph Algorithms to show the subcubic equivalence between APSP and a list of seven other problems, including deciding if an edge-weighted graph has a triangle with negative total weight […] the famous results on 3-SUM hardness starting with the work of Gajentaan and Overmars [21].
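In shorthand (the notation A ≤₃ B is mine, and this is a paraphrase of the description above rather than the formal oracle-based definition of [53]), the guarantee provided by a subcubic reduction from A to B, i.e. the case c = 3, is the following:

    % Informal restatement of the c = 3 case described in the text above.
    % The formal definition in [53] is stated in terms of oracle reductions.
    \[
      A \le_3 B:\qquad
      \bigl(\exists\,\varepsilon>0:\ B \text{ solvable in } \tilde{O}(n^{3-\varepsilon}) \text{ time}\bigr)
      \;\Longrightarrow\;
      \bigl(\exists\,\delta>0:\ A \text{ solvable in } \tilde{O}(n^{3-\delta}) \text{ time}\bigr).
    \]
    % Equivalence under subcubic reductions means A <=_3 B and B <=_3 A,
    % so A admits a truly subcubic algorithm if and only if B does.

Read in the contrapositive, if A (say, APSP) cannot be solved in truly subcubic time, then neither can B; this is exactly how the hardness results below are meant to be read.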
In this paper we exploit both APSP and Diameter as our prototypical problems and prove a collection of subcubic equivalences with the above graph centrality problems. Recall that the Diameter problem is to compute the largest distance in the graph. There is a trivial subcubic reduction from Diameter to APSP and, although no truly subcubic algorithm is known for Diameter, finding a reduction in the opposite direction is one of the big open questions in this area: can we compute the largest distance faster than we can compute all the distances?

1.2 Subcubic equivalences with APSP. Our first main result is to show that Radius, Median and Betweenness Centrality are equivalent to APSP under subcubic reductions! Therefore, we add three quite different problems to the list of APSP-hard problems [53], and if any of these problems can be solved in truly subcubic time then all of them can.

THEOREM 1.1. Radius, Median, and Betweenness Centrality are equivalent to APSP under subcubic reductions.

Unfortunately, this is strong evidence that a truly subcubic algorithm for computing these centrality measures is unlikely to exist (or at least very hard to find), since it would imply a huge and unexpected algorithmic breakthrough.

We find the APSP-hardness result for Radius quite interesting since, prior to our work, there was no good reason to believe that Radius might be a truly harder problem than Diameter. Indeed, in terms of approximation algorithms, any known algorithm to approximate the diameter can be converted to also approximate the radius in undirected graphs within the same factor [3, 5, 10, 45]. Furthermore, the exact algorithms for Diameter and Radius in graphs with small integer weights are also extremely similar [13]. Our results seem to indicate the following bizarre phenomenon. In dense graphs, Radius seems to be harder than Diameter, as it is actually equivalent to APSP, whereas a Diameter/APSP equivalence seems elusive. In sparse graphs, however, Diameter seems more difficult than Radius, since a known reduction [45] from CNF-SAT seems to imply that even approximating the Diameter in subquadratic time is hard.
Recommended publications
  • Quantum Algorithms for Matching Problems
    Quantum Algorithms for Matching Problems
    Sebastian Dörn, Institut für Theoretische Informatik, Universität Ulm, 89069 Ulm, Germany, [email protected]

    Abstract. We present quantum algorithms for the following matching problems in unweighted and weighted graphs with n vertices and m edges:
    – Finding a maximal matching in general graphs in time O(√(nm) log² n).
    – Finding a maximum matching in general graphs in time O(n √m log² n).
    – Finding a maximum weight matching in bipartite graphs in time O(n √(mN) log² n), where N is the largest edge weight.
    – Finding a minimum weight perfect matching in bipartite graphs in time O(n √(nm) log³ n).
    These results improve the best classical complexity bounds for the corresponding problems. In particular, the second result solves an open question stated in a paper by Ambainis and Špalek [AS06].

    1 Introduction
    A matching in a graph is a set of edges such that for every vertex at most one edge of the matching is incident on the vertex. The task is to find a matching of maximum cardinality. The matching problem has many important applications in graph theory and computer science. In this paper we study the complexity of algorithms for the matching problems on quantum computers and compare these to the best known classical algorithms. We will consider different versions of the matching problems, depending on whether the graph is bipartite or not and whether the graph is unweighted or weighted. For unweighted graphs, the best classical algorithm for finding a matching of maximum cardinality is based on augmenting paths and has running time O(√n m) (see Micali and Vazirani [MV80]).
  • Lecture 18 Solving Shortest Path Problem: Dijkstra's Algorithm
    Lecture 18: Solving Shortest Path Problem: Dijkstra's Algorithm
    October 23, 2009

    Outline
    • Focus on Dijkstra's Algorithm
    • Importance: Where has it been used?
    • Algorithm's general description
    • Algorithm steps in detail
    • Example

    One-To-All Shortest Path Problem
    We are given a weighted network (V, E, C) with node set V, edge set E, and the weight set C specifying weights cij for the edges (i, j) ∈ E. We are also given a starting node s ∈ V. The one-to-all shortest path problem is the problem of determining the shortest path from node s to all the other nodes in the network. The weights on the links are also referred to as costs.

    Algorithms Solving the Problem
    • Dijkstra's algorithm
      – Solves only the problems with nonnegative costs, i.e., cij ≥ 0 for all (i, j) ∈ E
    • Bellman-Ford algorithm
      – Applicable to problems with arbitrary costs
    • Floyd-Warshall algorithm
      – Applicable to problems with arbitrary costs
      – Solves a more general all-to-all shortest path problem
    Floyd-Warshall and Bellman-Ford algorithms solve the problems on graphs that do not have a cycle with negative cost.

    Importance of Dijkstra's algorithm
    Many more problems than you might at first think can be cast as shortest path problems, making Dijkstra's algorithm a powerful and general tool. For example:
    • Dijkstra's algorithm is applied to automatically find directions between physical locations, such as driving directions on websites like Mapquest or Google Maps.
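    For reference, here is a compact implementation of the one-to-all variant described above: a standard binary-heap Dijkstra, not taken from the lecture slides, with an adjacency-list input format that is my own assumption.

        import heapq

        def dijkstra(adj, s):
            # adj: dict mapping each node to a list of (neighbor, cost) pairs, costs >= 0.
            # Returns a dict with the shortest-path cost from s to every reachable node.
            dist = {s: 0}
            pq = [(0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue  # stale queue entry; u was already settled with a smaller cost
                for v, c in adj.get(u, []):
                    nd = d + c
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return dist

    For example, dijkstra({'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}, 's') returns {'s': 0, 'a': 2, 'b': 3}.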
  • Faster Shortest-Path Algorithms for Planar Graphs
    Journal of Computer and System Sciences 55, 3–23 (1997), article no. SS971493

    Faster Shortest-Path Algorithms for Planar Graphs
    Monika R. Henzinger, Department of Computer Science, Cornell University, Ithaca, New York 14853
    Philip Klein, Department of Computer Science, Brown University, Providence, Rhode Island 02912
    Satish Rao, NEC Research Institute
    and Sairam Subramanian, Bell Northern Research
    Received April 18, 1995; revised August 13, 1996

    We give a linear-time algorithm for single-source shortest paths in planar graphs with nonnegative edge-lengths. Our algorithm also yields a linear-time algorithm for maximum flow in a planar graph with the source and sink on the same face. For the case where negative edge-lengths are allowed, we give an algorithm requiring O(n^{4/3} log(nL)) time, where L is the absolute value of the most negative length. This algorithm can be used to obtain similar bounds for computing a feasible flow in a planar network, for finding a perfect matching in a planar bipartite graph, and for finding a maximum flow in a planar graph when the source and sink are not on the same face.

    …and matching). In this paper we give improved algorithms for single-source shortest paths in planar networks. The first algorithm handles the important special case of nonnegative edge-lengths. For this case, we obtain a linear-time algorithm. Thus our algorithm is optimal, up to constant factors. No linear-time algorithm for shortest paths in planar graphs was previously known. For general graphs the best bounds known in the standard model, which forbids bit-manipulation of the lengths, is O(m + n log n) time,
  • Shortest Path Problems and Algorithms
    Shortest path problems and algorithms

    1. Shortest path problems
       • Shortest path problem
       • Single-source shortest path problem
       • Single-destination shortest path problem
       • All-pairs shortest path problem
    2. Algorithms
       • Dijkstra's algorithm for the single-source shortest path problem (positive weights)
       • Bellman–Ford algorithm for the single-source shortest path problem (allows negative weights)
       • Floyd–Warshall algorithm for all-pairs shortest paths
       • Johnson's algorithm for all-pairs shortest paths

    1. Shortest path problems
    Shortest path problem. Input: weighted graph G = (V, E, W), source vertex s and destination vertex t. Output: a shortest path of G from s to t.
    Single-source shortest path problem. Output: shortest paths from source vertex s to all other vertices of G.
    Single-destination shortest path problem. Output: shortest paths from all vertices of G to a single destination t.
    All-pairs shortest path problem. Output: shortest paths between every pair of vertices u, v in the graph.

    History of SPP and algorithms
    • Shimbel (1955). Information networks.
    • Ford (1956). RAND, economics of transportation.
    • Leyzorek, Gray, Johnson, Ladew, Meaker, Petry, Seitz (1957). Combat Development Dept. of the Army Electronic Proving Ground.
    • Dantzig (1958). Simplex method for linear programming.
    • Bellman (1958). Dynamic programming.
    • Moore (1959). Routing long-distance telephone calls for Bell Labs.
    • Dijkstra (1959). Simpler and faster version of Ford's algorithm.

    Dijkstra's algorithm
    Edsger W. Dijkstra's quotes:
    • "The question of whether computers can think is like the question of whether submarines can swim."
    • "In their capacity as a tool, computers will be but a ripple on the surface of our culture."
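    Of the algorithms listed above, Floyd–Warshall is the one most directly tied to the APSP discussion at the top of this page. A minimal sketch, assuming an adjacency-matrix input (the representation and function name are my own, not from these notes):

        def floyd_warshall(w):
            # w: n x n matrix with w[u][v] = weight of edge (u, v), float('inf') if absent,
            # and w[u][u] = 0. Assumes the graph has no negative-weight cycle.
            n = len(w)
            d = [row[:] for row in w]  # work on a copy
            for k in range(n):
                for i in range(n):
                    dik = d[i][k]
                    for j in range(n):
                        if dik + d[k][j] < d[i][j]:
                            d[i][j] = dik + d[k][j]
            return d  # d[u][v] = shortest-path distance from u to v

    The three nested loops give the O(n³) running time that the centrality discussion earlier on this page treats as the baseline to beat.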
  • The On-Line Shortest Path Problem Under Partial Monitoring
    Journal of Machine Learning Research 8 (2007) 2369–2403. Submitted 4/07; Revised 8/07; Published 10/07

    The On-Line Shortest Path Problem Under Partial Monitoring

    András György, [email protected], Machine Learning Research Group, Computer and Automation Research Institute, Hungarian Academy of Sciences, Kende u. 13-17, Budapest, Hungary, H-1111
    Tamás Linder, [email protected], Department of Mathematics and Statistics, Queen's University, Kingston, Ontario, Canada K7L 3N6
    Gábor Lugosi, [email protected], ICREA and Department of Economics, Universitat Pompeu Fabra, Ramon Trias Fargas 25-27, 08005 Barcelona, Spain
    György Ottucsák, [email protected], Department of Computer Science and Information Theory, Budapest University of Technology and Economics, Magyar Tudósok Körútja 2., Budapest, Hungary, H-1117

    Editor: Leslie Pack Kaelbling

    Abstract
    The on-line shortest path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices such that the loss of the chosen path (defined as the sum of the weights of its composing edges) be as small as possible. In a setting generalizing the multi-armed bandit problem, after choosing a path, the decision maker learns only the weights of those edges that belong to the chosen path. For this problem, an algorithm is given whose average cumulative loss in n rounds exceeds that of the best path, matched off-line to the entire sequence of the edge weights, by a quantity that is proportional to 1/√n and depends only polynomially on the number of edges of the graph.
  • Solving the Robust Shortest Path Problem with Interval Data Using a Probabilistic Metaheuristic Approach
    econstor, a service of the ZBW Leibniz-Informationszentrum Wirtschaft / Leibniz Information Centre for Economics

    Nikulin, Yury
    Working Paper (Digitized Version): Solving the robust shortest path problem with interval data using a probabilistic metaheuristic approach
    Manuskripte aus den Instituten für Betriebswirtschaftslehre der Universität Kiel, No. 597
    Provided in Cooperation with: Christian-Albrechts-University of Kiel, Institute of Business Administration

    Suggested Citation: Nikulin, Yury (2005): Solving the robust shortest path problem with interval data using a probabilistic metaheuristic approach, Manuskripte aus den Instituten für Betriebswirtschaftslehre der Universität Kiel, No. 597, Universität Kiel, Institut für Betriebswirtschaftslehre, Kiel

    This Version is available at: http://hdl.handle.net/10419/147655

    Terms of use: Documents in EconStor may be saved and copied for your personal and scholarly purposes. You are not to copy documents for public or commercial purposes, to exhibit the documents publicly, to make them publicly available on the internet, or to distribute or otherwise use the documents in public. If the documents have been made available under an Open Content Licence (especially Creative Commons Licences), you may exercise further usage rights as specified in the indicated licence.

    www.econstor.eu
  • Shared Shortest Paths in Graphs
    Shared Shortest Paths in Graphs
    Ronald Fenelus, Florida International University; Zeal Jagannatha, Ohio Wesleyan University
    Mentor: Sean McCulloch, Department of Mathematics and Computer Science, Ohio Wesleyan University

    The Shared Shortest Path Problem (SSPP) asks how to route paths in a graph to minimize cost when paths split evenly the costs of journeys they mutually use.

    Graph Theory Background
    Basics: A graph is a set of vertices and a set of edges. Each edge consists of a pair of vertices. In the case of the SSPP, graphs are weighted, meaning in addition to a pair of vertices each edge has a cost associated to it. Graphs may be directed or undirected, meaning that

    Game Theory
    One method of analysis takes a game theoretic approach to the problem, treating each individual journey as a selfish agent.

    NP-Complete
    Finding minimum total cost solutions to the SSPP is classified as an NP-Complete problem. This can be determined by a simple polynomial time reduction from the Steiner Tree Problem, which asks what is the minimum total weight tree that connects a set of points R. These points are then treated as start and destination points of journeys in an instance of the SSPP problem.

    Nash Equilibrium Finder
    A Nash Equilibrium can be found algorithmically from any valid configuration of the SSPP game, such as one derived from the above approximation methods. The algorithm for finding an equilibrium simply checks whether any journey can make a better path choice and moves this journey if it can. Because the SSPP game is a congestion game this
  • Map-Matching Using Shortest Paths
    Map-Matching Using Shortest Paths
    Erin Chambers, Department of Computer Science, Saint Louis University, Saint Louis, Missouri, USA, [email protected]
    Brittany Terese Fasy, School of Computing and Dept. Mathematical Sciences, Montana State University, Montana, USA, [email protected]
    Yusu Wang, Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA, [email protected]
    Carola Wenk, Department of Computer Science, Tulane University, New Orleans, Louisiana, USA, [email protected]

    ABSTRACT
    We consider several variants of the map-matching problem, which seeks to find a path Q in graph G that has the smallest distance to a given trajectory P (which is likely not to be exactly on the graph). In a typical application setting, P models a noisy GPS trajectory from a person traveling on a road network, and the desired path Q should ideally correspond to the actual path in G that the person has traveled. Existing map-matching algorithms in the literature consider all possible paths in G as potential candidates for Q. We …

    …the desired path Q should correspond to the actual path in G that the person has traveled. Map-matching algorithms in the literature [1, 2, 6] consider all possible paths in G as potential candidates for Q, and apply similarity measures such as Hausdorff or Fréchet distance to compare input curves. We propose to restrict the set of potential paths in G to a natural subset: those paths that correspond to shortest paths, or concatenations of shortest paths, in G. Restricting the set of paths to which a path can be matched makes sense in many settings.
  • The Shortest Path Problem Revisited: Optimal Routing for Electric Vehicles
    The Shortest Path Problem Revisited: Optimal Routing for Electric Vehicles
    Martin Sachenbacher
    Technische Universität München, Department of Informatics
    Boltzmannstraße 3, 85748 Garching, Germany
    {sachenba}@in.tum.de

    Introduction. Electric vehicles (EV) powered by batteries will likely play a significant role in the road traffic of the future: they can be powered by regenerative energy sources such as wind and solar power, and they can recover some of their kinetic and/or potential energy during deceleration phases. However, the unique characteristics of EVs – limited cruising range, long recharge times, and the ability to regain energy during deceleration – have an impact on algorithms used in navigation systems and route planners, since it now becomes more important to determine the most economical route rather than just the fastest or shortest one. This modification might appear insignificant at first glance, as it seems enough to simply exchange time

    This problem is similar to the shortest path problem (SP), which consists of finding a path P in a graph from a source vertex s to a destination vertex t such that c(P) = min_{Q∈U} c(Q), where U is the set of all paths from s to t, and c : E → Z is a weight function on the edges E of the graph. SP is polynomial; the best known algorithm for the case of non-negative edges is Dijkstra (Dijkstra 1959) with time complexity O(n²), while in the general case, Bellman-Ford (Bellman 1958), (L. R. Ford 1956) has O(n³). However, most commonly used SP algorithms like contraction hierarchies (Geisberger et al. 2008), highway hierarchies (Sanders and Schultes 2005) and transit vertex routing (Bast et al.
  • Some NP-Complete Problems
    Appendix A
    Some NP-Complete Problems

    To ask the hard question is simple. But what does it mean? What are we going to do? (W. H. Auden)

    In this appendix we present a brief list of NP-complete problems; we restrict ourselves to problems which either were mentioned before or are closely related to subjects treated in the book. A much more extensive list can be found in Garey and Johnson [GarJo79].

    Chinese postman (cf. Sect. 14.5). Let G = (V, A, E) be a mixed graph, where A is the set of directed edges and E the set of undirected edges of G. Moreover, let w be a nonnegative length function on A ∪ E, and c be a positive number. Does there exist a cycle of length at most c in G which contains each edge at least once and which uses the edges in A according to their given orientation? This problem was shown to be NP-complete by Papadimitriou [Pap76], even when G is a planar graph with maximal degree 3 and w(e) = 1 for all edges e. However, it is polynomial for graphs and digraphs, that is, if either A = ∅ or E = ∅. See Theorem 14.5.4 and Exercise 14.5.6.

    Chromatic index (cf. Sect. 9.3). Let G be a graph. Is it possible to color the edges of G with k colors, that is, does χ′(G) ≤ k hold? Holyer [Hol81] proved that this problem is NP-complete for each k ≥ 3; this holds even for the special case where k = 3 and G is 3-regular.
  • Tree-Breadth of Graphs with Variants and Applications
    Tree-Breadth of Graphs with Variants and Applications
    A dissertation submitted to Kent State University in partial fulfilment of the requirements for the degree of Doctor of Philosophy
    by Arne Leitert, August 2017

    Dissertation written by Arne Leitert
    Diplom, University of Rostock, 2012
    Ph.D., Kent State University, 2017

    Approved by Dr. Feodor F. Dragan (Chair, Doctoral Dissertation Committee); Dr. Ruoming Jin, Dr. Ye Zhao, Dr. Artem Zvavitch, Dr. Lothar Reichel (Members, Doctoral Dissertation Committee)
    Accepted by Dr. Javed Khan (Chair, Department of Computer Science); Dr. James L. Blank (Dean, College of Arts and Sciences)

    Contents
    List of Figures
    List of Tables
    1 Introduction
      1.1 Existing and Recent Results
      1.2 Outline
    2 Preliminaries
      2.1 General Definitions
      2.2 Tree-Decompositions
      2.3 Layering Partitions
      2.4 Special Graph Classes
    3 Computing Decompositions with Small Breadth
      3.1 Approximation Algorithms
        3.1.1 Layering Based Approaches
        3.1.2 Neighbourhood Based Approaches
      3.2 Strong Tree-Breadth
        3.2.1 NP-Completeness
        3.2.2 Polynomial Time Cases
      3.3 Computing Decompositions for Special Graph Classes
    4 Connected r-Domination
      4.1 Using a Layering Partition
      4.2 Using a Tree-Decomposition
        4.2.1 Preprocessing
        4.2.2 r-Domination
        4.2.3 Connected r-Domination
      4.3 Implications for the p-Center Problem
    5 Bandwidth and Line-Distortion
      5.1 Existing Results
      5.2 k-Dominating Pairs
  • Bellman Ford and Floyd Warshall
    CS161 Lecture 12: Bellman-Ford and Floyd-Warshall
    Scribe by: Eric Huang (2015), Anthony Kim (2016), M. Wootters (2017)
    Date: August 2, 2021 (Based on Virginia Williams' lecture notes)

    1 The Bellman-Ford Algorithm
    The Bellman-Ford algorithm is a dynamic programming algorithm, and dynamic programming is a basic paradigm in algorithm design used to solve problems by relying on intermediate solutions to smaller subproblems. The main step for solving a dynamic programming problem is to analyze the problem's optimal substructure and overlapping subproblems. The Bellman-Ford algorithm is pretty simple to state:

    Algorithm 1: Bellman-Ford Algorithm (G, s)
        d(0)[v] = ∞ for all v ∈ V
        d(0)[s] = 0
        d(k)[v] = None for all v ∈ V and all k > 0
        for k = 1, ..., n − 1 do
            d(k)[v] ← d(k−1)[v] for all v ∈ V
            for (u, v) ∈ E do
                d(k)[v] ← min{d(k)[v], d(k−1)[u] + w(u, v)}
            // here we can release the memory for d(k−1), we'll never need it again.
        return d(n−1)[v] for all v ∈ V

    What's going on here? The value d(k)[v] is the cost of the shortest path from s to v with at most k edges in it. Once we realize this, a proof by induction (similar to the one in Lecture Notes 11) falls right out, with the inductive hypothesis that "d(k)[v] is the cost of the shortest path from s to v with at most k edges in it."

    Runtime and Storage. The runtime of the Bellman-Ford algorithm is O(mn); for n iterations, we loop through all the edges.
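    For completeness, a direct and runnable translation of the pseudocode above, with the d(k) tables collapsed into a single array (which the released-memory comment already suggests); the edge-list input format is my own assumption:

        def bellman_ford(n, edges, s):
            # n: number of vertices labeled 0..n-1; edges: list of (u, v, w) triples; s: source.
            # Returns d with d[v] = cost of a shortest s-to-v path (float('inf') if v is
            # unreachable), assuming no negative cycle is reachable from s.
            INF = float("inf")
            d = [INF] * n
            d[s] = 0
            for _ in range(n - 1):        # n-1 rounds, as in the pseudocode
                for u, v, w in edges:     # relax every edge once per round
                    if d[u] + w < d[v]:
                        d[v] = d[u] + w
            return d

    The two nested loops make the O(mn) runtime stated above explicit.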