Algorithms and Data Structures

Shortest Paths
Marcin Sydow (Web Mining Lab, PJWSTK)

Topics covered by this lecture:

- Shortest Path Problem
- Variants
- Relaxation
- Shortest paths in a DAG
- Nonnegative weights (Dijkstra)
- Arbitrary weights (Bellman-Ford)
- (*) All-pairs algorithm

Example: Fire Brigade

Consider the following example. There is a navigating system for the fire brigade. The task of the system is as follows: when there is a fire at some point f in the city, the system has to quickly compute the shortest way from the fire brigade base to the point f. The system has access to full information about the current topology of the city (the map includes all the streets and crossings in the city) and the estimated latencies on all sections of streets between all crossings. (Notice that some streets can be unidirectional. Assume that the system is aware of this.)

The Shortest Paths Problem

INPUT: a graph G = (V, E) with weights on edges given by a function w: E → R, and a starting node s ∈ V.

OUTPUT: for each node v ∈ V:

- the shortest-path distance µ(s, v) from s to v, if it exists
- the parent of v in a shortest-path tree (if it exists), to reconstruct the shortest path

The shortest path may not exist, for two reasons:

- there may be no path from s to v in the graph
- there may be a negative cycle in the graph (in such a case there is no lower bound on the path length)

Variants

When we design the best algorithm for the shortest paths problem, we can exploit special properties of the graph, for example:

- the graph G is directed or undirected
- the weights are non-negative, or integer, etc. (each case can be approached differently to obtain the most efficient algorithm)
- the graph is acyclic, etc.

We will see different algorithms for different variants.

Shortest Paths - properties

If G = (V, E) is an arbitrary graph and s, v ∈ V, we use the following convention for the value of µ(s, v), the shortest-path distance from s to v in G:

- µ(s, v) = +∞ when there is no path from s to v
- µ(s, v) = −∞ when there exists a path from s to v that contains a negative cycle
- µ(s, v) = d ∈ R otherwise

(We write µ(v) instead of µ(s, v) when s is clear from the context.)

Notice the following important property of shortest paths: a subpath of a shortest path is itself a shortest path (i.e. if (v1, ..., v_{k−1}, vk) is a shortest path from v1 to vk, then (v1, ..., v_{k−1}) must be a shortest path from v1 to v_{k−1}; simple proof by contradiction).
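Before the algorithms, it may help to fix a concrete representation. The following is a small illustrative Python sketch, not part of the original slides: the graph, the node names and the initialise helper are made-up examples. It stores a weighted directed graph as an adjacency list and performs the initialisation described in the next section.

    import math

    # A weighted directed graph as an adjacency list:
    # graph[u] is a list of (v, w) pairs meaning an edge u -> v with weight w.
    graph = {
        "base": [("a", 2.0), ("b", 5.0)],   # hypothetical street sections and latencies
        "a":    [("b", 1.0), ("f", 7.0)],
        "b":    [("f", 2.0)],
        "f":    [],                          # f: the location of the fire
    }

    def initialise(graph, s):
        # The source gets distance 0 and itself as parent;
        # every other node gets +infinity and no parent yet.
        distance = {v: math.inf for v in graph}
        parent = {v: None for v in graph}
        distance[s] = 0.0
        parent[s] = s
        return distance, parent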
The idea of computing shortest paths

The general idea is similar to BFS from a source node s. We keep two attributes with each node v:

- v.distance: the shortest currently known distance from s to v
- v.parent: the predecessor of v on the shortest currently known path from s

Initialisation: s.distance = 0, s.parent = s, and all other nodes have distance set to infinity and parent set to null.

The attributes are updated by propagation along edges; this is called relaxation of an edge.

The Concept of Edge Relaxation

    relax((u,v))   # (u,v) is an edge in E
        if u.distance + w(u,v) < v.distance
            v.distance = u.distance + w(u,v)
            v.parent = u

The algorithms relax edges until the shortest paths have been found or a negative cycle has been discovered.

Relaxation has some important properties, e.g. (assuming the prior initialisation described above):

- after any sequence of relaxations, ∀ v ∈ V: v.distance ≥ µ(v) (simple proof by induction on the number of edge relaxations); that is, the distance discovered by relaxations never drops below the shortest-path distance.

It can be proven (by induction) that relaxations are indeed the right tool to compute shortest paths.

Correctness of Relaxation

Lemma. After a sequence R of edge relaxations that contains (as a subsequence) a shortest path p = (e1, ..., ek) from s to v, we have v.distance = µ(s, v).

Proof: Because p is a shortest path, µ(v) = Σ_{1 ≤ j ≤ k} w(ej). Let vi denote the target node of ei, for 0 < i ≤ k (and v0 = s). We show, by induction, that after the i-th of these relaxations, vi.distance ≤ Σ_{1 ≤ j ≤ i} w(ej). It is true at the beginning (s.distance == 0). Then, after the i-th relaxation, vi.distance ≤ v_{i−1}.distance + w(ei) ≤ Σ_{1 ≤ j ≤ i} w(ej) (by the definition of relaxation and the induction hypothesis). Thus, after the k-th relaxation, v.distance ≤ µ(v). But v.distance cannot be lower than µ(v) (previous slide), so v.distance == µ(v) holds.

Shortest Paths in a Directed Acyclic Graph (case 1)

A directed acyclic graph (DAG) G = (V, E) is a simple case for computing shortest paths from a source s.

First, any DAG can be topologically sorted (e.g. by DFS) in O(m + n) time (where m = |E| and n = |V|, and edge traversal is the dominating operation), resulting in a sequence of vertices (v1, ..., vn).

Then, assuming s = vj for some 0 < j ≤ n, we relax all edges outgoing from vj, then all edges outgoing from v_{j+1}, etc. Thus, each edge is relaxed at most once. Since each relaxation takes constant time, the total time complexity is O(m + n).
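As an illustration of case 1, here is a minimal sketch in Python (my own illustration, not the slides' code; it assumes the adjacency-list representation from the earlier sketch and the graph there must be acyclic). It topologically sorts the DAG by DFS and then relaxes the outgoing edges of each vertex once, which gives O(m + n) overall.

    import math

    def dag_shortest_paths(graph, s):
        # graph: adjacency list {u: [(v, w), ...]}; must be a DAG.
        distance = {v: math.inf for v in graph}
        parent = {v: None for v in graph}
        distance[s] = 0.0
        parent[s] = s

        # Topological sort by DFS (reverse post-order).
        order, visited = [], set()
        def dfs(u):
            visited.add(u)
            for v, _ in graph[u]:
                if v not in visited:
                    dfs(v)
            order.append(u)
        for u in graph:
            if u not in visited:
                dfs(u)
        order.reverse()

        # One pass: relax every edge at most once, in topological order.
        for u in order:
            if distance[u] == math.inf:      # u is not reachable from s (e.g. it precedes s)
                continue
            for v, w in graph[u]:
                if distance[u] + w < distance[v]:
                    distance[v] = distance[u] + w
                    parent[v] = u
        return distance, parent

    # e.g. distance, parent = dag_shortest_paths(graph, "base")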
Dijkstra's Algorithm: only non-negative weights (case 2)

Another case is when there are no negative weights on edges (thus there are no negative cycles, but a cycle can exist, so topological sorting is not possible).

The idea is as follows: if we relax edges in the order of non-decreasing shortest-path distance from the source s, then each edge is relaxed exactly once.

High-level pseudo-code of Dijkstra's algorithm:

    initialise
    while (there is an unscanned node with finite distance)
        u = the minimum-distance unscanned node
        relax all edges (u,v)
        u becomes scanned

In the following proof of correctness, for any node v we denote v.distance by d(v), for short.

Correctness of Dijkstra's Algorithm

It is quite easy to observe that each node v reachable from s will be scanned by Dijkstra's algorithm (e.g. by induction on the unweighted length of the shortest path from s to v).

To complete the proof, we show that when any node v is scanned, d(v) = µ(v). By contradiction, let t be the first moment in time when some node v is scanned with d(v) > µ(v) (i.e. v contradicts what we want to prove). Let (s = v1, ..., vk = v) be a shortest path from s to this node and let i be the smallest index on this path such that vi was not scanned before time t. Since v1 = s and d(s) = 0 = µ(s), it must be that i > 1. By the definition of i, v_{i−1} was scanned before time t, and d(v_{i−1}) = µ(v_{i−1}) when it was scanned (due to the definition of t). Thus, when v_{i−1} is scanned, d(vi) is set to d(v_{i−1}) + w(v_{i−1}, vi) = µ(v_{i−1}) + w(v_{i−1}, vi) = µ(vi) (because a prefix of a shortest path is itself a shortest path). To summarise, at time t we have d(vi) = µ(vi) ≤ µ(vk) < d(vk), where the middle inequality uses the non-negativity of the weights. Hence vi should be scanned instead of vk (and they are different nodes due to the strict inequality), a contradiction.

Algorithm (Dijkstra)

(pq is a priority queue with a decreaseKey operation; the priority is the value of the distance attribute)

    s.distance = 0
    pq.insert(s)
    s.parent = s
    for-each v in V except s:
        v.distance = INFINITY
        v.parent = null
    while (!pq.isEmpty())
        scannedNode = pq.delMin()
        for-each v in scannedNode.adjList:
            if (v.distance > scannedNode.distance + w(scannedNode, v))
                v.distance = scannedNode.distance + w(scannedNode, v)
                v.parent = scannedNode
                if (pq.contains(v)) pq.decreaseKey(v)
                else pq.insert(v)

(To implement the contains and decreaseKey operations of the priority queue efficiently, an additional dictionary can be kept, mapping nodes to their positions (pointers) in the priority queue.)

Dijkstra - analysis

- data size: n = |V|, m = |E|
- dominating operations: comparison of priorities (inside the priority queue), attribute updates
- initialisation: O(n)
- loop: O(n × (delMin + insert) + m × decreaseKey)
- with a binary heap this is O(n log n) + O(m log n) = O((n + m) log n)

(*) Faster
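As a complement to the pseudo-code above, here is one common way to realise it in Python (a sketch under my own assumptions, not the slides' code; the function name and the graph format follow the earlier illustrative snippets). Instead of decreaseKey, it uses the standard-library heapq with "lazy deletion": outdated queue entries are simply skipped when popped, which keeps the O((n + m) log n) bound for a binary heap.

    import heapq
    import math

    def dijkstra(graph, s):
        # graph: adjacency list {u: [(v, w), ...]}; all weights must be non-negative.
        distance = {v: math.inf for v in graph}
        parent = {v: None for v in graph}
        distance[s] = 0.0
        parent[s] = s
        pq = [(0.0, s)]                      # heap entries are (distance, node)
        scanned = set()
        while pq:
            d, u = heapq.heappop(pq)
            if u in scanned:                 # outdated entry; skipping it replaces decreaseKey
                continue
            scanned.add(u)
            for v, w in graph[u]:            # relax all edges leaving the scanned node
                if d + w < distance[v]:
                    distance[v] = d + w
                    parent[v] = u
                    heapq.heappush(pq, (distance[v], v))
        return distance, parent

For the fire-brigade example sketched earlier, dijkstra(graph, "base") would return the distance of every crossing from the base together with the parent pointers of a shortest-path tree.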