CS-1996-05

Range Searching [1]

Pankaj K. Agarwal [2]
Department of Computer Science
Duke University
Durham, North Carolina 27708-0129

September 8, 1996

[1] Work on this paper has been supported by National Science Foundation Grant CCR-93-01259, by an Army Research Office MURI grant DAAH04-96-1-0013, by a Sloan fellowship, by an NYI award, and by matching funds from Xerox Corporation.
[2] Department of Computer Science, Duke University, [email protected]

RANGE SEARCHING

Pankaj K. Agarwal

INTRODUCTION

Range searching is one of the central problems in computational geometry, because it arises in many applications and a wide variety of geometric problems can be formulated as range-searching problems. A typical range-searching problem has the following form. Let $S$ be a set of $n$ points in $\mathbb{R}^d$, and let $\mathcal{R}$ be a family of subsets of $\mathbb{R}^d$; elements of $\mathcal{R}$ are called ranges. We wish to preprocess $S$ into a data structure so that for a query range $R \in \mathcal{R}$, the points in $S \cap R$ can be reported or counted efficiently. Typical examples of ranges include rectangles, halfspaces, simplices, and balls.

If we are interested in answering only a single query, it can be done in linear time, using linear space, by simply checking for each point $p \in S$ whether $p$ lies in the query range. However, most applications call for querying the same point set $S$ several times (and sometimes points are inserted or deleted periodically), in which case we would like to answer a query faster by preprocessing $S$ into a data structure.

Range counting and range reporting are just two instances of range-searching queries. Other examples include emptiness queries, where one wants to determine whether $S \cap R = \emptyset$, and extremal queries, where one wants to return a point with a certain property (e.g., the point with the largest $x$-coordinate). In order to encompass all different types of range-searching queries, a general range-searching problem can be defined as follows. Let $(\mathbb{S}, +)$ be a semigroup. For each point $p \in S$, we assign a weight $w(p) \in \mathbb{S}$. For a query range $R \in \mathcal{R}$, we wish to compute $\sum_{p \in S \cap R} w(p)$. For example, range-counting queries can be answered by setting $w(p) = 1$ for every $p \in S$ and choosing the semigroup to be $(\mathbb{Z}, +)$, where $+$ denotes integer addition; range-emptiness queries by setting $w(p) = 1$ and choosing the semigroup to be $(\{0, 1\}, \vee)$; and range-reporting queries by setting $w(p) = \{p\}$ and choosing the semigroup to be $(2^S, \cup)$.

Most range-searching data structures construct a family of "canonical" subsets of $S$ and, for each canonical subset $A$, store the weight $w(A) = \sum_{p \in A} w(p)$. For a query range $R$, the data structure searches for a small subfamily of pairwise-disjoint canonical subsets $A_1, \ldots, A_k$ such that $\bigcup_{i=1}^{k} A_i = R \cap S$, and then computes $\sum_{i=1}^{k} w(A_i)$. In order to expedite the search, the structure also stores some auxiliary information. Typically, the canonical subsets are organized in a tree-like data structure, each of whose nodes $v$ is associated with a canonical subset $A_v$; $v$ stores the weight $w(A_v)$ and some auxiliary information. A query is answered by searching the tree in a top-down fashion, using the auxiliary information to guide the search.
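To make the semigroup formulation and the canonical-subset scheme concrete, the following minimal one-dimensional sketch stores at every node of a balanced tree the precomputed semigroup sum of its canonical subset, and decomposes a query interval into $O(\log n)$ disjoint canonical subsets. The segment-tree layout, the class name SemigroupRangeTree1D, and the Python setting are illustrative assumptions, not a particular structure from the literature surveyed here.

```python
import bisect
import operator

class SemigroupRangeTree1D:
    """Sketch of the canonical-subset scheme in one dimension: a segment tree
    over the sorted points in which every node stores the semigroup sum w(A_v)
    of its canonical subset A_v.  A query [lo, hi] is decomposed into O(log n)
    disjoint canonical subsets, whose precomputed sums are combined with the
    semigroup operation.  (Illustrative code, not from the report.)"""

    def __init__(self, points, weight, op):
        self.op = op
        self.pts = sorted(points)
        self.n = len(self.pts)
        self.sums = [None] * (4 * self.n)   # sums[v] = weight of canonical subset of node v
        if self.n:
            self._build(1, 0, self.n - 1, weight)

    def _build(self, node, l, r, weight):
        if l == r:
            self.sums[node] = weight(self.pts[l])
            return
        m = (l + r) // 2
        self._build(2 * node, l, m, weight)
        self._build(2 * node + 1, m + 1, r, weight)
        self.sums[node] = self.op(self.sums[2 * node], self.sums[2 * node + 1])

    def _query(self, node, l, r, i, j):
        if i > r or j < l:
            return None                      # canonical subset disjoint from the query
        if i <= l and r <= j:
            return self.sums[node]           # whole canonical subset used as-is
        m = (l + r) // 2
        left = self._query(2 * node, l, m, i, j)
        right = self._query(2 * node + 1, m + 1, r, i, j)
        if left is None:
            return right
        if right is None:
            return left
        return self.op(left, right)

    def query(self, lo, hi):
        """Semigroup sum of w(p) over all points p with lo <= p <= hi (None if empty)."""
        i = bisect.bisect_left(self.pts, lo)
        j = bisect.bisect_right(self.pts, hi) - 1
        if i > j:
            return None
        return self._query(1, 0, self.n - 1, i, j)


pts = [3, 1, 4, 1, 5, 9, 2, 6]
# Range counting: w(p) = 1 over the semigroup (Z, +).
count = SemigroupRangeTree1D(pts, lambda p: 1, operator.add)
# Range emptiness: w(p) = 1 over ({0, 1}, OR); None means the range is empty.
empty = SemigroupRangeTree1D(pts, lambda p: 1, operator.or_)
# Range reporting: w(p) = {p} over (2^S, union).
report = SemigroupRangeTree1D(pts, lambda p: frozenset([p]), frozenset.union)
print(count.query(2, 5), empty.query(10, 11), report.query(2, 5))
# -> 4  None  frozenset({2, 3, 4, 5})
```

The same tree answers all three query types; only the weight assignment and the semigroup operation change, exactly as in the general formulation above.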
MODEL OF COMPUTATION

The performance of a data structure is measured by the time spent in answering a query, called the query time and denoted by $Q(n, d)$; by the size of the data structure, denoted by $S(n, d)$; and by the time spent in constructing the data structure, called the preprocessing time and denoted by $P(n, d)$. Since the data structure is constructed only once, its query time and size are more important than its preprocessing time. If a data structure supports insertion and deletion operations, the update time is also relevant. We should remark that the query time of a range-reporting query on any reasonable machine depends on the output size, so the query time for a range-reporting query consists of two parts: the search time, which depends only on $n$ and $d$, and the reporting time, which depends on $n$, $d$, and the output size. Throughout this survey paper we will use $k$ to denote the output size.

We assume that $d$ is a small fixed constant, and that the big-Oh notation hides constants depending on $d$. The dependence on $d$ of the performance of all the data structures mentioned here is exponential, which makes them unsuitable for large values of $d$. We assume that each memory cell can store $\log n$ bits.

The upper bounds will be given on the pointer-machine or RAM models, which are described in [15, 107]. The main difference between the two models is that on the pointer machine a memory cell can be accessed only through a series of pointers, while in the RAM model any memory cell can be accessed in constant time.

Most of the lower bounds will be given in the so-called semigroup model, which was originally introduced by Fredman [61] and which is much weaker than the pointer-machine or RAM model. In this model, also called the arithmetic model, a data structure is regarded as a set of precomputed sums in the underlying semigroup. The size of the data structure is the number of sums stored, and the query time is the number of semigroup operations performed (on the precomputed sums) to answer a query; the query time ignores the cost of various auxiliary operations, e.g., the cost of determining which of the precomputed sums should be added to answer a query. A weakness of the semigroup model is that it does not allow subtractions even if the weights of the points belong to a group. Therefore, we will also consider the group model, in which both additions and subtractions are allowed.

The size of any range-searching data structure is at least linear, for it has to store each point (or its weight) at least once, and the query time on any reasonable model of computation (e.g., pointer machine, RAM) is $\Omega(\log n)$ even for $d = 1$. Therefore, one would like to develop a linear-size data structure with logarithmic query time. Although near-linear-size data structures that can answer a query in polylogarithmic time are known for orthogonal range searching in any fixed dimension, no similar bounds are known for range searching with more complex ranges (e.g., simplices, disks). In such cases, one seeks a tradeoff between the query time and the size of the data structure: how fast can a query be answered using $n \log^{O(1)} n$ space, how much space is required to answer a query in $\log^{O(1)} n$ time, and what kind of tradeoff between the size and the query time can be achieved?
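The following small sketch illustrates the difference between the two arithmetic models just described; the prefix-sum layout and the function names are assumptions made for this illustration only. In one dimension, if subtraction is available (the group model), a single group operation on two precomputed prefix sums answers a range-counting query, whereas in the semigroup model the answer must be assembled purely by adding precomputed sums, as in the canonical-subset scheme above.

```python
import bisect
from itertools import accumulate

def build_prefix_sums(points, weight=lambda p: 1):
    """Group-model sketch: precompute prefix sums of the weights of the sorted points."""
    pts = sorted(points)
    prefix = [0] + list(accumulate(weight(p) for p in pts))
    return pts, prefix

def range_weight(pts, prefix, lo, hi):
    """Answer a query with one subtraction -- legal only in the group model."""
    i = bisect.bisect_left(pts, lo)
    j = bisect.bisect_right(pts, hi)
    return prefix[j] - prefix[i]

pts, prefix = build_prefix_sums([3, 1, 4, 1, 5, 9, 2, 6])
print(range_weight(pts, prefix, 2, 5))   # 4 points lie in [2, 5]
```

This is why lower bounds proved in the semigroup model, where such cancellation is forbidden, do not automatically carry over to the group model.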
The chapter is organized as follows. In Section 32.1 we review orthogonal range-searching data structures, and in Section 32.2 we review simplex range-searching data structures. Section 32.3 surveys other variants and extensions of range searching. We study intersection-searching problems, which can be regarded as a generalization of range searching, in Section 32.4. Finally, Section 32.5 deals with various optimization queries.

1 ORTHOGONAL RANGE SEARCHING

In $d$-dimensional orthogonal range searching, the ranges are $d$-rectangles, each of the form $\prod_{i=1}^{d} [a_i, b_i]$, where $a_i, b_i \in \mathbb{R}$. This is an abstraction of 'multi-key' searching; see [21, 115]. For example, the points of $S$ may correspond to employees of a company, each coordinate corresponding to a key such as age, salary, experience, etc. Queries of the form "report all employees between the ages of 30 and 40 who earn more than $30,000 and who have worked for more than 5 years" can be formulated as orthogonal range-reporting queries. Because of its numerous applications, orthogonal range searching has been studied extensively for the last 25 years. A survey of earlier results can be found in the books by Mehlhorn [88] and Preparata and Shamos [99]. In this section we review the more recent data structures and the lower bounds.

GLOSSARY

EPM: A pointer machine with the + operation.

APM: A pointer machine with basic arithmetic and shift operations.

Faithful semigroup: A semigroup $(\mathbb{S}, +)$ is called faithful if for each $n > 0$, for any $T_1, T_2 \subseteq \{1, \ldots, n\}$ with $T_1 \neq T_2$, and for every sequence of integers $\alpha_i, \beta_j > 0$ ($i \in T_1$, $j \in T_2$), there are $s_1, s_2, \ldots, s_n \in \mathbb{S}$ such that
$$\sum_{i \in T_1} \alpha_i s_i \neq \sum_{j \in T_2} \beta_j s_j.$$
Notice that $(\mathbb{R}, +)$ is a faithful semigroup, but $(\{0, 1\}, \vee)$ is not.

UPPER BOUNDS

Most of the recent orthogonal range-searching data structures are based on range trees, introduced by Bentley [20].
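As a concrete companion to the range trees just mentioned, the sketch below gives a small two-dimensional range tree for rectangle counting; the dictionary-based node layout and the class name RangeTree2D are assumptions made for this illustration, not Bentley's original formulation. A primary balanced tree on the $x$-coordinates stores at each node the sorted $y$-coordinates of the points in its subtree; a rectangle query decomposes the $x$-range into $O(\log n)$ canonical nodes and binary searches each stored $y$-list, giving $O(\log^2 n)$ query time with $O(n \log n)$ space.

```python
import bisect

class RangeTree2D:
    """Sketch of a 2-D range tree for orthogonal range counting: a balanced
    tree over the points sorted by x, where each node covers a contiguous
    slice and stores the sorted y-coordinates of that slice (its associated
    structure).  Illustrative code, not from the report."""

    def __init__(self, points):
        self.root = self._build(sorted(points))          # sort by x

    def _build(self, pts):
        if not pts:
            return None
        node = {
            "xlo": pts[0][0],                            # smallest x in the slice
            "xhi": pts[-1][0],                           # largest x in the slice
            "ys": sorted(y for _, y in pts),             # associated y-structure
            "left": None,
            "right": None,
        }
        if len(pts) > 1:
            mid = len(pts) // 2
            node["left"] = self._build(pts[:mid])
            node["right"] = self._build(pts[mid:])
        return node

    def count(self, x1, x2, y1, y2):
        """Number of stored points inside the rectangle [x1, x2] x [y1, y2]."""
        return self._count(self.root, x1, x2, y1, y2)

    def _count(self, node, x1, x2, y1, y2):
        if node is None or node["xhi"] < x1 or node["xlo"] > x2:
            return 0                                     # slice disjoint from query in x
        if x1 <= node["xlo"] and node["xhi"] <= x2:
            ys = node["ys"]                              # canonical node: binary search in y
            return bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
        return (self._count(node["left"], x1, x2, y1, y2) +
                self._count(node["right"], x1, x2, y1, y2))


tree = RangeTree2D([(1, 2), (2, 7), (3, 3), (5, 5), (6, 1), (8, 4)])
print(tree.count(2, 6, 2, 6))    # points (3, 3) and (5, 5) lie in the rectangle -> 2
```

Each level of the primary tree stores every $y$-coordinate once, which is where the $O(n \log n)$ space bound comes from; replacing the per-node count by a semigroup sum of point weights recovers the general framework of the introduction.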