Advanced Data Structures

Anubhav Baweja
May 19, 2020

1 Introduction

The design of data structures is important for storing and retrieving information efficiently. Every data structure supports a set of operations within some time bounds. For instance, balanced binary search trees such as AVL trees support find, insert, delete, and other operations in O(log n) time, where n is the number of elements in the tree. In this report we discuss two data structures that support the same operations but in different time complexities. The problem they solve is called the fixed-universe predecessor problem, and the two data structures we will be looking at are van Emde Boas trees and fusion trees.

2 Setting the Stage

For this problem, we make the fixed-universe assumption: the only elements we care about are w-bit integers. Additionally, we will be working in the word RAM model, so we can assume that w ≥ log n, where n is the size of the problem, and that operations such as addition and subtraction on w-bit words take O(1) time. These are fair assumptions to make, since they reflect the restrictions and advantages that real computers offer. So now, given the data structure T, we want to support the following operations:

1. insert(T, a): insert integer a into T. If it already exists, do not duplicate.
2. delete(T, a): delete integer a from T, if it exists.
3. predecessor(T, a): return the largest b in T such that b ≤ a, if one exists.
4. successor(T, a): return the smallest b in T such that b ≥ a, if one exists.

Note that a balanced binary search tree can support all these operations in O(log n) time, so we will try to do better here.

3 Van Emde Boas Trees

If we are given that the size of the universe is u, then using vEB trees we can support all the given operations in O(log log u) time [1]. If we are considering all integers of word length w, then u = 2^w, so our time bound becomes O(log w). When the word length is polynomial in log n (in particular w = Θ(log n)), this is O(log log n), significantly better than the complexity for binary search trees. In order to motivate this complexity, we need a recurrence that solves to O(log log u). The classic example of such a recurrence is T(u) = T(√u) + O(1), so we will strive to get to that point. But first let's build this data structure incrementally, in steps.

3.1 The first solution

A naive thing we could do is just maintain a bit vector over all possible elements of our universe. This gives us O(1) inserts and deletes, but predecessor and successor can be as bad as O(u). However, we can make the following optimization: we store another bit vector of half the size, where each entry is the OR of two adjacent entries of the first. Then we combine adjacent entries of this bit vector to get another one, and so on.

Figure 1: The data structure for the set S = {1, 2, 3, 7} where u = 8.

It is clear that we can do insert and delete in O(log u) time with this modification (just update the ancestor blocks, and also maintain a counter for the number of elements present in each range), but now we can also do the predecessor and successor operations in that time. We first find the leaf at the corresponding position, and do the rest in 2 phases:

1. Up phase: we keep going up until we enter a node from the left side such that the right child of the node is also 1.
2. Down phase: from there we go down the left child if it contains a 1, otherwise we go down the right child.

For example, if we want the successor of 1 in Figure 1, then we go up until the 0-3 block, since its right child also contains a 1, and from there we go down to the 2 block, so we get that the successor of 1 is 2. This is particularly nice because we see that the hard case for successor(T, a) is when a is already in T: otherwise we can make the search for a the Down phase itself, bypassing the Up phase completely.
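To make the two phases concrete, here is a minimal Python sketch of this layered bit-vector structure. It is ours, not from the report: the class and method names are illustrative, delete and the element counter are omitted, and successor implements the Up/Down-phase walk, which finds the smallest element strictly greater than a (the hard case discussed above).

```python
class LayeredBitVector:
    """Hypothetical sketch of Section 3.1: log(u) + 1 bit vectors, each the
    pairwise OR of the one below. All operations run in O(log u) time."""

    def __init__(self, u):
        # layers[0] has size u (a power of two); layers[k] has size u / 2^k.
        self.layers = []
        size = u
        while size >= 1:
            self.layers.append([0] * size)
            size //= 2

    def insert(self, a):
        # Set the leaf bit and update all O(log u) ancestor blocks.
        for layer in self.layers:
            layer[a] = 1
            a //= 2

    def successor(self, a):
        # Up phase: climb until we sit at a left child whose right sibling is 1.
        k = 0
        while k < len(self.layers) - 1:
            if a % 2 == 0 and self.layers[k][a + 1] == 1:
                a += 1  # hop to the right sibling, which contains the answer
                break
            a //= 2
            k += 1
        else:
            return None  # climbed past the root: no strictly greater element
        # Down phase: prefer the left child whenever it contains a 1.
        while k > 0:
            k -= 1
            a *= 2
            if self.layers[k][a] == 0:
                a += 1
        return a

bv = LayeredBitVector(8)
for x in (1, 2, 3, 7):
    bv.insert(x)
print(bv.successor(1))  # 2, matching the Figure 1 walk-through
```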
3.2 Motivation for vEB trees

Note that the reason the above solution has O(log u) complexity is that we divide the entire set into 2 parts at every layer: the recurrence we are implicitly solving is T(u) = T(u/2) + O(1). Since we want to move towards T(u) = T(√u) + O(1) instead, let's divide the set into √u parts of size √u. Therefore every vEB stores √u many vEBs of size √u each, and the total height of the tree is O(log log u).

When we divided the set into 2 halves, we could just OR the results stored in the 2 halves to compute the result for the whole set. This is because at the end of the Up phase there was always only one place to look: the right child of the node. However, now there might be as many as √u − 1 options to pick from, and we cannot go through each one of them, since that would destroy our complexity. So we need to figure out some other way to do this.

Here is the super clever part: deciding which child to go down on is like solving the successor problem again. Since the node has √u children, we can enumerate them from 0 to √u − 1, and while coming up from child/block i, we can just ask for the successor of i on a bit set that we maintain over these children. And this can itself be solved with a vEB tree of size √u. So not only does a vEB contain √u child vEBs with √u elements each, but we also have an extra vEB called the "summary", which also contains √u elements (although these elements are artificial in the sense that they correspond to the child/block numbers that we have assigned to the children).

3.3 Cleanup

This is all really cool, but we need to tie up some loose ends. In particular, do we really query this summary vEB at every node in the Up phase? Note that if we do, then the recursive formula is no longer T(u) = T(√u) + O(1). So we can only query the summary at most a constant number of times in the Up phase. In fact, if we just store the maximum stored in each vEB, then we only need to query the summary once: at the end, when we flip to the Down phase. In the Up phase, we check whether the queried integer is equal to the max. If it is, we continue going up; otherwise we now know that there exists a greater integer in the set that lies within this vEB, so we query the summary and start the Down phase with the appropriate block. Note that in order to support the predecessor operation, we need to do a similar thing and store the minimum. With these additions to our data structure, we have finally achieved O(log log u) time predecessor and successor operations.
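Here is a minimal Python sketch of the structure as described so far, with successor making at most one recursive call per level; insert is sketched after Section 3.4 below. The code is ours, under simplifying assumptions: u has the form 2^(2^k) so that √u is an integer at every level, all clusters are allocated eagerly (O(u) space; real implementations allocate lazily), and the minimum of each vEB is stored only at its top, never inside a cluster.

```python
from math import isqrt

class VEB:
    def __init__(self, u):
        self.u = u
        self.min = None  # min is kept here only, never pushed into a cluster
        self.max = None
        if u > 2:
            self.sqrt = isqrt(u)
            self.summary = VEB(self.sqrt)  # tracks which clusters are non-empty
            self.clusters = [VEB(self.sqrt) for _ in range(self.sqrt)]

    def high(self, x): return x // self.sqrt   # child/block number of x
    def low(self, x):  return x % self.sqrt    # position of x inside its block
    def index(self, h, l): return h * self.sqrt + l

    def successor(self, x):
        # Smallest element strictly greater than x, or None.
        if self.u == 2:
            return 1 if x == 0 and self.max == 1 else None
        if self.min is not None and x < self.min:
            return self.min
        h, l = self.high(x), self.low(x)
        # "Up phase" shortcut: compare against the max of x's own block.
        if self.clusters[h].max is not None and l < self.clusters[h].max:
            return self.index(h, self.clusters[h].successor(l))
        # Otherwise one summary query picks the next non-empty block,
        # and that block's min finishes the Down phase in O(1).
        nh = self.summary.successor(h)
        if nh is None:
            return None
        return self.index(nh, self.clusters[nh].min)
```

Note that each call does O(1) work plus exactly one recursive call on a structure of size √u (either into a cluster or into the summary, never both), which is precisely T(u) = T(√u) + O(1).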
3.4 Insert and Delete

Now we just need to make sure insertions and deletions can still be supported in O(log log u) time:

1. insert(V, a): Starting at the root, we can figure out which child vEB to go down into in O(1) time. If the number is smaller than the minimum or larger than the maximum, we update that field in O(1). Now there are two cases:

• The child vEB we want to insert a into is not empty. In this case we do not need to update the summary at all, and we can just proceed by inserting a into the child, which takes T(√u) time.

• The child vEB we want to insert a into is empty. In this case we need to enter the child's index into the summary, which takes T(√u) time. Now one might think that we also need to insert a into the child vEB recursively, taking another T(√u) time and breaking our recursive formula. However, the child vEB is empty to begin with, so we have already reached our 'base case' in a way: inserting into an empty vEB only sets its minimum and maximum, in O(1) time. The remaining cost of such insertions is at most O(log log u) in total, because that is the height of the tree, so the total cost is T(u) + O(log log u) where T(u) = T(√u) + O(1), and we are good.

2. delete(V, a): Note that there is nothing to be done if the tree is empty. Just like insert, we now need to consider a few cases:

• There is a single element in V. In this case the min and max are equal (which can be checked in O(1) time), so we just check whether they are equal to a.
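To tie the insert analysis to code, here is a hedged sketch of insert, continuing the hypothetical VEB class from Section 3.3 (the class is re-opened by subclassing so the snippet runs as written). The two cases from the list above are marked in the comments, and the 'base case' trick is the first branch.

```python
class VEB(VEB):  # illustrative: extend the sketch above with insert
    def insert(self, x):
        if self.min is None:
            self.min = self.max = x    # empty vEB: the O(1) 'base case'
            return
        if x == self.min:
            return                     # already present, do not duplicate
        if x < self.min:
            self.min, x = x, self.min  # x becomes the new min; the old min
                                       # is what gets pushed down below
        if x > self.max:
            self.max = x
        if self.u > 2:
            h, l = self.high(x), self.low(x)
            if self.clusters[h].min is None:
                # Empty-child case: the T(sqrt(u)) work goes into the summary;
                # the child insert below then hits the O(1) base case.
                self.summary.insert(h)
            # Non-empty-child case: no summary update, one T(sqrt(u)) call.
            self.clusters[h].insert(l)

v = VEB(16)
for x in (1, 2, 3, 7):
    v.insert(x)
print(v.successor(1))  # 2
print(v.successor(3))  # 7
```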