Optimal Bounds for the Predecessor Problem
Paul Beame*
Computer Science and Engineering
University of Washington
Seattle, WA, USA 98195-2350
[email protected]

Faith E. Fich
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 1A4
[email protected]

*Research supported by the National Science Foundation under grants CCR-9303017 and CCR-9800124.

Abstract

We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed efficiently stored set. Our algorithms are for the unit-cost word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds are proved in a much stronger communication game model, but they apply to the cell probe and RAM models and to both static and dynamic predecessor problems.

1 Introduction

Many problems in computer science involve storing a set S of integers and performing queries on that set. The most basic query is the membership query, which determines whether a given integer x is in the set. A predecessor query returns the predecessor pred(x, S) of x in S, that is, the largest element of the set S that is less than x. If there is no predecessor (which is the case when x is smaller than or equal to the minimum element of S), then a default value, for example, 0, is returned.

Predecessor queries can be used to efficiently perform range searches (i.e., find all elements of S between given integers x and x'). They can also be used to obtain certain information about arbitrary integers (for example, their rank in S) that is only stored for the elements of S. Priority queues can be implemented using data structures that support insertion, deletion, and predecessor (or, equivalently, successor) queries.

The static dictionary problem is to store a fixed set and perform membership queries on it; the static predecessor problem also allows predecessor queries. If insertions and deletions may also be performed on the set, we have the dynamic dictionary problem and the dynamic predecessor problem, respectively.

The complexities of searching (for example, performing membership or predecessor queries) and sorting have long been well understood, under the assumption that elements are abstract objects which may only be compared. But many efficient algorithms, including hashing, bucket sort, and radix sort, perform word-level operations, such as indirect addressing using the elements themselves or values derived from them.

Often, such algorithms are applied only when the number of bits to represent individual elements is very small in comparison with the number of elements in the set. Otherwise, those algorithms consume huge amounts of space. For example, van Emde Boas trees [25, 24] can be used to perform predecessor queries on any set of integers from a universe of size N in O(log log N) time, but they require Ω(N) space.

However, there have been important algorithmic breakthroughs showing that such techniques have more general applicability. For example, with two-level perfect hashing [16], any n-element set can be stored in O(n) space and constant-time membership queries can be performed. Fusion trees fully exploit unit-cost word-level operations and the fact that data elements need to fit in words of memory to store static sets of size n in O(n) space and perform predecessor queries in O(√(log n)) time [18].

For the static predecessor problem, it has been widely conjectured that the time complexities achieved by van Emde Boas trees and fusion trees are optimal for any data structure using a reasonable amount of space. We prove that this is NOT the case. Specifically, we construct a new data structure that stores n-element sets of integers from a universe of size N in n^{O(1)} space and performs predecessor queries in O(min{log log N / log log log N, √(log n / log log n)}) time.
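As a concrete baseline for the query being studied, pred(x, S) on a sorted array can be computed with binary search in O(log n) comparisons. The following is an illustrative Python sketch, not code from the paper; it returns the default value 0 when no predecessor exists, matching the convention above:

```python
import bisect

def predecessor(sorted_set, x):
    """Largest element of sorted_set strictly less than x.

    sorted_set must be in increasing order.  Returns 0 (the default
    value used in the text) when x has no predecessor in the set.
    """
    i = bisect.bisect_left(sorted_set, x)  # count of elements < x
    return sorted_set[i - 1] if i > 0 else 0
```

This is the comparison-model bound that the word-level data structures discussed below improve upon.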
Using recent generic transformations of Andersson and Thorup [4, 6], the algorithm can be made dynamic and the space improved to O(n).

We also obtain matching lower bounds, improving Miltersen's Ω(√(log log N)) and Ω((log n)^{1/3}) lower bounds in the powerful communication game model [21, 22]. The key to our improved lower bounds is to use a more complicated input distribution. Unfortunately, this leads to a more complicated analysis. However, the understanding we obtained from identifying and working with a hard distribution in the communication game model directly led to the development of our algorithm. This approach has also been used to obtain an Ω(log log d / log log log d) lower bound for the approximate nearest neighbour problem over the universe {0, 1}^d [9].

Copyright ACM 1999 1-58113-067-8/99/05...$5.00

2 Related Work

The simplest model in which the dictionary and predecessor problems have been considered is the comparison model, in which the only operations allowed that involve x are comparisons between x and elements of S. Using binary search, predecessor and membership queries can be performed in O(log n) time on a static set of size n stored in a sorted array. Balanced binary trees (for example, red-black trees, AVL trees, or 2-3 trees) can be used in this model to solve the dynamic dictionary and predecessor problems in O(log n) time. A standard information theory argument proves that log₂ n comparisons are needed in the worst case even for performing membership queries in a static set.

One can obtain faster algorithms when x or some function computed from x can be used for indirect addressing. If S is a static set from a universe of size N, one can trivially perform predecessor queries using N words of memory in constant time: simply store the answer to each possible query x in a separate location. This works for more general queries and, if, as with membership queries, the number of bits k needed to represent each answer is smaller than the number of bits b in a memory word, the answers can be packed b/k to a word, for a total of O(Nk/b) words. When the only queries are membership queries, updates to the table can also be performed in constant time.

The original two-level perfect hashing algorithm [16] used multiplication and division of log₂ N-bit words to compute the hash functions. Recently it was shown that hash functions of the form h_a(x) = (ax mod 2^k) div 2^(k-l), for l ≤ ⌈log₂ n⌉, suffice for implementing the two-level perfect hashing data structure [11, 13]. Notice that the evaluation of such functions does not depend on constant-time division instructions; a right shift suffices. These algorithms can be made dynamic with the same constant time for membership queries and with constant expected amortized update time [13, 14, 12], by rehashing when necessary (using randomly chosen hash functions). For the static problem, Raman [23] proves that it is possible to choose such hash functions deterministically in O(n² log N) time. Although hashing provides an optimal solution to the static dictionary problem, it is not directly applicable to the predecessor problem.

Another data structure, van Emde Boas (or stratified) trees [25, 24], is useful for both static and dynamic versions of the predecessor problem. These trees support membership queries, predecessor queries, and updates in O(log log N) time. The set is stored in a binary trie and binary search is performed on the log₂ N bits used to represent individual elements. The major drawback is the use of an extremely large amount of space: Ω(N) words.

Willard's x-fast trie data structure [26] uses the same approach, but reduces the space to O(n log N) by using perfect hashing to represent the nodes at each level of the trie. In y-fast tries, the space is further reduced to O(n) by storing only Θ(n/log N) approximately evenly spaced elements of the set S in the x-fast trie. A binary search tree is used to represent the subset of O(log N) elements of S lying between each pair of consecutive elements in the trie. Both of Willard's data structures perform membership and predecessor queries in worst-case O(log log N) time, but use randomization (for rehashing) to achieve expected O(log log N) update time.

If the universe size N is significantly less than 2^b, where b is the number of bits in a word, then packed B-trees [18, 3, 7, 23] can be time and space efficient. Specifically, using branching factor B < b/(1 + log₂ N), insertions, deletions, and membership and predecessor queries can be performed in O(log n / log B) steps using O(n/B) words.

The most interesting data structures are those that work for an arbitrary universe whose elements can fit in a single word of memory and use a number of words that is polynomial in n, or ideally O(n). Fusion trees, introduced by Fredman and Willard [18], use packed B-trees with branching factor O(log n) to store approximately one out of every (log₂ n)⁴ elements of the set S. The effective universe size is reduced at each node of the B-tree by using carefully selected bit positions to obtain compressed keys representing the elements stored at the node. As above, binary search trees are used to store the roughly equal-sized sets of intermediate elements, and a total of O(n) space is used. Membership and predecessor queries take O(log n / log log n) time. Updates take O(log n / log log n) amortized time. (The fact that this bound is amortized arises from the O((log n)⁴) time bound to update a node in the B-tree.) This data structure forms the
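The shift-evaluated hash functions used for two-level perfect hashing can be sketched as follows. This is an illustrative Python sketch, not code from the paper; the function and parameter names are assumptions, and it implements the multiplicative form h_a(x) = (ax mod 2^k) div 2^(k−l), which maps a k-bit key to an l-bit bucket index using only a multiplication, a mask, and a right shift:

```python
def multiplicative_hash(a, x, k, l):
    """Hash a k-bit key x to an l-bit value using odd multiplier a.

    Computes (a * x mod 2^k) div 2^(k - l).  The mod is a bit mask and
    the div is a right shift, so no division instruction is needed.
    """
    return ((a * x) & ((1 << k) - 1)) >> (k - l)
```

For example, multiplicative_hash(5, 7, 8, 3) computes (35 mod 256) >> 5 = 1; in the two-level scheme, a is chosen (randomly, or deterministically as in Raman [23]) so that the resulting bucket sizes permit collision-free second-level tables.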