RESEARCH CONTRIBUTIONS: Algorithms and Data Structures

Dynamic Hash Tables

Per-Åke Larson
Daniel Sleator, Editor

Communications of the ACM, April 1988, Volume 31, Number 4
© 1988 ACM 0001-0782/88/0400-0446 $1.50
ABSTRACT: Linear hashing and spiral storage are two dynamic hashing schemes originally designed for external files. This paper shows how to adapt these two methods for hash tables stored in main memory. The necessary data structures and algorithms are described, the expected performance is analyzed mathematically, and actual execution times are obtained and compared with alternative techniques. Linear hashing is found to be both faster and easier to implement than spiral storage. Two alternative techniques are considered: a simple unbalanced binary tree and double hashing with periodic rehashing into a larger table. The retrieval time of linear hashing is similar to double hashing and substantially faster than a binary tree, except for very small trees. The loading times of double hashing (with periodic reorganization), a binary tree, and linear hashing are similar. Overall, linear hashing is a simple and efficient technique for applications where the cardinality of the key set is not known in advance.

1. INTRODUCTION

Several dynamic hashing schemes for external files have been developed over the last few years [2, 4, 9, 10]. These schemes allow the file size to grow and shrink gracefully according to the number of records actually stored in the file. Any one of the schemes can be used for internal hash tables as well. However, the two methods best suited for internal tables seem to be linear hashing [9] and spiral storage [10]: they are easy to implement and use little extra storage. This paper shows how to adapt these two methods to internal hash tables, mathematically analyzes their expected performance, and reports experimental performance results. The performance is also compared with that of more traditional solutions for handling dynamic key sets. Both methods are found to be efficient techniques for applications where the cardinality of the key set is not known in advance. Of the two, linear hashing is faster and also easier to implement.

An inherent characteristic of hashing techniques is that a higher load on the table increases the cost of all basic operations: insertion, retrieval and deletion. If the performance of a hash table is to remain within acceptable limits when the number of records increases, additional storage must somehow be allocated to the table. The traditional solution is to create a new, larger hash table and rehash all the records into the new table. The details of how and when this is done can vary. Linear hashing and spiral storage allow a smooth growth. As the number of records increases, the table grows gradually, one bucket at a time. When a new bucket is added to the address space, a limited local reorganization is performed. There is never any total reorganization of the table.
2. LINEAR HASHING

Linear hashing was developed by W. Litwin in 1980 [9]. The original scheme is intended for external files. Several improved versions of linear hashing have been proposed [5, 7, 13, 14]. However, for internal hash tables their more complicated address calculation is likely to outweigh their benefits.

Consider a hash table consisting of N buckets with addresses 0, 1, ..., N - 1. Linear hashing increases the address space gradually by splitting the buckets in a predetermined order: first bucket 0, then bucket 1, and so on, up to and including bucket N - 1. Splitting a bucket involves moving approximately half of the records from the bucket to a new bucket at the end of the table. The splitting process is illustrated in Figure 1 for an example file with N = 5. A pointer p keeps track of the next bucket to be split. When all N buckets have been split and the table size has doubled to 2N, the pointer is reset to zero and the splitting process starts over again. This time the pointer travels from 0 to 2N - 1, doubling the table size to 4N. This expansion process can continue as long as required.

[Figure 1: Illustration of the expansion process of linear hashing]
Figure 2 illustrates the splitting of bucket 0 for an example table with N = 5. Each entry in the hash table contains a single pointer, which is the head of a linked list containing all the records hashing to that address. When the table is of size 5, all records are hashed by h0(K) = K mod 5. When the table size has doubled to 10, all records will be addressed by h1(K) = K mod 10. However, instead of doubling the table size immediately, we expand the table one bucket at a time as required. Consider the keys hashing to bucket 0 under h0(K) = K mod 5. To hash to 0 under h0, the last digit of the key must be either 0 or 5. Under the hashing function h1(K) = K mod 10, keys with a last digit of 0 still hash to bucket 0, while those with a last digit of 5 hash to bucket 5. Note that none of the keys hashing to buckets 1, 2, 3 or 4 under h0 can possibly hash to bucket 5 under h1. Hence, to expand the table, we allocate a new bucket (with address 5) at the end of the table, increase the pointer p by one, and scan through the records of bucket 0, relocating to the new bucket those hashing to 5 under h1(K) = K mod 10.

[Figure 2: An example of splitting a bucket. Hash functions: h0(K) = K mod 5, h1(K) = K mod 10]

The current address of a record can be found as follows. Given a key K, we first compute h0(K). If h0(K) is less than the current value of p, the corresponding bucket has already been split, otherwise not. If the bucket has been split, the correct address of the record is given by h1(K). Note that when all the original buckets (buckets 0-4) have been split and the table size has increased to 10, all records are addressed by h1.
Returning to the general case, the address computation can be implemented in several ways. For internal hash tables the following solution appears to be the simplest. Let g be a normal hashing function producing addresses in some interval [0, M], where M is sufficiently large. To compute the address of a record we use the following sequence of hashing functions:

    h_j(K) = g(K) mod (N × 2^j),   j = 0, 1, ...

where N is the minimum size of the hash table. If N is of the form 2^k, the modulo operation reduces to extracting the last k + j bits of g(K). The hashing function g(K) can also be implemented in several ways. Functions of the type

    g(K) = (cK) mod M,

where c is a constant and M is a large prime, have experimentally been found to perform well [12]. Different hashing functions are easily obtained by choosing different values for c and M, thus providing a "tuning" capability. By choosing c = 1, the classical division-remainder function is obtained. There is also some theoretical justification for using this class of functions: as shown in [1], it comes within a factor of two of being universal.

We must also keep track of the current state of the hash table. This can be done by two variables:

    L    number of times the table size has doubled (from its minimum size N), L ≥ 0.
    p    pointer to the next bucket to be split, 0 ≤ p < N × 2^L.
When the table is expanded by one bucket, these variables are updated as follows:

    p := p + 1;
    if p = N × 2^L then begin
        L := L + 1;
        p := 0;
    end;

Given a key K, the current address of the corresponding record can be computed simply as

    addr := h_L(K);
    if addr < p then addr := h_{L+1}(K);

We fix a lower and an upper bound on the overall load factor and expand (contract) the table whenever the overall load factor goes above (below) the upper (lower) bound. This requires that we keep track of the current number of records in the table, in addition to the state variables L and p.

2.1 Data Structure and Algorithms

The basic data structure required is an expanding and contracting pointer array. However, few programming languages directly support dynamically growing arrays. The simplest way to implement such an array is to use a two-level data structure, as illustrated in Figure 3. The array is divided into segments of fixed size. When the array grows, new segments are allocated as needed. When the array shrinks and a segment becomes superfluous, it can be freed. A directory keeps track of the start address of each segment in use. The only statically allocated structure is the directory.

[Figure 3: Data structure implementing a dynamic array]
