
Deterministic and efficient minimal perfect hashing schemes

Leandro M. Zatesko (Departamento de Informática, UFPR, Brazil, PO Box 19081, [email protected])
Jair Donadelli (Centro de Matemática, Computação e Cognição, UFABC, Brazil, CEP 09210-170, [email protected])

Abstract: This paper presents deterministic versions of the hashing schemes of Botelho, Kohayakawa and Ziviani (2005) and Botelho, Pagh and Ziviani (2007), and also proves a statement left as an open problem in the former work, related to the correctness proof and to the complexity analysis of their scheme. Our deterministic variants have been implemented and executed over datasets with up to 25,000,000 keys, and the results show equivalent performance between the deterministic and the original randomized algorithms.

Resumo: In this work we present deterministic versions of the hashing schemes of Botelho, Kohayakawa and Ziviani (2005) and of Botelho, Pagh and Ziviani (2007). We also answer a problem left open in the former work, related to the correctness proof and to the complexity analysis of the scheme proposed there. The deterministic versions developed were implemented and tested over datasets with up to 25,000,000 keys, and the results proved equivalent to those of the original randomized algorithms.

1 Introduction

A minimal perfect hashing scheme, as defined in [1, 2, 3], is an algorithm that, given a set S with n keys from a universe U, constructs a hash function h: U → {0,...,n−1} which maps S without collision onto {0,...,n−1}. We are interested only in hashing schemes whose outputs are hash functions with O(1) lookup time. For our purposes, every key x is assumed to be a string of at most L symbols taken from a finite alphabet Σ, for a fixed constant L. As an example, the keys can be URLs of length at most L which we are trying to map to memory addresses, and the alphabet would then contain decimal digits, Latin letters and some special characters like / and ?.

Hashing is a widely studied topic in Computer Science. Mapping n objects bijectively to hash table addresses {0,...,n−1} is a very common problem, and minimal perfect hash functions, in particular, are useful in situations related to “efficient storage and fast retrieval of items from static sets, such as words in natural languages, reserved words in programming languages or interactive systems, universal resource locations (URLs) in Web search engines, or item sets in data mining techniques” [1].

Derandomization is an important subject of Computational Complexity and a way to understand whether randomness in algorithms is necessary. Formally, a central problem about the complexity of randomized algorithms is “P = BPP?”. For example, the problem of polynomial identity testing is in BPP, which means that it is solvable by a polynomial-time Monte Carlo algorithm [4], an algorithm whose answer can be wrong with bounded probability. Derandomizing polynomial identity tests has deep consequences in Computational Complexity [5]. On the other hand, the celebrated polynomial-time deterministic algorithm for primality testing [6] is a successful derandomization of a Monte Carlo polynomial-time algorithm [7]. Differently, the hashing schemes we study are known as Las Vegas algorithms [4], which means that their answer is always right, but their time complexity is a random variable. Problems solvable by Las Vegas algorithms with expected polynomial-time complexity form the class ZPP ⊆ BPP.
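To make the Las Vegas pattern concrete, the toy program below keeps drawing random seeds until a purely illustrative hash function places a handful of keys into 2n slots without collision; the answer it returns is always correct, and only the number of attempts is random. The hash function and the keys are our own illustration and bear no relation to the schemes discussed in this paper.

#include <stdio.h>
#include <stdlib.h>

/* Purely illustrative string hash; not taken from any of the cited schemes. */
static unsigned toy_hash(const char *key, unsigned seed, unsigned m)
{
    unsigned h = seed;
    for (; *key != '\0'; ++key)
        h = h * 31u + (unsigned char) *key;
    return h % m;
}

int main(void)
{
    const char *keys[] = { "lang/c", "lang/ml", "lang/ada", "web/url?", "web/cgi" };
    const unsigned n = 5, m = 2 * n;      /* map 5 keys into 10 slots */
    unsigned seed = 0, tries = 0;
    int ok = 0;

    while (!ok) {                         /* Las Vegas loop: retry until success */
        unsigned i;
        int used[10] = { 0 };             /* m == 10 here */
        ++tries;
        seed = (unsigned) rand();         /* draw a new random hash function */
        ok = 1;
        for (i = 0; i < n && ok; ++i) {
            unsigned slot = toy_hash(keys[i], seed, m);
            if (used[slot])
                ok = 0;                   /* collision: this seed is rejected */
            used[slot] = 1;
        }
    }
    /* The result is always a collision-free seed; only `tries' is random. */
    printf("perfect seed %u found after %u tries\n", seed, tries);
    return 0;
}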
We shall present derandomized versions of the hashing schemes of Botelho, Kohayakawa and Ziviani (2005) and of Botelho, Pagh and Ziviani (2007), from now on referred to as BMZ and BDZ, respectively. These schemes are Las Vegas algorithms that, given a set with n keys, construct in expected time O(n) a hash function which maps the keys without collision to the set {0,...,n−1} and can be evaluated in O(1) time. The problem of constructing a minimal perfect O(1)-evaluation-time hash function for a given set with n keys is, of course, in P [8]; therefore our work just shows that these very practical algorithms did not need to be randomized to achieve a good average performance. Actually, we derandomize the schemes in a very simple manner, and the resulting algorithms are schemes of O(n) average-case time complexity. Additionally, we also give a proof for a question left open in [1] (Equation 2 below), closing the complexity analysis and the correctness proof of the BMZ scheme.

In what follows, unless stated otherwise, we use the term graph to refer to a simple graph, that is, an unweighted, undirected graph containing no loops or multiple edges. The term critical subgraph of a graph refers to its maximal subgraph with minimum degree δ ≥ 2. The term parent of a vertex in a graph search is used in the traditional sense, as found in [9]; in any graph search, the term tree edge refers to an edge in which one endpoint is the parent of the other, and the term back edge refers to an edge which is not a tree edge. Also, we write a(n) ≈ b if a(n) → b as n → ∞.

This paper is organized as follows. In Section 2 we give a brief review of related work; in Section 3 we present deterministic versions of the schemes BMZ and BDZ; in Section 4 we prove a graph-theoretical result related to the complexity analysis and to the correctness proof of the BMZ scheme; in Section 5 we give performance comparisons between the randomized and derandomized schemes; and finally, in Section 6, we close with some considerations about hashing and our results.

2 Related works

It is known that finding a perfect hash function for a set with n keys cannot be done in o(n) time [3], although many O(n)-time perfect hashing schemes are known from the literature [2, 10, 11, 12]. For example, the randomized hashing scheme presented in [3], which maps the n keys to the edges of an acyclic graph on 2.09n vertices and then uses a depth-first search to label the edges with the values 1,...,n, constructs a minimal perfect hash function in O(n) expected time. The constructed hash function requires O(n log n) bits to be stored, an amount of space proportional to the size of the graph. This important hashing scheme inspired the BMZ scheme [1], which, by allowing the graph to be cyclic, reduces the number of vertices to 1.15n.

In 1984 Fredman and Komlós [13] proved that n lg e + lg lg u + O(log n) is a lower bound for the space of O(1)-evaluation-time hash functions built by a minimal perfect scheme on a universe with u objects. Remark that this means about lg e ≈ 1.443 bits per key. In addition, Mehlhorn [14] presented in 1984 a theoretical scheme with which he proved the lower bound to be tight; his scheme, however, was an exponential-time algorithm.
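As a rough illustration of what this lower bound means in practice (the arithmetic below is ours and merely instantiates the bound; it is not taken from [13] or [14]), consider the largest dataset used in our experiments, n = 25,000,000 keys, and suppose the keys are strings of at most L = 100 symbols over an alphabet with |Σ| = 128 symbols, so that u ≈ |Σ|^L:

\[
  n \lg e \;\approx\; 25{,}000{,}000 \times 1.443 \;\approx\; 3.6 \times 10^{7} \ \text{bits} \;\approx\; 4.5\ \text{MB},
\]
\[
  \lg\lg u \;\approx\; \lg\lg |\Sigma|^{L} \;=\; \lg\bigl(L \lg |\Sigma|\bigr) \;=\; \lg 700 \;\approx\; 9.5\ \text{bits}.
\]

The O(log n) term contributes only on the order of lg n ≈ 25 bits (up to its hidden constant), so the lower bound is dominated by the lg e ≈ 1.443 bits per key.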
Both the scheme of [3] and the scheme of [1], although perfect, minimal, practical and efficient in time, construct hash functions represented by an undesirable amount of space, if we take into account that it is possible to have minimal perfect hashing schemes whose output hash functions require only O(n) bits [14]. Even though the space of the latter is smaller than that of the former, it does not escape the asymptotic O(n log n).

The practical, minimal, perfect and O(n)-expected-time BDZ scheme [2], presented in 2007, not only achieves O(n) space for the representation of the constructed hash function but also brings this amount down to 2.62n bits, just a little greater than 1.443n. More recently, better practical hashing schemes have been proposed [15, 16]. The one by Belazzougui, Botelho and Dietzfelbinger [15], known as CHD, generates in O(n) time a minimal perfect O(1)-evaluation-time hash function which requires just about 2.06 bits per key. The authors' experiments show that CHD is also more efficient than other schemes with respect to running time and evaluation time. Moreover, one can tune the CHD parameters to obtain better results according to each application. For example, if one does not need the hash function to be minimal, CHD can map the n keys without collision to addresses 1,...,m, where m = 1.23n, in such a way that the generated hash function requires about 1.4 bits per key. Furthermore, if one sets m = 2n, one gets 0.67 bits per key. CHD also proves very efficient for k-perfect hashing, where at most k keys can be mapped to the same address. BMZ, BDZ, CHD and other hashing schemes are implemented in the CMPH (C Minimal Perfect Hashing) library, developed and maintained at SourceForge.net by the authors themselves.

Below, for the sake of completeness, we give a short review of the BMZ and BDZ schemes, though we strongly recommend [1, 2] for more details.

2.1 BMZ

This is a hashing scheme proposed in [1] which constructs, in O(n) expected time, a minimal perfect hash function for a given set S with n keys. It maps S to the set E(G) of the n edges of a graph G on 1.15n vertices and then tries to find a way to assign labels to the vertices so that the edge labels, defined to be the sum of the endpoint labels, form exactly the set {0,...,n−1} (a sketch of the resulting lookup function is given below). Two properties of G are required:

Property P1: The critical subgraph of G, denoted by Gcrit, must be connected.
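To fix ideas, the fragment below sketches the shape of the lookup function that results from such a labelling: the key is mapped to the edge {h1(x), h2(x)} of G, and its address is the sum of the two endpoint labels, which is what makes the evaluation O(1). The struct layout, the names and the toy vertex hashes are our own illustration, not CMPH's API; only the "sum of the endpoint labels" formula follows the description of [1], and the construction of the labels g, which relies on structural properties of G such as P1, is not shown.

/* Sketch only: names and layout are ours. */
typedef struct {
    unsigned        m;      /* number of vertices of G, about 1.15 n          */
    const unsigned *g;      /* g[v]: label assigned to vertex v by the scheme */
    unsigned        seed1;  /* seeds of the two random vertex hash functions  */
    unsigned        seed2;
} bmz_sketch;

/* Illustrative stand-in for the random functions mapping a key to a vertex. */
static unsigned vertex_hash(const char *key, unsigned seed, unsigned m)
{
    unsigned h = seed;
    for (; *key != '\0'; ++key)
        h = h * 31u + (unsigned char) *key;
    return h % m;
}

/* O(1) evaluation: two vertex hashes, two table lookups, one addition.
 * The construction phase guarantees that, over the key set S, these sums
 * are exactly 0, 1, ..., n-1. */
unsigned bmz_sketch_lookup(const bmz_sketch *f, const char *key)
{
    unsigned u = vertex_hash(key, f->seed1, f->m);
    unsigned v = vertex_hash(key, f->seed2, f->m);
    return f->g[u] + f->g[v];   /* edge label = sum of its endpoint labels */
}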