
A Framework of Dynamic Data Structures for String Processing∗

Nicola Prezza
Technical University of Denmark, DTU Compute, Lyngby, Denmark
[email protected]

Abstract

In this paper we present DYNAMIC, an open-source C++ library implementing dynamic compressed data structures for string manipulation. Our framework includes useful tools such as searchable partial sums, succinct/gap-encoded bitvectors, and entropy/run-length compressed strings and FM-indexes. We prove close-to-optimal theoretical bounds for the resources used by our structures, and show that our theoretical predictions are tightly matched in practice. To conclude, we turn our attention to applications. We compare the performance of five recently-published compression algorithms implemented using DYNAMIC with that of state-of-the-art tools performing the same tasks. Our experiments show that algorithms making use of dynamic compressed data structures can be up to three orders of magnitude more space-efficient (albeit slower) than classical ones performing the same tasks.

1998 ACM Subject Classification: E.1 Data Structures
Keywords and phrases: C++, dynamic, compression, data structure, bitvector, string
Digital Object Identifier: 10.4230/LIPIcs.SEA.2017.11

1 Introduction

Dynamism is an extremely useful feature in the field of data structures for string manipulation, and it has been the subject of study in many recent works [5, 17, 24, 31, 20, 13]. These results show that – in theory – it is possible to match information-theoretic upper and lower bounds on the space of many dynamic data structure problems while still supporting queries in provably optimal time. From the practical point of view, however, many of these results rely on complicated structures that prevent them from being competitive in practice. This is due to several factors that play an important role in practice but are often poorly modeled in theory, such as cache locality, branch prediction, disk accesses, context switches, and memory fragmentation. Good implementations must take all these factors into account in order to be practical; this is the main reason why little work in this field has been done on the experimental side. An interesting and promising (but still under development) step in this direction is represented by Memoria [22], a C++14 framework providing general-purpose dynamic data structures. Other libraries are also still under development (ds-vector [7]) or have been published without publicly available code [5, 17, 2]. Practical works considering weaker dynamic queries have also appeared. In [29] the authors consider rewritable arrays of integers (no indels or partial sums are supported). In [30] practical close-to-succinct dynamic tries are described (in this case, only navigational and child-append operations are supported). To the best of our knowledge, the only working implementation of a dynamic succinct bitvector is [11].

∗ Part of this work was done while the author was a PhD student at the University of Udine, Italy. Work supported by the Danish Research Council (DFF-4005-00267).
This situation changes dramatically if the requirement of dynamism is dropped. In recent years, several excellent libraries implementing static data structures have been proposed: sdsl [12] (probably the most comprehensive, used, and tested), pizza&chili [25] (compressed indexes), sux [34], succinct [33], and libcds [18]. These libraries proved that static succinct data structures can be very practical in addition to being theoretically appealing.

In view of this gap between theoretical and practical advances in the field, in this paper we present DYNAMIC: a C++11 library providing practical implementations of some basic succinct and compressed dynamic data structures for string manipulation: searchable partial sums, succinct/gap-encoded bitvectors, and entropy/run-length compressed strings and FM-indexes. Our library has been extensively profiled and tested, and offers structures whose performance is provably close to the theoretical lower bounds (in particular, they approach succinctness and logarithmic-time queries). DYNAMIC is an open-source project and is available at [8].

We conclude by discussing the performance of five recently-published BWT/LZ77 compression algorithms [26, 28, 27] implemented with our library. On highly compressible datasets, our algorithms turn out to be up to three orders of magnitude more space-efficient than classical algorithms performing the same tasks.

2 The DYNAMIC library

The core of our library is a searchable partial sum with inserts data structure (SPSI in what follows). We start by formally defining the SPSI problem and showing how we solve it in DYNAMIC. We then proceed by describing how we use the SPSI structure as a building block to obtain the dynamic structures implemented in our library.

2.1 The Core: Searchable Partial Sums with Inserts

The Searchable Partial Sums With Inserts (SPSI) problem asks for a data structure PS maintaining a sequence s_1, ..., s_m of non-negative integers and supporting the following operations on it:

  PS.sum(i) = Σ_{j=1}^{i} s_j;
  PS.search(x): return the smallest i such that Σ_{j=1}^{i} s_j > x;
  PS.update(i, δ): update s_i to s_i + δ. δ can be negative as long as s_i + δ ≥ 0;
  PS.insert(i): insert 0 between s_{i-1} and s_i (if i = 0, insert in first position).

As discussed later, a consequence of the fact that our SPSI does not support delete operations is that the structures we derive from it do not support delete either; we plan to add this feature to our library in the future.

DYNAMIC's SPSI is a B-tree storing the integers s_1, ..., s_m in its leaves and subtree size/partial sum counters in its internal nodes. SPSI operations are implemented by traversing the tree from the root to a target leaf, using the internal nodes' counters to obtain the information needed for the traversal. The choice of employing B-trees is motivated by the fact that a large node fanout translates to a smaller tree height (with respect to a binary tree) and to nodes that fully fit in a cache line (i.e. higher cache efficiency). We use a leaf size l (i.e. the number of integers stored in each leaf) always bounded by 0.5 log m ≤ l ≤ log m and a node fanout f ∈ O(1). f should be chosen according to the cache line size: a larger value for f reduces cache misses and tree height, but increases the asymptotic cost of handling single nodes. See Section 2.2 for a discussion on the maximum leaf size and f values used in practice in our implementation. Letting l = c · log m be the size of a particular leaf, we call the coefficient 0.5 ≤ c ≤ 1 the leaf load.
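To make the role of the counters concrete, the following sketch shows how sum and search can be answered with a single root-to-leaf descent, at each internal node either skipping a whole child subtree or descending into it. It is illustrative only (names and layout are assumptions and do not reflect DYNAMIC's actual classes), and it assumes valid arguments: 0 ≤ i ≤ m for sum, and x smaller than the total sum for search.

#include <cstdint>
#include <memory>
#include <vector>

struct Node {
    bool leaf = false;
    std::vector<uint64_t> values;                 // used only if leaf
    std::vector<std::unique_ptr<Node>> children;  // used only if internal
    std::vector<uint64_t> sub_size;               // sub_size[k] = number of integers under children[k]
    std::vector<uint64_t> sub_sum;                // sub_sum[k]  = sum of the integers under children[k]

    // sum of the first i integers stored in this subtree
    uint64_t sum(uint64_t i) const {
        if (leaf) {
            uint64_t s = 0;
            for (uint64_t j = 0; j < i; ++j) s += values[j];
            return s;
        }
        uint64_t s = 0, k = 0;
        while (i > sub_size[k]) {       // skip whole subtrees on the left
            s += sub_sum[k];
            i -= sub_size[k];
            ++k;
        }
        return s + children[k]->sum(i);
    }

    // smallest i such that the sum of the first i integers exceeds x
    uint64_t search(uint64_t x) const {
        if (leaf) {
            uint64_t s = 0, i = 0;
            while (s <= x) s += values[i++];
            return i;
        }
        uint64_t i = 0, k = 0;
        while (x >= sub_sum[k]) {       // skip subtrees whose whole content sums to at most x
            x -= sub_sum[k];
            i += sub_size[k];
            ++k;
        }
        return i + children[k]->search(x);
    }
};

update and insert descend in the same way, adjusting the size/sum counters along the root-to-leaf path and splitting a leaf when it exceeds the maximum leaf size.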
In order to improve space usage even further while still guaranteeing very fast operations, the integers in the leaves are packed contiguously in word arrays and, inside each leaf L, we assign to each integer the bit-size of the largest integer stored in L. In the next section we prove that this simple blocking strategy leads to a space usage very close to the information-theoretic minimum number of bits needed to store the integers s_1, ..., s_m. It is worth noting that – as opposed to other works such as [2] – inside each block we use a fixed-length integer encoding. Such an encoding allows much faster queries than variable-length integer codes (such as, e.g., Elias' delta or gamma), as in our strategy integers are stored explicitly and do not need to be decoded first. Whenever an integer overflows the maximum size associated with its leaf (after an update operation), we re-allocate space for all integers in the leaf. This operation takes O(log m) time, so it does not asymptotically increase the cost of update operations. Crucially, in each leaf we allocate space only for the integers actually stored inside it, and we re-allocate space for the whole leaf whenever we insert a new integer or split the leaf. With this strategy, we do not waste space for half-full leaves. Note, moreover, that since the size of each leaf is bounded by Θ(log m), re-allocating space for the whole leaf at each insertion does not asymptotically slow down insert operations.

2.1.1 Theoretical Guarantees

Let us denote with m/log m ≤ L ≤ 2m/log m the total number of leaves, with L_j, 0 ≤ j < L, the j-th leaf of the B-tree (in any leaf order), and with I ∈ L_j an integer belonging to the j-th leaf. The total number of bits stored in the leaves of the tree is

  Σ_{0≤j<L} Σ_{I∈L_j} max_bitsize(L_j),

where max_bitsize(L_j) = max_{I∈L_j} bitsize(I) is the bit-size of the largest I ∈ L_j, and bitsize(x) is the number of bits required to write number x in binary: bitsize(0) = 1 and bitsize(x) = ⌊log₂ x⌋ + 1 for x > 0. The above quantity is equal to

  Σ_{0≤j<L} c_j · log m · max_bitsize(L_j),

where 0.5 ≤ c_j ≤ 1 is the j-th leaf load. Since leaf loads are always upper-bounded by 1, the above quantity is upper-bounded by

  log m · Σ_{0≤j<L} max_bitsize(L_j)

which, in turn – since the integers are non-negative, so the largest integer in L_j is at most the sum of all integers in L_j – is upper-bounded by

  log m · Σ_{0≤j<L} bitsize(Σ_{I∈L_j} I) ≤ log m · Σ_{0≤j<L} (1 + log₂(1 + Σ_{I∈L_j} I)).

In the above inequality, we use the upper bound bitsize(x) ≤ 1 + log₂(1 + x) to deal with the case x = 0.
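To make the blocking strategy analyzed above concrete, the following sketch shows a leaf that stores all of its integers with the bit-width of its largest value and re-packs the whole leaf whenever a new or updated value overflows that width. It is illustrative only, under assumed names and layout, and is not DYNAMIC's actual packed-vector implementation (it also uses the GCC/Clang builtin __builtin_clzll for bitsize).

#include <cstdint>
#include <vector>

struct PackedLeaf {
    std::vector<uint64_t> words;  // packed payload
    uint64_t size  = 0;           // number of integers stored
    uint64_t width = 1;           // bits per integer = bitsize of the largest stored value

    static uint64_t bitsize(uint64_t x) {          // bitsize(0) = 1, else floor(log2 x) + 1
        return x == 0 ? 1 : 64 - __builtin_clzll(x);
    }

    uint64_t get(uint64_t i) const {
        uint64_t bit = i * width, w = bit / 64, off = bit % 64;
        uint64_t v = words[w] >> off;
        if (off + width > 64) v |= words[w + 1] << (64 - off);   // value spans two words
        return width == 64 ? v : v & ((1ULL << width) - 1);
    }

    void set(uint64_t i, uint64_t x) {
        if (bitsize(x) > width) repack(bitsize(x));              // overflow: re-pack whole leaf
        uint64_t bit = i * width, w = bit / 64, off = bit % 64;
        uint64_t mask = width == 64 ? ~0ULL : ((1ULL << width) - 1);
        words[w] = (words[w] & ~(mask << off)) | (x << off);
        if (off + width > 64) {                                  // write the spilled high bits
            uint64_t hi = width - (64 - off);
            words[w + 1] = (words[w + 1] & ~((1ULL << hi) - 1)) | (x >> (64 - off));
        }
    }

    void append(uint64_t x) {
        if (bitsize(x) > width) repack(bitsize(x));
        words.resize(((size + 1) * width + 63) / 64, 0);         // allocate exactly one more slot
        ++size;
        set(size - 1, x);
    }

    void repack(uint64_t new_width) {                            // O(leaf size) re-allocation
        std::vector<uint64_t> old = std::move(words);
        uint64_t old_width = width;
        width = new_width;
        words.assign((size * width + 63) / 64, 0);
        for (uint64_t i = 0; i < size; ++i) {
            // re-read value i at the old width and re-write it at the new width
            uint64_t bit = i * old_width, w = bit / 64, off = bit % 64;
            uint64_t v = old[w] >> off;
            if (off + old_width > 64) v |= old[w + 1] << (64 - off);
            v &= old_width == 64 ? ~0ULL : ((1ULL << old_width) - 1);
            set(i, v);
        }
    }
};

Since every leaf holds Θ(log m) integers, a re-pack triggered by an overflow or an insertion costs O(log m) time, matching the cost of the tree traversal that reaches the leaf.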