
FishStore: Faster Ingestion with Subset Hashing

Dong Xie* (University of Utah), Badrish Chandramouli, Yinan Li, Donald Kossmann (Microsoft Research)
[email protected], {badrishc,yinali,donaldk}@microsoft.com
*Work performed during internship at Microsoft Research.

ABSTRACT

The last decade has witnessed a huge increase in data being ingested into the cloud, in forms such as JSON, CSV, and binary formats. Traditionally, data is either ingested into storage in raw form, indexed ad hoc using range indices, or cooked into analytics-friendly columnar formats. None of these solutions is able to handle modern requirements on storage: making the data available immediately for ad-hoc and streaming queries while ingesting at extremely high throughputs. This paper builds on recent advances in parsing and indexing techniques to propose FishStore, a concurrent latch-free storage layer for data with flexible schema, based on multi-chain hash indexing of dynamically registered predicated subsets of data. We find predicated subset hashing to be a powerful primitive that supports a broad range of queries on ingested data and admits a high-performance concurrent implementation. Our detailed evaluation on real datasets and queries shows that FishStore can handle a wide range of workloads and can ingest and retrieve data at an order of magnitude lower cost than state-of-the-art alternatives.

ACM Reference Format:
Dong Xie, Badrish Chandramouli, Yinan Li, and Donald Kossmann. 2019. FishStore: Faster Ingestion with Subset Hashing. In 2019 International Conference on Management of Data (SIGMOD '19), June 30-July 5, 2019, Amsterdam, Netherlands. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/3299869.3319896

© 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-5643-5/19/06...$15.00

1 INTRODUCTION

Over the last few years, driven by the increasing importance of the cloud-edge architecture, we have been witnessing a huge increase in data being ingested into the cloud from a variety of data sources. The ingested data takes various forms, ranging from JSON (a popular flexible nested data format with high expressive power) to relational-style data in CSV (comma-separated values) format, and binary formats such as Google Protocol Buffers [13] and Apache Thrift [10].

Given the huge ingested data volume, the goal for ingestion has traditionally been to ingest data as fast as possible, saturating storage bandwidth and incurring minimal CPU overhead. These goals usually result in simply dumping raw data on storage. More recently, however, there is an increasing need [17, 33] to make the ingested data available "immediately" for an ever-increasing range of analytic queries:

• Ad-hoc analysis queries that scan data over time ranges (e.g., the last hour of data). The scan may (1) include complex predicates over possibly nested fields; (2) involve custom logic to select a varying (but usually small) number of records; and (3) access a small number of fields.
• Recurring queries that have identical predicates, but are repeated over different time ranges (e.g., execute a report over the last hour of data, repeated every hour).
• Point lookup queries that are based on various keys (e.g., join keys in the case of streaming joins), that look up the data, often over a recent window.
• Streaming queries that are fed parts of the ingested data satisfying custom predicates and based on the query schema.

1.1 Today's Solutions

The traditional solution is to ingest data in raw form and then make the data available for offline queries using periodic batch jobs that load the data into a warehouse, e.g., in an optimized format such as Parquet [9]. This process is highly CPU intensive and slow, incurs high latency before the data is available for ad-hoc or repeated queries, and does not help with point lookups or streaming queries, making it unsuitable for our target applications. Alternatively, we can fully parse records and either load them into a database or update a secondary range index over every (nested) attribute and prefix during ingestion. However, full parsing, database loading, and full secondary index creation are slow. For example, we found that a typical JSON parser can only do full parsing at a speed of around 100MB/sec per CPU core [36].

1.2 New Trends in Parsing and Indexing

Recently, raw parsers such as Mison [36], Sparser [42], and FAD.js [21] have transformed the parsing landscape by achieving speeds of more than 2GB/sec per core. They run on a single thread and exploit batching, SIMD parallelism, and the targeted parsing of a few fields to achieve high throughput. However, we find that simply plugging a fast parser into today's solutions does not help with ingestion because we have to parse all fields. A modified approach, where only a few fields are indexed, can relieve the parsing bottleneck, but does not improve ingestion because the bottleneck shifts to the heavy range indices such as RocksDB [48] and Bw-Tree [35] used in practice, which incur heavy write amplification [44], random I/Os, and CPU overheads.

Persistent key-value stores such as FASTER [24] have recently been shown to offer unprecedented performance at very low CPU cost: more than 150 million ops/sec on a modern CPU. FASTER consists of a lightweight cache-optimized concurrent hash index backed by a record-oriented hybrid log. The log is ordered by data arrival and incurs no write amplification. A large portion of the log tail is retained in an in-memory circular buffer. While promising, such indices are designed to serve point lookups, inserts, and updates, and as such are insufficient for our target applications.

1.3 Introducing FishStore

In this paper, we advocate a different approach. We introduce a new storage layer for flexible-schema data, called FishStore, that combines fast parsing with a hash-based primary subset index. First, FishStore takes as input a generic data parser that exposes the ability to efficiently parse a batch of records and extract a given set of fields from each record in the batch. Second, FishStore allows applications to dynamically register (and deregister) predicated subset functions (PSFs) over the data. Briefly, PSFs allow applications to identify and efficiently retrieve different subsets of records, and work as follows. Users provide a function f : R → D that maps each record r ∈ R to a value d in domain D, based on a given set of fields of interest for the PSF. FishStore allows users to retrieve all records satisfying a given PSF and value.

This paper shows that PSF-based indexing is powerful yet admits an efficient and scalable implementation. For example, it can support point lookups, equi-joins, selection predicates, […]

[…] lower than 15% and memory usage greater than 75%. Records matching this condition are now indexed and available for subsequent analysis. As another example, they may wish to index (or group) the data by machine name using PSF f2, which allows drilling down into a particular machine's logs. Sec. 2 describes PSFs in detail and provides more examples of their use in our target applications involving ad-hoc, recurring, and streaming analysis.

1.4 FishStore Components

We overview the FishStore system and its challenges in Sec. 4. Briefly, it consists of two major components: (1) ingestion and indexing; and (2) subset retrieval.

Ingestion & Indexing. FishStore ingests data concurrently into an immutable log (in ingestion order) and maintains a hash index. For every active PSF f and non-null value v ∈ D, we create a hash entry (f, v) that links all matching log records for that entry in a hash chain. Based on the registered PSFs, the desired fields are provided to the parser for each data batch. FishStore evaluates the active PSFs and creates or updates hash chains. Unlike hash key-value stores, a record may be part of more than one hash chain, with a variable-length record header of pointers to fields and other records. Therefore, we develop a new hash index with latch-free PSF registration (Sec. 5) and latch-free data ingestion with multiple hash chains (Sec. 6).

Subset Retrieval. FishStore supports scans for records matching a PSF value (f, v) over a part of the ingested log, and returns the requested fields for matching records. FishStore does not build new indices on older data; therefore, the hash chain for a PSF may not cover the entire log. Hence, FishStore performs an adaptive scan that combines full scans and index lookups. Interestingly, even within the indexed portion of the log, based on selectivity, it may sometimes be preferable to perform a full scan [34]. FishStore performs an adaptive mix of index traversals and full scans to get the highest performance (Sec. 7).

To recap, FishStore combines fast parsing with lightweight dynamic hash indexing to provide an extremely fast and general-purpose storage layer for analytics. PSF registration is similar in concept to dynamically attaching debuggers to the data. Ingestion performance depends on the number of active PSFs and fields of interest.