Efficient Query Processing with Optimistically Compressed Hash Tables & Strings in the USSR


Tim Gubner (CWI), Viktor Leis (Friedrich Schiller University Jena), Peter Boncz (CWI)

Abstract—Modern query engines rely heavily on hash tables for query processing. Overall query performance and memory footprint are often determined by how hash tables and the tuples within them are represented. In this work, we propose three complementary techniques to improve this representation: Domain-Guided Prefix Suppression bit-packs keys and values tightly to reduce hash table record width. Optimistic Splitting decomposes values (and operations on them) into (operations on) frequently-accessed and infrequently-accessed value slices. By removing the infrequently-accessed value slices from the hash table record, it improves cache locality. The Unique Strings Self-aligned Region (USSR) accelerates handling frequently-occurring strings, which are very common in real-world data sets, by creating an on-the-fly dictionary of the most frequent strings. This allows executing many string operations with integer logic and reduces memory pressure.

We integrated these techniques into Vectorwise. On the TPC-H benchmark, our approach reduces peak memory consumption by 2–4× and improves performance by up to 1.5×. On a real-world BI workload, we measured a 2× improvement in performance, and in micro-benchmarks we observed speedups of up to 25×.

Fig. 1: Optimistically Compressed Hash Table, which is split into a thin hot area and a cold area for exceptions (figure labels: "Speculate & Compress"; record widths 24 byte vs. 8 byte + 8 byte).

I. INTRODUCTION

In modern query engines, many important operators like join and group-by are based on in-memory hash tables. Hash joins, for example, are usually implemented by materializing the whole inner (build) relation into a hash table. Hash tables are therefore often large and determine the peak memory consumption of a query. Since hash table sizes often exceed the capacity of the CPU cache, memory latency or bandwidth becomes the performance bottleneck in query processing. Due to the complex cache hierarchy of modern CPUs, the access time to a random tuple varies by orders of magnitude depending on the size of the working set. This means that shrinking hash tables not only reduces memory consumption but also has the potential to improve query performance through better CPU cache utilization [3], [4], [26].

To decrease a hash table's hunger for memory and, consequently, increase cache efficiency, one can combine two orthogonal approaches: increasing the fill factor and reducing the bucket/row size. Several hash table designs like Robin Hood Hashing [8], Cuckoo Hashing [23], and the Concise Hash Table [4] have been proposed for achieving high fill factors while still providing good lookup performance. Here, we investigate how the size of each row can be reduced, a topic that, despite its obvious importance for query processing, has not received as much attention.

While heavyweight compression schemes tend to result in larger space savings, they also have a high CPU overhead, which often cannot be amortized by improved cache locality. Therefore, we propose a lightweight compression technique called Domain-Guided Prefix Suppression. It saves space by using domain information to re-pack multiple columns in-flight in the query pipeline into much fewer machine words. For each attribute, this requires only a handful of simple bit-wise operations, which are easily expressible in SIMD instructions, resulting in extremely low per-tuple cost (sub-cycle) for packing and subsequent unpacking operations.

Rather than saving space, the second technique, called Optimistic Splitting, aims at improving cache locality. As Figure 1 illustrates, it splits the hash table into a hot (frequently-accessed) and a cold (infrequently-accessed) area containing exceptions. These exceptions may, for example, be overflow bits in aggregations, or pointers to string data. By separating hot from cold information, Optimistic Splitting improves cache utilization even further.
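The details of Optimistic Splitting follow in Section III-A; as a rough illustration of the hot/cold split in Figure 1, the C++ sketch below keeps only a thin, frequently-updated aggregate slice in the hash table and spills rare overflow cases to a separate cold area. The record layout, field widths, and spill policy here are our own assumptions for illustration, not the paper's actual design.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical layout: the hash table stores only the thin "hot" slice of each
// aggregate record; rare cases (here: sums that no longer fit 32 bits) are
// moved to a separate "cold" exception area that is touched infrequently.
struct HotRecord {
    uint32_t packed_key;  // bit-packed group-by key (Domain-Guided Prefix Suppression)
    uint32_t sum_lo;      // frequently updated low 32 bits of SUM
};

struct OptimisticAggregateTable {
    std::vector<HotRecord> hot;                // dense, cache-resident area
    std::unordered_map<size_t, int64_t> cold;  // exceptions: full 64-bit sums

    void add(size_t slot, uint32_t value) {
        HotRecord& r = hot[slot];
        uint64_t wide = uint64_t(r.sum_lo) + value;
        if (wide <= UINT32_MAX) {              // common (optimistic) path stays hot
            r.sum_lo = uint32_t(wide);
        } else {                               // rare path: spill to the cold area
            cold[slot] += int64_t(r.sum_lo) + value;
            r.sum_lo = 0;
        }
    }

    int64_t finalSum(size_t slot) const {      // combine hot and cold parts at the end
        auto it = cold.find(slot);
        return hot[slot].sum_lo + (it != cold.end() ? it->second : 0);
    }
};
```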
An often ignored, yet costly part of query processing is string handling. String data is very common in many real-world applications [22], [31]. In comparison with integers, strings occupy more space, are much slower to process, and are less amenable to acceleration using SIMD. To speed up string processing, we therefore propose the Unique Strings Self-aligned Region (USSR), an efficient dynamic string dictionary for frequently-occurring strings. In contrast to conventional per-column dictionaries used in storage, the USSR is created anew for each query by inserting frequently-occurring strings at query runtime. The USSR, which has a fixed size of 768 kB (ensuring its cache residency), speeds up common string operations like hashing and equality checks.

While each of the proposed techniques may appear simple in isolation, they are highly complementary. For example, using all three techniques, strings from a low-cardinality domain can be represented using only a small number of bits.

Furthermore, our techniques remap operations on wide columns into operations on multiple thin columns using a few extra primitive functions (such as pack and unpack). As such, they can easily be integrated into existing systems and do not require extensive modifications of the query processing algorithms or even hash table implementations.

Rather than merely implementing our approach as a prototype, we fully integrated it into Vectorwise, which originated from the MonetDB/X100 project [7]. Describing how to integrate the three techniques into industrial-strength database systems is a key contribution of this paper.

Based on Vectorwise, we performed an extensive experimental evaluation using TPC-H, micro-benchmarks, and real-world workloads. On TPC-H, we reduce peak memory consumption by 2–4× and improve performance by up to 1.5×. On a string-heavy real-world BI workload, we measured a 2.2× improvement in performance, and in micro-benchmarks we even observed speedups of up to 25×.

II. DOMAIN-GUIDED PREFIX SUPPRESSION

Domain-Guided Prefix Suppression reduces memory consumption by eliminating the unnecessary prefix bits of each attribute. This enables us to cheaply compress rows without affecting the implementation of the hash table itself, which makes it easy to integrate our technique into existing database systems. In particular, while our system (Vectorwise) uses a single-table hash join, Domain-Guided Prefix Suppression [...]. We describe Domain-Guided Prefix Suppression in detail using Figure 2 as an illustration.

Fig. 2: Domain-Guided Prefix Suppression. The figure shows (1) preprocessing on meta data (per-block and total min/max, normalized domain, #bits required), (2) the bitwise layout (mask, in-shift, out-shift), and (3) packing during query execution.

A. Domain Derivation

A column in-flight in a query plan can originate directly from a table scan or from a computation. If a value originates from a table scan, we determine its domain based on the scanned blocks. For each block we utilize per-column minimum and maximum information (called ZoneMaps or Min/Max indices). This information is typically not stored inside the block itself, as that would require scanning the block (potentially fetching it from disk) before the information can be extracted. Instead, the meta-data is stored "out-of-band" (e.g., in a row-group header, a file footer, or inside the catalog). Knowing the range of blocks that will be scanned, the domain can be calculated by computing the total minimum/maximum over that range.

If, on the other hand, a value stems from a computation, the domain minimum and maximum can be derived bottom-up according to the functions used, based on the minimum/maximum bounds of their inputs under worst-case assumptions. Consider, for example, the addition of two integers a ∈ [amin, amax] and b ∈ [bmin, bmax] resulting in r ∈ [rmin, rmax]. To calculate rmin and rmax we have to assume the worst case, i.e., the smallest (rmin) and the largest (rmax) possible result of the addition. For an addition this boils down to rmin = amin + bmin and rmax = amax + bmax.

Depending on these domain bounds, an addition of two 32-bit integer expressions could still fit in a 32-bit result or, less likely, would have to be extended to 64 bits. This analysis of minimum/maximum bounds often allows implementations to ignore overflow handling, as the choice of data types prevents overflow rather than having to check for it. For aggregation functions such as SUM, overflow avoidance is more challenging, but in Section III-A we discuss Optimistic Splitting, which allows doing most calculations on small data types, also reducing the cache footprint of aggregates.
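To make the bottom-up bound derivation concrete, the following minimal sketch derives the result domain of an addition, the number of bits a bit-packed representation of it would need, and whether the result still fits a 32-bit type. The Domain struct and the helper names are illustrative assumptions, not Vectorwise internals.

```cpp
#include <cstdint>
#include <cstdio>
#include <limits>

struct Domain {
    int64_t min, max;  // derived bounds, e.g. from ZoneMaps or from child expressions
};

// Worst-case bounds of a + b: the smallest and largest possible sums
// (assumes the sums themselves do not overflow int64_t).
Domain addDomain(Domain a, Domain b) {
    return { a.min + b.min, a.max + b.max };
}

// Bits needed to store values of this domain as offsets from its minimum;
// this is also what Prefix Suppression uses to size the packed field.
int bitsRequired(Domain d) {
    uint64_t range = uint64_t(d.max - d.min);
    int bits = 0;
    while (range > 0) { bits++; range >>= 1; }
    return bits;
}

// A result type wide enough for the whole domain never needs runtime overflow checks.
bool fitsInInt32(Domain d) {
    return d.min >= std::numeric_limits<int32_t>::min() &&
           d.max <= std::numeric_limits<int32_t>::max();
}

int main() {
    Domain a{0, 46}, b{0, 997};   // e.g. the normalized scan domains from Figure 2
    Domain r = addDomain(a, b);   // -> [0, 1043]
    std::printf("[%lld, %lld] needs %d bits, fits int32: %d\n",
                (long long)r.min, (long long)r.max, bitsRequired(r), fitsInInt32(r));
}
```

Here the two input domains are the normalized scan domains from Figure 2; their sum [0, 1043] needs 11 bits and comfortably fits a 32-bit result type.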
B. Prefix Suppression

Using the derived domain bounds, we can represent values compactly, without losing any information, by dropping the common prefix bits. To further reduce the number of bits and enable the compression of negative values, we first subtract the domain minimum from each value. Consequently, each bit-packed value is a positive offset from the domain minimum. We also pack multiple columns together such that the packed result fits a machine word. This is done by concatenating all compressed bit-strings and, if necessary, chunking the result into multiple machine words. Each chunk of the result constitutes a compressed column, which can be stored just like a regular uncompressed column.
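Plugging in the example values from Figure 2 (column A with domain [-4, 42] needs 6 bits after normalization, column B with domain [3, 1000] needs 10 bits, so a row of both fits one 32-bit word), a minimal packing and unpacking routine could look as follows. The loop bodies mirror the subtract/shift/or expressions shown in the figure; the function signatures are a simplification, not Vectorwise's actual pack/unpack primitives.

```cpp
#include <cstdint>
#include <cstdio>

// Column A: domain [-4, 42]  -> offsets [0, 46],  6 bits needed.
// Column B: domain [3, 1000] -> offsets [0, 997], 10 bits needed.
// 6 + 10 = 16 bits, so both attributes of a row fit one 32-bit word.
void pack(const int32_t* A, const int32_t* B, uint32_t* C, int n) {
    const int32_t minA = -4, minB = 3;          // domain minima (from ZoneMaps)
    for (int i = 0; i < n; i++) {
        uint32_t u = uint32_t(A[i] - minA);     // normalize: positive offset from the minimum
        uint32_t v = uint32_t(B[i] - minB);
        C[i] = (u << 0) | (v << 6);             // concatenate the compressed bit-strings
    }
}

void unpack(const uint32_t* C, int32_t* A, int32_t* B, int n) {
    const int32_t minA = -4, minB = 3;
    for (int i = 0; i < n; i++) {
        A[i] = int32_t((C[i] >> 0) & 0x3F)  + minA;   // low 6 bits
        B[i] = int32_t((C[i] >> 6) & 0x3FF) + minB;   // next 10 bits
    }
}

int main() {
    int32_t A[4] = {42, -4, 1, 23}, B[4] = {3, 23, 1000, 3};
    uint32_t C[4];
    pack(A, B, C, 4);
    for (int i = 0; i < 4; i++) std::printf("%u\n", C[i]);  // 46 1280 63813 27, as in Figure 2
}
```

Both loops are branch-free and need only a subtraction, a shift, and an OR per attribute, matching the "handful of simple bit-wise operations" that the introduction says can be expressed in SIMD instructions.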