Learned Cardinalities: Estimating Correlated Joins with Deep Learning


Andreas Kipf (Technical University of Munich), Thomas Kipf (University of Amsterdam), Bernhard Radke (Technical University of Munich), Viktor Leis (Technical University of Munich), Peter Boncz (Centrum Wiskunde & Informatica), Alfons Kemper (Technical University of Munich)

ABSTRACT

We describe a new deep learning approach to cardinality estimation. MSCN is a multi-set convolutional network, tailored to representing relational query plans, that employs set semantics to capture query features and true cardinalities. MSCN builds on sampling-based estimation, addressing its weaknesses when no sampled tuples qualify a predicate, and in capturing join-crossing correlations. Our evaluation of MSCN using a real-world dataset shows that deep learning significantly enhances the quality of cardinality estimation, which is the core problem in query optimization.

1 INTRODUCTION

Query optimization is fundamentally based on cardinality estimation. To be able to choose between different plan alternatives, the query optimizer must have reasonably good estimates for intermediate result sizes. It is well known, however, that the estimates produced by all widely-used database systems are routinely wrong by orders of magnitude, causing slow queries and unpredictable performance. The biggest challenge in cardinality estimation is join-crossing correlations [16, 18]. For example, in the Internet Movie Database (IMDb), French actors are more likely to participate in romantic movies than actors of other nationalities.

The question of how to better deal with this is an open area of research. One state-of-the-art proposal in this area is Index-Based Join Sampling (IBJS) [17], which addresses this problem by probing qualifying base table samples against existing index structures. However, like other sampling-based techniques, IBJS fails when there are no qualifying samples to start with (i.e., under selective base table predicates) or when no suitable indexes are available. In such cases, these techniques usually fall back to an "educated" guess, causing large estimation errors.

The past decade has seen the widespread adoption of machine learning (ML), and specifically neural networks (deep learning), in many different applications and systems. The database community has also started to explore how machine learning can be leveraged within data management systems. Recent research therefore investigates ML for classical database problems like parameter tuning [2], query optimization [13, 23, 27], and even indexing [12].

We argue that machine learning is a highly promising technique for solving the cardinality estimation problem. Estimation can be formulated as a supervised learning problem, with the input being query features and the output being the estimated cardinality. In contrast to other problems where machine learning has been proposed, like index structures [12] and join ordering [23], the current techniques based on basic per-table statistics are not very good. In other words, an estimator based on machine learning does not have to be perfect; it just needs to be better than the current, inaccurate baseline. Furthermore, the estimates produced by a machine learning model can directly be leveraged by existing, sophisticated enumeration algorithms and cost models without requiring any other changes to the database system.

In this paper, we propose a deep learning-based approach that learns to predict (join-crossing) correlations in the data and addresses the aforementioned weak spot of sampling-based techniques. Our approach is based on a specialized deep learning model called multi-set convolutional network (MSCN) that allows us to express query features using sets (e.g., both A ⋈ (B ⋈ C) and (A ⋈ B) ⋈ C are represented as {A, B, C}). Thus, our model does not waste any capacity on memorizing different permutations (all having the same cardinality but different costs) of a query's features, which results in smaller models and better predictions. The join enumeration and cost model are purposely left to the query optimizer.
We evaluate our approach using the real-world IMDb dataset [16] and show that our technique is more robust than sampling-based techniques and is even competitive in the sweet spot of these techniques (i.e., when there are many qualifying samples). This is achieved with a (configurable) low footprint of about 3 MiB (whereas the sampling-based techniques have access to indexes covering the entire database). These results are highly promising and indicate that ML might indeed be the right hammer for the decades-old cardinality estimation job.

2 RELATED WORK

Deep learning has been applied to query optimization by three recent papers [13, 23, 27] that formulate join ordering as a reinforcement learning problem and use ML to find query plans. This work, in contrast, applies supervised learning to solve cardinality estimation in isolation. This focus is motivated by the fact that modern join enumeration algorithms can find the optimal join order for queries with dozens of relations [26]. Cardinality estimation, on the other hand, has been called the "Achilles heel" of query optimization [21] and causes most of its performance issues [16].

Twenty years ago, the first approaches to use neural networks for cardinality estimation were published, targeting UDF predicates [14]. Also, regression-based models have been used before for cardinality estimation [1]. A semi-automatic alternative to explicit machine learning was presented in [22], where the feature space is partitioned using decision trees and a different regression model is learned for each split. These early approaches did not use deep learning, nor did they include features derived from statistics such as our sample-based bitmaps, which encode exactly which sample tuples were selected (and which we therefore believe to be good starting points for learning correlations). The same holds for approaches that used machine learning to predict overall resource consumption: running time, memory footprint, I/O, and network traffic [6, 19], although these models did include coarse-grained features based on statistics (the estimated cardinality). Liu et al. [20] used modern ML for cardinality estimation, but did not focus on joins, which are the key estimation challenge [16].

Our approach builds on sampling-based estimation by including cardinalities or bitmaps derived from samples in the training signal. Most sampling proposals create per-table samples/sketches and try to combine them intelligently in joins [3, 5, 30, 31]. While these approaches work well for single-table queries, they do not capture join-crossing correlations and are vulnerable to the 0-tuple problem (cf. Section 4.2). Recent work by Müller et al. [25] aims to reduce the 0-tuple problem for conjunctive predicates (albeit at high computational cost), but still cannot capture the basic case of a single predicate giving zero results. Our reasonably good estimates in 0-tuple situations let MSCN improve over sampling, including even the idea of estimating on materialized join samples (join synopses [28]), which still would not handle 0-tuple situations.

3 LEARNED CARDINALITIES

From a high-level perspective, applying machine learning to the cardinality estimation problem is straightforward: after training a supervised learning algorithm with query/output cardinality pairs, the model can be used as an estimator for other, unseen queries. There are, however, a number of challenges that determine whether the application of machine learning will be successful; the most important question is how to represent queries ("featurization") and ...

Figure 1: Architecture of our multi-set convolutional network. Tables, joins, and predicates are represented as separate modules, comprised of one two-layer neural network per set element with shared parameters. Module outputs are averaged, concatenated, and fed into a final output network.

Similarly, we featurize joins j ∈ J_q with a unique one-hot encoding. For predicates of the form (col, op, val), we featurize columns col and operators op using a categorical representation with respective unique one-hot vectors, and represent val as a normalized value in [0, 1], normalized using the minimum and maximum values of the respective column.
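To make this featurization concrete, the following is a minimal sketch (not the authors' implementation) of how a single predicate (col, op, val) could be encoded as a fixed-length vector; the column and operator vocabularies, the per-column minimum/maximum statistics, and all other identifiers in the snippet are illustrative assumptions.

import numpy as np

# Illustrative vocabularies and statistics (assumptions for this sketch, not from the paper).
COLUMNS = ["t.production_year", "mc.company_type_id", "mi_idx.info_type_id"]
OPERATORS = ["<", "=", ">"]
COL_MIN_MAX = {
    "t.production_year": (1880.0, 2019.0),
    "mc.company_type_id": (1.0, 2.0),
    "mi_idx.info_type_id": (1.0, 113.0),
}

def one_hot(index, size):
    vec = np.zeros(size, dtype=np.float32)
    vec[index] = 1.0
    return vec

def featurize_predicate(col, op, val):
    """Encode (col, op, val) as [one-hot column | one-hot operator | normalized value]."""
    col_vec = one_hot(COLUMNS.index(col), len(COLUMNS))
    op_vec = one_hot(OPERATORS.index(op), len(OPERATORS))
    lo, hi = COL_MIN_MAX[col]
    val_norm = (float(val) - lo) / (hi - lo)   # min-max normalization into [0, 1]
    return np.concatenate([col_vec, op_vec, [val_norm]]).astype(np.float32)

# Example: the predicate t.production_year > 2010
print(featurize_predicate("t.production_year", ">", 2010))

Tables and joins would analogously contribute their one-hot vectors, yielding the per-element feature vectors v_t, v_j, and v_p consumed by the model below.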
Applied to the query representation (T_q, J_q, P_q), our MSCN model (cf. Figure 1) takes the following form:

Table module: $w_T = \frac{1}{|T_q|} \sum_{t \in T_q} \mathrm{MLP}_T(v_t)$

Join module: $w_J = \frac{1}{|J_q|} \sum_{j \in J_q} \mathrm{MLP}_J(v_j)$

Predicate module: $w_P = \frac{1}{|P_q|} \sum_{p \in P_q} \mathrm{MLP}_P(v_p)$

Merge & predict: $w_{out} = \mathrm{MLP}_{out}([w_T, w_J, w_P])$
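These module equations map directly onto standard neural-network building blocks. Below is a minimal PyTorch sketch of the architecture for a single query, written as an illustration rather than the reference implementation: the hidden width, the input feature dimensions, and all identifiers are assumptions, and batching, masking of padded set elements, and the handling of empty sets are omitted.

import torch
import torch.nn as nn

class MSCN(nn.Module):
    """Sketch of a multi-set convolutional network for cardinality estimation."""

    def __init__(self, table_feats, join_feats, pred_feats, hidden=256):
        super().__init__()
        def two_layer_mlp(in_dim):
            # One two-layer MLP per set; its weights are shared across set elements.
            return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mlp_t = two_layer_mlp(table_feats)   # MLP_T
        self.mlp_j = two_layer_mlp(join_feats)    # MLP_J
        self.mlp_p = two_layer_mlp(pred_feats)    # MLP_P
        self.mlp_out = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, tables, joins, predicates):
        # Each argument holds one query's set elements with shape (num_elements, feat_dim).
        w_t = self.mlp_t(tables).mean(dim=0)       # average over the table set T_q
        w_j = self.mlp_j(joins).mean(dim=0)        # average over the join set J_q
        w_p = self.mlp_p(predicates).mean(dim=0)   # average over the predicate set P_q
        return self.mlp_out(torch.cat([w_t, w_j, w_p]))  # merge & predict

# Example with random feature vectors: two tables, one join, one predicate.
model = MSCN(table_feats=8, join_feats=4, pred_feats=7)
estimate = model(torch.rand(2, 8), torch.rand(1, 4), torch.rand(1, 7))
print(estimate)

Because the output network ends in a sigmoid (cf. Figure 1), the prediction lies in (0, 1) and is presumably a normalized cardinality; the true cardinalities would have to be mapped into that range during training and the mapping inverted at estimation time.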
