
Probabilistic Database Summarization for Interactive Data Exploration

Laurel Orr, Magdalena Balazinska, and Dan Suciu
University of Washington, Seattle, Washington, USA
{ljorr1, magda, [email protected]

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Proceedings of the VLDB Endowment, Vol. 10, No. 10. Copyright 2017 VLDB Endowment 2150-8097/17/06.

ABSTRACT

We present a probabilistic approach to generate a small, query-able summary of a dataset for interactive data exploration. Departing from traditional summarization techniques, we use the Principle of Maximum Entropy to generate a probabilistic representation of the data that can be used to give approximate query answers. We develop the theoretical framework and formulation of our probabilistic representation and show how to use it to answer queries. We then present solving techniques and give three critical optimizations to improve preprocessing time and query accuracy. Lastly, we experimentally evaluate our work using a 5 GB dataset of flights within the United States and a 210 GB dataset from an astronomy particle simulation. While our current work only supports linear queries, we show that our technique can successfully answer queries faster than sampling while introducing, on average, no more error than sampling and can better distinguish between rare and nonexistent values.

1. INTRODUCTION

Interactive data exploration allows a data analyst to browse, query, transform, and visualize data at "human speed" [7]. It has long been recognized that general-purpose DBMSs are ill suited for interactive exploration [19]. While users require interactive responses, they do not necessarily require precise responses, because either the response is used in some visualization, which has limited resolution, or an approximate result is sufficient and can be followed up with a more costly query if needed. Approximate Query Processing (AQP) refers to a set of techniques designed to allow fast but approximate answers to queries. All successful AQP systems to date rely on sampling or a combination of sampling and indexes. The sample can either be computed on the fly, e.g., in the highly influential work on online aggregation [12] or systems like DBO [14] and Quickr [16], or precomputed offline, like in BlinkDB [2] or Sample+Seek [9]. Samples have the advantage that they are easy to compute, can accurately estimate aggregate values, and are good at detecting heavy hitters. However, sampling may fail to return estimates for small populations; targeted stratified samples can alleviate this shortcoming, but stratified samples need to be precomputed to target a specific query, defeating the original purpose of AQP.

In this paper, we propose an alternative approach to interactive data exploration based on the Maximum Entropy principle (MaxEnt). The MaxEnt model has been applied in many settings beyond data exploration; e.g., the multiplicative weights mechanism [11] is a MaxEnt model for both differentially private and, by [10], statistically valid answers to queries, and it has been shown to be theoretically optimal. In our setting of the MaxEnt model, the data is preprocessed to compute a probabilistic model. Then, queries are answered by doing probabilistic inference on this model. The model is defined as the probabilistic space that obeys some observed statistics on the data and makes no other assumptions (Occam's principle). The choice of statistics boils down to a precision/memory tradeoff: the more statistics one includes, the more precise the model and the more space required. Once computed, the MaxEnt model defines a probability distribution on possible worlds, and users can interact with this model to obtain approximate query results. Unlike a sample, which may miss rare items, the MaxEnt model can infer something about every query.

Despite its theoretical appeal, the computational challenges associated with the MaxEnt model make it difficult to use in practice. In this paper, we develop the first scalable techniques to compute and use the MaxEnt model. As an application, we illustrate it with interactive data exploration.
Our first contribution is to simplify the standard MaxEnt model to a form that is appropriate for data summarization (Sec. 3). We show how to simplify the MaxEnt model to be a multi-linear polynomial that has one monomial for each possible tuple (Sec. 3, Eq. (5)) rather than its naïve form that has one monomial for each possible world (Sec. 2, Eq. (2)). Even with this simplification, the MaxEnt model starts by being larger than the data. For example, the flights dataset is 5 GB, but the number of possible tuples is approximately 10^10, more than 5 GB. Our first optimization consists of a compression technique for the polynomial of the MaxEnt model (Sec. 4.1); for example, for the flights dataset, the summary is below 200 MB, while for our larger dataset of 210 GB, it is less than 1 GB. Our second optimization consists of a new technique for query evaluation on the MaxEnt model (Sec. 4.2) that only requires setting some variables to 0; this reduces the runtime to be on average below 500 ms and always below 1 s.

We find that the main bottleneck in using the MaxEnt model is computing the model itself; in other words, computing the values of the variables of the polynomial such that it matches the existing statistics over the data. Solving the MaxEnt model is difficult; prior work for multi-dimensional histograms [18] uses an iterative scaling algorithm for this purpose. To date, it is well understood that the MaxEnt model can be solved by reducing it to a convex optimization problem [23] of a dual function (Sec. 2), which can be solved using Gradient Descent. However, even this is difficult given the size of our model. We adapt a variant of Stochastic Gradient Descent called Mirror Descent [5], and, together with our optimized query evaluation technique, we can compute the MaxEnt model for large datasets in under a day.

In summary, in this paper, we develop the following new techniques:

• A closed-form representation of the probability space of possible worlds using the Principle of Maximum Entropy, and a method to use the representation to answer queries in expectation (Sec. 3).
• A compression method for the MaxEnt summary (Sec. 4.1).
• Optimized query processing techniques (Sec. 4.2).
• A new method for selecting 2-dimensional statistics based on a modified KD-tree (Sec. 4.3).

We implement the above techniques in a prototype system that we call EntropyDB and evaluate it on the flights and astronomy datasets. We find that EntropyDB can answer queries faster than sampling while introducing no more error, on average, and does better at identifying small populations.

2. BACKGROUND

We summarize data by fitting a probability distribution over the active domain. The distribution assumes that the domain values are distributed in a way that preserves given statistics over the data but are otherwise uniform.

For example, consider a data scientist who analyzes a dataset of flights in the United States for the month of December 2013. All she knows is that the dataset includes all flights within the 50 possible states and that there are 500,000 flights in total. She wants to know how many of those flights are from CA to NY. Without any extra information, our approach would assume all flights are equally likely and estimate that there are 500,000 / 50^2 = 200 flights. Now suppose the data scientist finds out that flights leaving CA only go to NY, FL, or WA.
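To make the arithmetic concrete, the short Python sketch below reproduces the uniform estimate and then, as a purely illustrative continuation of the running example (the constrained figure is our own back-of-the-envelope computation, not a number taken from this paper), shows how the estimate changes once the constraint on flights leaving CA is known: MaxEnt then spreads the 500,000 flights uniformly over the (origin, destination) pairs that remain allowed.

    # Illustrative sketch only; everything beyond the 500,000 total and the
    # 50-state domain is an assumption made for this example.
    total_flights = 500_000
    num_states = 50

    # With no information beyond the total, every (origin, destination) pair is
    # equally likely, so the estimate for CA -> NY is:
    uniform_estimate = total_flights / num_states**2        # 200.0

    # If flights leaving CA can only go to NY, FL, or WA, the allowed pairs are
    # 49 unrestricted origins x 50 destinations plus 3 pairs out of CA.
    allowed_pairs = 49 * num_states + 3                      # 2,453 pairs
    constrained_estimate = total_flights / allowed_pairs     # ~203.8

    print(round(uniform_estimate), round(constrained_estimate, 1))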
In contrast to typical probabilistic databases, where the probability of a relation is calculated from the probability of each tuple, we calculate a relation's probability from a formula derived from the MaxEnt principle and a set of constraints on the overall distribution. This approach captures the idea that the distribution should be uniform except where otherwise specified by the given constraints.

2.2 The Principle of Maximum Entropy

The Principle of Maximum Entropy (MaxEnt) states that, subject to prior data, the probability distribution which best represents the state of knowledge is the one that has the largest entropy. This means that, given our set of possible worlds, PWD, the probability distribution Pr(I) is one that agrees with the prior information on the data and maximizes

    - \sum_{I \in PWD} Pr(I) \log(Pr(I))

where I is a database instance, also called a possible world. The above probability must be normalized, \sum_{I} Pr(I) = 1, and must satisfy the prior information represented by a set of k expected value constraints:

    s_j = E[\phi_j(I)], \quad j = 1, \ldots, k    (1)

where s_j is a known value and \phi_j is a function on I that returns a numerical value in \mathbb{R}. One example constraint is that the number of flights from CA to WI is 0.

Following prior work on the MaxEnt principle and solving constrained optimization problems [4, 23, 20], the MaxEnt probability distribution takes the form

    Pr(I) = \frac{1}{Z} \exp\Big( \sum_{j=1}^{k} \theta_j \phi_j(I) \Big)    (2)

where \theta_j is a parameter and Z is the following normalization constant:

    Z \overset{\text{def}}{=} \sum_{I \in PWD} \exp\Big( \sum_{j=1}^{k} \theta_j \phi_j(I) \Big).

To compute the k parameters \theta_j, we must solve the non-linear system of k equations, Eq. (1), which is computationally difficult. However, it turns out [23] that Eq. (1) is equivalent to \partial \Psi / \partial \theta_j = 0, where the dual \Psi is defined as:

    \Psi \overset{\text{def}}{=} \sum_{j=1}^{k} s_j \theta_j - \ln(Z).
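To illustrate how these definitions turn into a solver, the following minimal Python sketch (our own toy illustration, not the paper's implementation, which adapts a Mirror Descent variant to run at scale) maximizes the dual \Psi by plain gradient ascent on a domain small enough to enumerate every possible world explicitly. It uses the fact that \partial \Psi / \partial \theta_j = s_j - E[\phi_j(I)], which follows directly from the definitions of Z and \Psi above.

    import numpy as np

    def solve_maxent(phi, s, num_iters=5000, lr=0.1):
        """Gradient ascent on the dual Psi(theta) = sum_j s_j*theta_j - ln Z(theta).

        phi : (num_worlds, k) array, phi[i, j] = phi_j evaluated on world I_i
        s   : length-k array of target statistics s_j
        Returns the parameters theta and the fitted distribution Pr(I).
        """
        num_worlds, k = phi.shape
        theta = np.zeros(k)
        for _ in range(num_iters):
            logits = phi @ theta              # sum_j theta_j * phi_j(I), per world
            logits -= logits.max()            # shift for numerical stability
            w = np.exp(logits)
            pr = w / w.sum()                  # Pr(I) = exp(sum_j theta_j phi_j(I)) / Z
            grad = s - phi.T @ pr             # dPsi/dtheta_j = s_j - E[phi_j(I)]
            theta += lr * grad                # ascend the concave dual
            if np.abs(grad).max() < 1e-9:
                break
        return theta, pr

    # Toy usage: four possible worlds (values A, B, C, D of one attribute) and a
    # single statistic phi_1 = "the value is A or B" with target s_1 = 0.7.
    # MaxEnt yields Pr(A) = Pr(B) = 0.35 and Pr(C) = Pr(D) = 0.15.
    phi = np.array([[1.0], [1.0], [0.0], [0.0]])
    s = np.array([0.7])
    theta, pr = solve_maxent(phi, s)
    print(theta, pr)

The toy only shows the mechanics: the paper's setting has far too many possible worlds to enumerate, which is exactly why the simplified polynomial form (Sec. 3) and the scalable solving techniques are needed.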