
Database Benchmarking for Supporting Real-Time Interactive Querying of Large Data

Leilani Battle (University of Maryland, College Park, Maryland, [email protected]); Philipp Eichmann (Brown University, Providence, Rhode Island, [email protected]); Marco Angelini (University of Rome "La Sapienza", Rome, Italy, [email protected]); Tiziana Catarci (University of Rome "La Sapienza", [email protected]); Giuseppe Santucci (University of Rome "La Sapienza", [email protected]); Yukun Zheng (University of Maryland, [email protected]); Carsten Binnig (Technical University of Darmstadt, Darmstadt, Germany, [email protected]); Jean-Daniel Fekete (Inria, Univ. Paris-Saclay, CNRS, Orsay, France, [email protected]); Dominik Moritz (University of Washington, Seattle, Washington, [email protected])

ABSTRACT

In this paper, we present a new benchmark to validate the suitability of database systems for interactive visualization workloads. While there exist proposals for evaluating database systems on interactive data exploration workloads, none rely on real user traces for database benchmarking. To this end, our long term goal is to collect user traces that represent workloads with different exploration characteristics. In this paper, we present an initial benchmark that focuses on "crossfilter"-style applications, which are a popular interaction type for data exploration and a particularly demanding scenario for testing database system performance. We make our benchmark materials, including input datasets, interaction sequences, corresponding SQL queries, and analysis code, freely available as a community resource, to foster further research in this area: https://osf.io/9xerb/?view_only=81de1a3f99d04529b6b173a3bd5b4d23.

CCS CONCEPTS

• Information systems → Data management systems; Data analytics; • Human-centered computing → Visualization systems and tools.

ACM Reference Format:
Leilani Battle, Philipp Eichmann, Marco Angelini, Tiziana Catarci, Giuseppe Santucci, Yukun Zheng, Carsten Binnig, Jean-Daniel Fekete, and Dominik Moritz. 2020. Database Benchmarking for Supporting Real-Time Interactive Querying of Large Data. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data (SIGMOD'20), June 14–19, 2020, Portland, OR, USA. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3318464.3389732

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
SIGMOD'20, June 14–19, 2020, Portland, OR, USA
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6735-6/20/06...$15.00
https://doi.org/10.1145/3318464.3389732

1 INTRODUCTION

Motivation. The data science process often begins with users (i.e., analysts, data scientists) exploring possibly massive amounts of data through interactions with a graphical user interface, oftentimes a data visualization tool [4, 9, 12, 66]. However, each time a user interacts with a visualization interface, the underlying data must be processed (filtered, aggregated, etc.) such that the interface can quickly provide a visual response [39, 42, 56]. To meet this growing demand for interactive and real-time performance, the database and visualization communities have developed a variety of techniques, including approximate query processing [2, 14], online aggregation/progressive visualization [3, 19, 26], data cubes [8, 36, 38], spatial indexing [63], speculative query execution [8, 30], and lineage tracking [53].

However, we still lack adequate benchmarks to empirically assess which of these resulting systems provide satisfactory performance, and which systems are actually better than others for interactive, real-time querying scenarios. This issue is exacerbated for the most demanding yet popular visualization scenarios such as crossfilter [41, 58, 67], where one interaction may generate hundreds of queries per second, with an expectation of near-immediate results. Unfortunately, existing database benchmarks such as TPC-H [65], TPC-DS [64], or the Star Schema Benchmark (SSB) [47] are insufficient for making these comparisons. One main reason is that the workloads modeled in these benchmarks are not representative of how database queries are generated through user interactions, such as with tools like Tableau [59] or Spotfire [58]. In contrast, visualization benchmarks provide realistic scenarios, but lack the configurability, precision, and automation afforded by database benchmarks when testing performance [7]. Furthermore, there exists no evaluation platform to test database systems under a range of interactive analysis conditions, such as different dataset characteristics, exploration scenarios, interaction paradigms, or user profiles. Therefore, our community is still woefully unequipped to answer the question "are database systems truly capable of supporting real-time interactive data exploration at scale?" Here, real-time means with a latency < 100 ms, or supporting at least ten frames per second [15, 41, 42]. Unfortunately, our study reveals that more work is needed from the database community to answer positively.

Contributions. While there exist proposals [7, 18, 28, 29, 61] for evaluating database systems on interactive data exploration workloads, none of these approaches derive a benchmark from real user traces, introducing potential artifacts and possibly missing key behavioral patterns. To validate the suitability of database systems for interactive and visual data exploration workloads, this paper thus presents a benchmark constructed from real traces of users interacting with an information visualization system. In this scenario, every query corresponds to an actual interaction from a real user.

Given the innumerable analysis scenarios supported by visualization tools [23, 33], it would be unrealistic to have one benchmark represent all possible use cases. Instead, we believe that a database benchmark ecosystem is required for evaluating interactive data exploration workloads, similar to the TPC benchmarks for classical database workloads (OLTP, OLAP, etc.). Our long term goal is to collect user traces representing workloads with different exploration characteristics. As a first step, we mainly use "crossfilter"-style applications to collect user traces and derive our benchmark (see Figure 1). The benchmark development process presented in this paper provides a blueprint for creating more interaction-focused benchmarks in the future.

Figure 1: An example crossfilter application from our user study, visualizing the Flights dataset, composed of six numerical attributes. Each attribute is visualized as a histogram that can be filtered using a range-slider. When the user interacts with a range-slider, the entire dataset is filtered, the bins in each histogram are recomputed, and the visualizations are updated.

Given how easy and intuitive they are to use when exploring complex datasets, crossfilter interfaces are pervasive in information visualization tools, from foundational systems like the Attribute Explorer [67] and Spotfire [58] to recent systems like Falcon [41]. Moreover, compared to other interaction designs (e.g., mouse hovers or check-box selection), crossfilter-style applications are arguably the most demanding use case for database systems [41]. With one single interaction, a user can generate hundreds of database queries per second when moving a single crossfilter range-slider. For example, Figure 1 shows information about airline delays; changing the selection of the flight distance using the crossfilter range-slider of the histogram in the upper row (center) causes all other histograms to update at each mouse drag. Crossfilter interfaces are instances of dynamic query interfaces [57], which require live updates from the underlying database system as the slider moves in real time.

In the following, we discuss the contributions of this paper. As a first contribution, we present the results of a user study conducted by the different international research groups who authored this article. In this user study, we asked 22 users with a broad range of data science experience to perform four representative interactive analysis tasks using a crossfilter visualization tool, across three different datasets, producing a total of 128 different traces for further study. Using the log data and video recordings collected from the study, we analyze and summarize our findings regarding the interaction patterns observed, and their effects on the resulting query workload. This characterization analysis provides useful insights into how interaction patterns can be taken into account when evaluating and optimizing queries on a database system.

As a second contribution, we present the results of our analysis of the 128 collected traces and characterize the behavior of typical users on interactive tasks.
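To make the query pattern behind a crossfilter interaction concrete, the following is a minimal sketch of the kind of SQL a range-slider drag generates: filter the table on the slider's attribute, then recompute the bin counts of every other histogram with a GROUP BY over bin indices. The table name, columns, values, and bin widths here are illustrative stand-ins, not the study's actual Flights schema or data.

```python
import sqlite3

# Toy stand-in for the Flights dataset (illustrative columns and rows only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (distance REAL, dep_delay REAL, arr_delay REAL)")
conn.executemany(
    "INSERT INTO flights VALUES (?, ?, ?)",
    [(100, 5, 3), (450, 2, 0), (800, 30, 25), (1200, 10, 8), (2500, 60, 55)],
)

def histogram(filter_attr, lo, hi, target_attr, bin_width):
    """One crossfilter-style query: filter on the slider's attribute, then
    re-bin the target attribute. Column names are interpolated directly for
    brevity; a real system would validate them against the schema."""
    return conn.execute(
        f"""SELECT CAST({target_attr} / ? AS INTEGER) AS bin, COUNT(*)
            FROM flights
            WHERE {filter_attr} BETWEEN ? AND ?
            GROUP BY bin
            ORDER BY bin""",
        (bin_width, lo, hi),
    ).fetchall()

# Each position of the distance slider re-issues one such query per *other*
# histogram; a drag emits this burst at every mouse-move event, which is how
# one interaction can produce hundreds of queries per second.
print(histogram("distance", 0, 1000, "dep_delay", 10))  # → [(0, 2), (3, 1)]
```

This sketch shows why crossfilter workloads stress a database: the queries themselves are simple filtered aggregates, but they arrive in correlated bursts whose filter predicates shift slightly at each slider position.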