
Cross-Engine Query Execution in Federated Database Systems

by Ankush M. Gupta

S.B., Massachusetts Institute of Technology (2016)

Submitted to the Department of Electrical Engineering and Computer Science on May 18, 2016, in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Science and Electrical Engineering at the Massachusetts Institute of Technology, June 2016.

© Ankush M. Gupta, MMXVI. All rights reserved.

The author hereby grants to MIT permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part in any medium now known or hereafter created.

Author: Department of Electrical Engineering and Computer Science, May 18, 2016
Certified by: Michael Stonebraker, Adjunct Professor, Thesis Supervisor
Accepted by: Christopher J. Terman, Chairman, Department Committee on Graduate Theses

Abstract

Duggan et al. [5] have created a reference implementation of the BigDAWG system: a new architecture for future Big Data applications, guided by the philosophy that "one size does not fit all". Such applications not only call for large-scale analytics, but also for real-time streaming support, smaller analytics at interactive speeds, data visualization, and cross-storage-system queries. The importance and effectiveness of such a system has been demonstrated in a hospital application using data from an intensive care unit (ICU). In this report, we implement and evaluate a concrete version of a cross-system Query Executor and its interface with a cross-system Query Planner. In particular, we focus on cross-engine shuffle joins within the BigDAWG system.

Thesis Supervisor: Michael Stonebraker
Title: Adjunct Professor

Acknowledgments

I would like to express my gratitude to my Thesis Advisor, Professor Michael Stonebraker, for his guidance, support, and patience with my project this past year. I would also like to thank my Research Supervisor, Vijay Gadepally, for his support and insight. Finally, I would like to thank Jennie Duggan, Aaron Elmore, Adam Dziedzic, Jack She, and Peinan Chen (along with Mike, Vijay, and the rest of the BigDAWG team), whose efforts on BigDAWG provided the framework upon which I was able to complete my research.

Contents

1 Introduction
2 Background
   2.1 BigDAWG System Architecture
      2.1.1 System Modules
      2.1.2 Supported Queries
   2.2 Skew in Data Distribution
3 Shuffle Join Framework
   3.1 Execution Pipeline
4 Skew Examination
   4.1 Overview
   4.2 Full Table Scan
   4.3 Table Sampling
      4.3.1 Engine-collected Statistics
5 Join-Unit Assignment
   5.1 Overview
   5.2 Full Table Assignment
      5.2.1 Destination Full Broadcast
      5.2.2 Minimal Full Broadcast
   5.3 Join-Attribute Assignment
      5.3.1 Hash Assignment
      5.3.2 Minimum Bandwidth Heuristic
      5.3.3 Tabu Search
6 Experimental Results
   6.1 Experimental Setup
   6.2 Skew Examination Strategy Evaluation
   6.3 Join-Unit Assignment Strategy Evaluation
7 Future Work
   7.1 Shuffle Joins across 3+ Engines
   7.2 Probabilistic Data Structures
   7.3 Cost Estimation by Training
   7.4 Parallelizing Alignment and Comparison
   7.5 Inter-Island Shuffle Join Execution
8 Conclusion

List of Figures

2-1 Data flow across all modules of BigDAWG
6-1 Test database schema
6-2 Join performance with varying skew and skew examination strategies, using Minimum Bandwidth Heuristic assignment
6-3 Join performance with varying skew and skew examination strategies, using Tabu Search assignment
6-4 Join with varying skew and assignment strategies

List of Algorithms

1 Tabu Search
2 Engine Unburdening
3 Tabu Search per-engine costs
4 Tabu Search total cost

Chapter 1

Introduction

The recent trend of development within the database community has been towards a proliferation of data models and data management systems, many targeted at a well-defined vertical market [2, 10]. Relational column stores are poised to take over the data warehouse market. High-throughput OLTP workloads are largely moving to main-memory SQL systems. There has been an explosion of NoSQL engines exploiting a range of data models (typically operating on flexible storage formats such as JSON). OLTP and stream processing engines are poised to satisfy the need for real-time stream processing and analytics for the Internet of Things. Because they are orders of magnitude faster for their specialized data models than other database systems, these specialized systems exemplify the adage that "one size does not fit all" for data management solutions [15].

It is not uncommon to see applications that take advantage of more than one of these systems to support their complex workloads. For example, such an infrastructure can be designed around MIMIC II [14], a publicly available dataset covering 26,000 intensive care unit admissions at Boston's Beth Israel Deaconess Hospital. This dataset includes waveform data (up to 125 Hz measurements from bedside devices), patient demographic information (name, age, etc.), doctors' and nurses' notes (raw text), clinic reports and lab results, and prescriptions filled (semi-structured data). In a real-world system, it would be prudent to split the waveform data even further into real-time streams and historic archive data, both for storage and compression reasons and to allow relevant analytics to be precomputed for faster access.

Given the variety of data sources, this system must support a cornucopia of data types and access methods, ranging from standard SQL (e.g., how many patients were given a particular drug), to complex analytics (e.g., computing the FFT of a patient's real-time waveform data and comparing it to the historic "normal"), and even text search (e.g., finding patients who responded well to a particular treatment).

Our reference implementation of the system uses SciDB [16] to store MIMIC II's time series data, Accumulo [1] to store text, and Postgres [17] to store patient information. The nature of these independent systems with correlated data implies that there will be queries that span two or more storage engines. For example, to locate the diagnoses of patients with irregular heart rhythms, one would query SciDB and Postgres. To find doctors' notes associated with a prescription drug, one would query Accumulo and Postgres. To run analytics on particular cohorts of patients, one would query Postgres and SciDB.
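To see why such cross-engine queries are awkward without a coordinating layer, the sketch below shows roughly what an application would have to do by hand for the first example: fetch matching patients from SciDB, fetch their diagnoses from Postgres, and combine the results in client code. This is an illustrative assumption, not code from the BigDAWG implementation; the connection parameters, table and attribute names, and the query_scidb stand-in are hypothetical.

```python
import psycopg2  # Postgres client; connection details below are placeholders


def query_scidb(afl_query):
    """Stand-in for a SciDB client call returning patient ids.

    A real application would submit the AFL query to SciDB; a fixed sample
    is returned here so the control flow is visible.
    """
    return [101, 205, 342]


def irregular_rhythm_diagnoses():
    # 1. Ask SciDB (array store) which patients show irregular heart
    #    rhythms in their waveform data.
    patient_ids = query_scidb(
        "project(filter(waveforms, rhythm_flag = 1), patient_id)")

    # 2. Ask Postgres (relational store) for the diagnoses of those patients,
    #    shipping the ids obtained from SciDB into the relational query.
    conn = psycopg2.connect(host="localhost", dbname="mimic2")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT patient_id, diagnosis FROM diagnoses"
            " WHERE patient_id = ANY(%s)",
            (patient_ids,))
        rows = cur.fetchall()

    # 3. The cross-engine combination happens entirely in application code;
    #    this is the coordination a polystore is meant to take over.
    return rows
```

With a polystore federation, the user states this as a single query and the middleware handles the data movement between engines.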
It is clear that a system designed to coordinate several distinct database engines will be required in a myriad of future applications, especially those that are broad in scope like MIMIC II. The concept of such a polystore federation has been discussed in the BigDAWG project [7, 5], and a concrete implementation of the BigDAWG system's Query Executor module is detailed in this report.

Chapter 2

Background

In this chapter, we introduce the basic concepts and structure of the BigDAWG system. We begin with an overview of its data storage and representation model. Next, we give an overview of the modules that allow BigDAWG to operate on these underlying data models. The chapter concludes with a look at the challenges and opportunities presented by skew when attempting to execute queries on the underlying data.

2.1 BigDAWG System Architecture

The BigDAWG system is designed to offer users an interface that is agnostic to the location of data, so that users can work with the underlying database engines without being hampered by the specifics of which data is located on which engine. BigDAWG utilizes the concept of islands of information to support this idea. Each island serves as an abstraction for a federated system consisting of multiple database engines, and provides a query language, a data model, and mechanisms for interacting with the federated storage engines. Users are able to run queries against islands without requiring any knowledge of the underlying engines, and BigDAWG is responsible for executing them seamlessly. Users can also write queries that explicitly convert data from one island to another, and BigDAWG handles the conversion as necessary.

2.1.1 System Modules

The BigDAWG system has four core modules: the Query Planner, the Performance Monitor, the Data Migrator, and the Query Executor (see Figure 2-1 for the organization and data flow of the system).

Figure 2-1: Data flow across all modules of BigDAWG

Planner

The Planner module of BigDAWG is responsible for parsing a user's input and constructing a logical query plan in the form of a dependency tree of action nodes. Actions in the logical plan include querying a specific engine, performing a Join operation, and performing a Union operation. The logical plan additionally specifies an end-destination (in the form of a table name and an engine) that designates where the query results should be stored.

The Planner generates multiple possible logical plans and interfaces with the Monitor module in order to determine which of the candidate logical query plans should be run by the Executor module.

Monitor

The Monitor module logs the execution times of logical query plans as they are performed by the Executor. It utilizes a query signature system to relate new plans to previously executed ones, and uses these relations to determine which of the provided logical plans is the most efficient.
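To make the structures passed between these modules concrete, the sketch below models a logical query plan as a dependency tree of action nodes in Python. The class and field names are illustrative assumptions, not BigDAWG's actual classes; the example plan encodes the irregular-heart-rhythm query from Chapter 1, with two engine-query leaves feeding a cross-engine Join and the result materialized as a Postgres table.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    # Dependencies that must complete before this action can run.
    children: List["Action"] = field(default_factory=list)


@dataclass
class EngineQuery(Action):
    engine: str = ""   # storage engine that executes the query
    query: str = ""    # query text in that engine's native language


@dataclass
class Join(Action):
    join_attribute: str = ""


@dataclass
class Union(Action):
    pass


@dataclass
class LogicalPlan:
    root: Action       # final action of the dependency tree
    dest_engine: str   # engine that receives the final result
    dest_table: str    # end-destination table for the final result


# Hypothetical plan: join waveform-derived patient ids from SciDB with
# diagnoses from Postgres, materializing the answer as a Postgres table.
plan = LogicalPlan(
    root=Join(
        children=[
            EngineQuery(engine="scidb",
                        query="project(filter(waveforms, rhythm_flag = 1), patient_id)"),
            EngineQuery(engine="postgres",
                        query="SELECT patient_id, diagnosis FROM diagnoses"),
        ],
        join_attribute="patient_id",
    ),
    dest_engine="postgres",
    dest_table="irregular_rhythm_diagnoses",
)
```

In this picture, the Planner emits several candidate trees of this form, the Monitor estimates which is cheapest from past executions, and the Executor carries the chosen one out.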