On-Demand ETL for Real-Time Analytics


TECHNISCHE UNIVERSITÄT KAISERSLAUTERN
PhD Dissertation

On-Demand ETL for Real-Time Analytics

Thesis approved by the Department of Computer Science of the Technische Universität Kaiserslautern for the award of the Doctoral Degree Doctor of Engineering (Dr.-Ing.) to Dipl.-Inf. Weiping Qu

Date of Defense: June 9, 2020
Dean: Prof. Dr. Jens Schmitt
Reviewers: Prof. Dr.-Ing. Stefan Deßloch, Prof. Dr.-Ing. Bernhard Mitschang
D 386

For my father, mother, parents-in-law, my wife and my daughter(s)

Acknowledgements

This dissertation summarizes the research work I carried out as a member of the Heterogeneous Information Systems (HIS) group at the University of Kaiserslautern; it was completed during the past two years, in which I have been working in the Online Computing group at Ant Financial in Shanghai, China, after returning from Germany. I consider myself fortunate: the five years of research experience helped me take a successful first step into industry, while the two significant years of working experience in turn enriched this dissertation.

First, I would like to thank my advisor, Prof. Dr.-Ing. Stefan Deßloch, for offering me the opportunity to pursue my research in the most interesting area in the world: database and information systems. His guidance on research and teaching helped me complete my view of scientific work in a more systematic manner. Beyond work, I want to express my deepest gratitude to him: he gave me strong support when I was facing family issues during a tough time, and he encouraged and guided me to finish this dissertation while I was struggling with my work at Ant Financial.

Next, I would like to thank Prof. Dr.-Ing. Bernhard Mitschang for taking the role of second examiner and Prof. Dr. Christoph Garth for acting as chair of the PhD committee. Furthermore, I would like to thank my first mentor, Dr. Tianchao Li, who guided my diploma thesis at IBM Böblingen and opened the door to database research for me. I also want to thank Prof. Dr.-Ing. Dr. h.c. Theo Härder, whose great passion for database research makes him an example for the younger generations. Moreover, I am grateful to Prof. Dr.-Ing. Sebastian Michel for his interesting thoughts and valuable discussions.

I am grateful to my master's thesis students, Nikolay Grechanov, Dipesh Dangol, Sandy Ganza, Vinanthi Basavaraj and Sahana Shankar, who contributed to this dissertation with their solid knowledge and highly motivated attitude.

I am indebted to my former colleagues and friends, especially Dr. Thomas Jörg, Dr. Yong Hu, Dr. Johannis Schildgen, Dr. Sebastian Bächle, Dr. Yi Ou, Dr. Caetano Sauer, Dr. Daniel Shall, Dr. Kiril Panev, Dr. Evica Milchevski, Dr. Koninika Pal, Manuel Dossinger, Steffen Reithermann and Heike Neu, with whom I had a great time in Kaiserslautern. I would also like to thank my best friends, Xiao Pan, Dr. Yun Wan and Yanhao He, who always have interesting ideas and made every day of these years in Kaiserslautern sparkle.

Finally, I would like to express my love to my family, who have always supported me behind the curtain. I want to thank my parents, She Qu and Xingdong Zhang, who educated me most strictly and love me most deeply. Without them, I would never have come this far.
I want to thank my wife, Jingxin Xie, who always stands beside me and encourages, motivates and supports me on the way to the final success of my PhD career. I also thank her for bringing our precious daughter into this world; she has turned our life into a brand-new world and makes it enjoyable.

Weiping Qu
Shanghai, January 2020

Abstract

In recent years, business intelligence applications have become more real-time, and traditional data warehouse tables have become fresher, as they are continuously refreshed by streaming ETL jobs within seconds. In addition, a new type of federated system has emerged that unifies domain-specific computation engines to address a wide range of complex analytical applications; such systems need streaming ETL to migrate data across computation engines.

From daily sales reports to up-to-the-second cross-/up-sell campaign activities, we observed a variety of latency and freshness requirements in these analytical applications. Streaming ETL jobs with fixed, regular batches are not flexible enough for such a mixed workload: jobs with small batches overprovision resources for queries with low freshness needs, while jobs with large batches starve queries with high freshness needs. We therefore argue that ETL jobs should be self-adaptive to varying SLA demands, setting appropriate batch sizes as needed. The major contributions are summarized as follows.

• We defined a consistency model for "On-Demand ETL" which determines the correct delta batches that queries must see in order to read consistent states (a minimal sketch of this idea follows this list). Furthermore, we proposed an "Incremental ETL Pipeline" which reduces the performance impact of on-demand ETL processing.

• We introduced a distributed, incremental ETL pipeline (called HBelt) for distributed warehouse systems. HBelt provides consistent, distributed snapshot maintenance for concurrent table scans across different analytics jobs.

• We addressed the elasticity of the incremental ETL pipeline, guaranteeing that ETL jobs with batches of varying sizes finish within strict deadlines. To this end, we proposed Elastic Queue Middleware and HBaqueue, which replace memory-based data exchange queues with a scalable distributed store: HBase.

• We also implemented lazy maintenance logic in the extraction and loading phases to make these two phases workload-aware. Besides, we discuss how our "On-Demand ETL" thinking can be exploited in analytic flows running on heterogeneous execution engines.
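To make the self-adaptive batching idea concrete, here is a minimal sketch in Java. It is not the dissertation's implementation: all class and method names (OnDemandEtlScheduler, bufferDelta, runQuery, Delta) are invented for illustration, and the actual consistency model and pipeline are defined in Chapters 3 to 5. The sketch shows only the core mechanism: deltas are buffered with their commit timestamps, and an arriving query triggers maintenance of exactly the deltas it must see before it runs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

/**
 * Illustrative sketch of on-demand batching (hypothetical names throughout).
 * A query arriving at time t forms and applies a maintenance batch containing
 * all buffered deltas with commit timestamp <= t, then runs against a
 * warehouse state that is consistent as of t.
 */
public class OnDemandEtlScheduler {

    /** Placeholder for a captured source-side change (insert/update/delete). */
    public interface Delta { }

    // Buffered deltas, ordered by commit timestamp; the CDC process appends here.
    private final ConcurrentSkipListMap<Long, Delta> pending = new ConcurrentSkipListMap<>();

    // Highest commit timestamp already propagated into the warehouse tables.
    private long appliedWatermark = 0L;

    /** Called by change data capture for every committed source change. */
    public void bufferDelta(long commitTs, Delta delta) {
        pending.put(commitTs, delta);
    }

    /**
     * Schedule a query with arrival timestamp queryTs: the maintenance batch
     * is sized by the query's freshness demand, not by a fixed batch interval.
     */
    public synchronized void runQuery(long queryTs, Runnable query) {
        Map<Long, Delta> batch = pending.headMap(queryTs, true); // deltas the query must see
        for (Map.Entry<Long, Delta> e : batch.entrySet()) {
            applyIncrementally(e.getValue());   // push delta through the incremental pipeline
            appliedWatermark = e.getKey();      // warehouse is now consistent up to this point
        }
        batch.clear();                          // drop the applied deltas from the buffer
        query.run();                            // query reads a state consistent as of queryTs
    }

    private void applyIncrementally(Delta delta) {
        // Incremental view maintenance would happen here; omitted in this sketch.
    }
}
```

Under this scheme the batch size adapts by itself: a query arriving shortly after the previous one induces a small batch (high freshness at low extra cost), while a query with relaxed freshness demands lets deltas accumulate into one large, efficiently processed batch.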
Contents

1 Introduction
  1.1 State-of-the-art ETL
  1.2 Motivation
    1.2.1 Working Scope
    1.2.2 Problem Analysis
    1.2.3 On-Demand ETL
  1.3 Research Issues
  1.4 Outline

2 Preliminaries
  2.1 Incremental View Maintenance
  2.2 Near Real-time Data Warehouses
    2.2.1 Incremental ETL
    2.2.2 Near real-time ETL
  2.3 Lazy Materialized View Maintenance
    2.3.1 Right-time ETL
    2.3.2 Relaxed Currency in SQL
  2.4 Real-time Data Warehouse
    2.4.1 One-size-fits-all Databases
    2.4.2 Polystore & Streaming ETL
    2.4.3 Stream Warehouse
  2.5 Streaming Processing Engines
    2.5.1 Aurora & Borealis
    2.5.2 Apache Storm
    2.5.3 Apache Spark Streaming
    2.5.4 Apache Flink
    2.5.5 S-store
    2.5.6 Pentaho Kettle
  2.6 Elasticity in Streaming Engines
  2.7 Scalable NoSQL Data Store: HBase
    2.7.1 System Design
    2.7.2 Compaction
    2.7.3 Split & Load Balancing
    2.7.4 Bulk Loading

3 Incremental ETL Pipeline
  3.1 The Computational Model
  3.2 Incremental ETL Pipeline
  3.3 The Consistency Model
  3.4 Workload Scheduler
  3.5 Operator Thread Synchronization and Coordination
    3.5.1 Pipelined Incremental Join
    3.5.2 Pipelined Slowly Changing Dimensions
    3.5.3 Discussion
  3.6 Experiments
    3.6.1 Incremental ETL Pipeline Performance
    3.6.2 Consistency-Zone-Aware Pipeline Scheduling
  3.7 Summary

4 Distributed Snapshot Maintenance in Wide-Column NoSQL Databases
  4.1 Motivation
  4.2 The HBelt System for Distributed Snapshot Maintenance
    4.2.1 Architecture Overview
    4.2.2 Consistency Model
    4.2.3 Distributed Snapshot Maintenance for Concurrent Scans
  4.3 Experiments
  4.4 Summary

5 Scaling Incremental ETL Pipeline using Wide-Column NoSQL Databases
  5.1 Motivation
  5.2 Elastic Queue Middleware
    5.2.1 Role of EQM in Elastic Streaming Processing Engines
    5.2.2 Implementing EQM using HBase
  5.3 EQM Prototype: HBaqueue
    5.3.1 Compaction
    5.3.2 Resalting
    5.3.3 Split Policy
    5.3.4 Load balancing
    5.3.5 Enqueue & Dequeue Performance
  5.4 Summary

6 Workload-aware Data Extraction
  6.1 Motivation
  6.2 Problem Analysis
    6.2.1 Terminology
    6.2.2 Performance Gain
  6.3 Workload-aware CDC
  6.4 Experiments
    6.4.1 Experiment Setup
    6.4.2 Trigger-based CDC Experiment
    6.4.3 Log-based CDC Experiment
    6.4.4 Timestamp-based CDC Experiment
  6.5 Summary