Cobra: Making Transactional Key-Value Stores Verifiably Serializable

Cheng Tan, Changgeng Zhao, Shuai Mu†, and Michael Walfish
NYU Department of Computer Science, Courant Institute   †Stony Brook University

Abstract. Today’s cloud databases offer strong properties, including serializability, sometimes called the gold standard database correctness property. But cloud databases are complicated black boxes, running in a different administrative domain from their clients. Thus, clients might like to know whether the databases are meeting their contract. To that end, we introduce cobra; cobra applies to transactional key-value stores. It is the first system that combines (a) black-box checking, of (b) serializability, while (c) scaling to real-world online transactional processing workloads. The core technical challenge is that the underlying search problem is computationally expensive. Cobra tames that problem by starting with a suitable SMT solver. Cobra then introduces several new techniques, including a new encoding of the validity condition; hardware acceleration to prune inputs to the solver; and a transaction segmentation mechanism that enables scaling and garbage collection. Cobra imposes modest overhead on clients, improves over baselines by 10× in verification cost, and (unlike the baselines) supports continuous verification. Our artifact can handle 2000 transactions/sec, equivalent to 170M/day.

1 Introduction and motivation

A new class of cloud databases has emerged, including Amazon DynamoDB and Aurora [2, 4, 133], Azure CosmosDB [7], CockroachDB [9], YugaByte DB [36], and others [16, 17, 21, 22, 69]. Compared to earlier generations of NoSQL databases (such as Facebook Cassandra, Google Bigtable, and Amazon S3), members of the new class offer the same scalability, availability, replication, and geo-distribution but in addition offer serializable transactions [55, 110]: all transactions appear to execute in a single, sequential order.

Serializability is the gold-standard isolation level [48, 77], and the correctness contract that many applications and programmers implicitly assume: their code would be incorrect if the database provided a weaker contract [137]. Note that serializability encompasses weaker notions of correctness, like basic integrity: if a returned value does not read from a valid write, that will manifest as a non-serializable result. Serializability also implies that the database handles failures robustly: non-tolerated server failures, particularly in the case of a distributed database, are a potential source of non-serializable results.

However, a user of a cloud database can legitimately wonder whether the database in fact provides the promised contract. For one thing, users often have no visibility into a cloud database’s implementation. In fact, even when the source code is available [9, 16, 17, 36], that does not necessarily yield visibility: if the database is hosted by someone else, you can’t really be sure of its operation. Meanwhile, any internal corruption—as could happen from misconfiguration, operational error, compromise, or adversarial control at any layer of the execution stack—can cause a serializability violation. Beyond that, one need not adopt a paranoid stance (“the cloud as malicious adversary”) to acknowledge that it is difficult, as a technical matter, to provide serializability and geo-distribution and geo-replication and high performance under various failures [40, 78, 147]. Doing so usually involves a consensus protocol that interacts with an atomic commit protocol [69, 96, 103]—a complex combination, and hence potentially bug-prone. Indeed, today’s production systems have exhibited serializability violations [1, 18, 19, 25, 26] (see also §6.1).

This leads to our core question: how can clients verify the serializability of a black-box database? To be clear, related questions have been addressed before. The novelty in our problem is in combining three aspects:

(a) Black box, unmodified database. In our setting, the database does not “know” it’s being checked; the input to the verification machinery will be only the inputs to, and outputs from, the database. This matches the cloud context (even when the database is open source, as noted above), and contrasts with work that checks for isolation or consistency anomalies by using “inside information” [62, 86, 109, 123, 130, 141, 143], for example, access to internal scheduling choices. Also, we target production workloads and standard key-value APIs (§2).

(b) Serializability. We focus on serializability, in contrast to weaker isolation levels. Serializability has a strict variant and a non-strict variant [56, 110]; in the former, the effective transaction order must be consistent with real time. We attend to both variants in this paper. However, the weight is on the non-strict variant, as it poses a more difficult computational problem; the strict variant is “easier” because the real-time constraint diminishes the space of potentially-valid execution schedules. On the one hand, the majority of databases that offer serializability offer the strict variant. On the other hand, checking non-strict serializability is germane, for two reasons. First, some databases claim to provide the non-strict variant (in general [11], or under clock skew [35], or for read-only workloads [32]), while others don’t specify the variant [3, 5]. Second, the strict case can degenerate to the non-strict case. Heavy concurrency, for example, means few real-time constraints, so the difficult computational problem re-enters. As a special case, clock drift causes otherwise ordered transactions to be concurrent (§3.5, §6.1).

(c) Scalability. This means, first, scaling to real-world online transactional processing workloads at reasonable cost. It also means incorporating mechanisms that enable a verifier to work incrementally and to keep up with an ever-growing history.

However, aspects (a) and (b) set up a challenge: checking black-box serializability has long been known to be NP-complete [54, 110]. Recent work of Biswas and Enea (BE) [59] lowered the complexity to polynomial time, under natural restrictions (which hold in our context); see also pioneering work by Sinha et al. [124] (§7). However, these two approaches don’t meet our goal of scalability. For example, in BE, the number of clients appears in the exponent of the algorithm’s running time (§6, §7) (e.g., 14 clients means the algorithm is O(n^14)). Furthermore, even if there were a small number of clients, BE does not include mechanisms for handling a continuous and ever-growing history.

Despite the computational complexity, there is cause for hope: one of the remarkable developments in the field of formal verification has been the use of heuristics to “solve” problems whose general form is intractable. This owes to major advances in solvers (advanced SAT and SMT solvers) [49, 57, 64, 73, 84, 99, 107, 128], coupled with an explosion of computing power. Thus, our guiding intuition is that it ought to be possible to verify serializability in many real-world cases. This paper describes a system called cobra, which starts from this intuition, and provides a solution to the problem posed by (a)–(c).

Cobra applies to transactional key-value stores (everywhere in this paper it says “database”, this is what we mean). Cobra consists of a third-party, unmodified database that is not assumed to “cooperate”; a set of legacy database clients that cobra modifies to link to a library; one or more history collectors that are assumed to record the actual requests to and responses from the database; and a verifier that comprehensively checks serializability, in a way that “keeps up” with the database’s (average) load. The database is untrusted while the clients, collectors, and verifier are all in the same trust domain (for example, deployed by the same organization).

[…] efficiently infer ordering relationships from a history (§3.1–§3.2). (We prove that cobra’s encoding is a valid reduction in Appendix B [132].) Second, cobra uses parallel hardware (our implementation uses GPUs; §5) to compute all-pairs reachability over a graph whose nodes are transactions and whose edges are known precedence relationships; then, cobra resolves some of the constraints efficiently, by testing whether a candidate edge would generate a cycle with an existing path.

2. Scaling to a continuous and ever-growing history (§4). Online cloud databases run in a continuous fashion, where the corresponding history is uninterrupted and grows unboundedly. To support online databases, cobra verifies in rounds. From round to round, the verifier checks serializability on a portion of the history. However, the challenge is that the verifier seemingly needs to involve all history, because serializability does not respect real-time ordering, so future transactions can read from values that (in a real-time view) have been overwritten. To solve this problem, clients issue periodic fence transactions (§4.2). The epochs impose coarse-grained synchronization, creating a window from which future reads, if they are to be serializable, are permitted to read. This allows the verifier to discard transactions prior to the window.

We implement cobra (§5) and experiment with it on production databases with various workloads (§6). Cobra detects all serializability violations we collect from real systems’ bug reports. Cobra’s core (single-round) verification improves on baselines by 10× in the problem size it can handle for a given time budget. For example, cobra finishes checking 10k transactions in 14 seconds, whereas baselines can handle only 1k or less in the same time budget. For an online database with continuous traffic, cobra achieves a sustainable verification throughput of 2k txn/sec on the workloads that we experiment with (this corresponds to a workload of 170M/day; for comparison, Apple Pay handles 33M txn/day [6], and Visa handles 150M txn/day [33], admittedly for a slightly different notion […]
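To make the constraint-solving baseline concrete, the following is a minimal sketch (in Python, using the Z3 SMT solver bindings) of how a serializability check over a history can be reduced to a satisfiability query: integer position variables stand for the serial order, known precedence edges become strict inequalities, and ambiguous orderings become disjunctions. This is only an illustration of the general reduction the introduction alludes to; it is not cobra's actual encoding, and the history representation (`known_edges`, `constraints`) is a hypothetical simplification.

```python
# A minimal sketch (not cobra's encoding) of reducing a serializability
# check to constraint solving with an SMT solver. Assumes the Z3 Python
# bindings (pip install z3-solver); the input format is hypothetical.
from z3 import Int, Solver, Or, Distinct, sat

def check_acyclic_order(n_txns, known_edges, constraints):
    """known_edges: pairs (i, j) meaning txn i must precede txn j.
    constraints: pairs of alternative edges ((a, b), (c, d)) where at
    least one ordering must hold (e.g., from write-ordering ambiguity)."""
    # pos[i] = position of transaction i in a candidate serial order
    pos = [Int(f"pos_{i}") for i in range(n_txns)]
    s = Solver()
    s.add(Distinct(*pos))
    for (i, j) in known_edges:
        s.add(pos[i] < pos[j])                        # known precedence
    for ((a, b), (c, d)) in constraints:
        s.add(Or(pos[a] < pos[b], pos[c] < pos[d]))   # ordering choice
    return s.check() == sat  # sat <=> some compatible serial order exists

# Toy usage: T0 -> T1 is known; either T1 -> T2 or T2 -> T0 must hold.
print(check_acyclic_order(3, [(0, 1)], [((1, 2), (2, 0))]))  # True: T0, T1, T2
```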
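Similarly, the pruning step described above — resolving a constraint when one of its candidate edges would close a cycle with an existing path — can be sketched with a plain reachability computation over the known precedence graph. The paper's implementation does this on GPUs over large graphs; the sketch below is a sequential illustration under the same hypothetical constraint representation as the previous snippet.

```python
# A minimal sequential sketch (not cobra's GPU implementation) of pruning
# constraints via all-pairs reachability over known precedence edges.
from collections import defaultdict, deque

def reachability(n, edges):
    """reach[src] = set of nodes reachable from src (BFS per node)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    reach = [set() for _ in range(n)]
    for src in range(n):
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in reach[src]:
                    reach[src].add(v)
                    q.append(v)
    return reach

def prune_constraints(n, known_edges, constraints):
    """Each constraint is a pair of alternative edges (e1, e2).
    If taking e1 = (a, b) would close a cycle (b already reaches a),
    the constraint is resolved to e2, and vice versa."""
    reach = reachability(n, known_edges)
    resolved, remaining = [], []
    for (a, b), (c, d) in constraints:
        if a in reach[b]:        # a -> b would create a cycle; take c -> d
            resolved.append((c, d))
        elif c in reach[d]:      # c -> d would create a cycle; take a -> b
            resolved.append((a, b))
        else:
            remaining.append(((a, b), (c, d)))  # still ambiguous
    return resolved, remaining
```

Constraints left in `remaining` after this pass would still go to the solver, as in the previous sketch; the point of the pruning is to shrink that residue.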