Cobra: Making Transactional Key-Value Stores Verifiably Serializable

Cheng Tan, Changgeng Zhao, Shuai Mu*, and Michael Walfish
NYU Department of Computer Science, Courant Institute    *Stony Brook University

Abstract. Today's cloud databases offer strong properties, including serializability, sometimes called the gold standard database correctness property. But cloud databases are complicated black boxes, running in a different administrative domain from their clients. Thus, clients might like to know whether the databases are meeting their contract. To that end, we introduce cobra; cobra applies to transactional key-value stores. It is the first system that combines (a) black-box checking, of (b) serializability, while (c) scaling to real-world online transactional processing workloads. The core technical challenge is that the underlying search problem is computationally expensive. Cobra tames that problem by starting with a suitable SMT solver. Cobra then introduces several new techniques, including a new encoding of the validity condition; hardware acceleration to prune inputs to the solver; and a transaction segmentation mechanism that enables scaling and garbage collection. Cobra imposes modest overhead on clients, improves over baselines by 10× in verification cost, and (unlike the baselines) supports continuous verification. Our artifact can handle 2000 transactions/sec, equivalent to 170M/day.

1 Introduction and motivation

A new class of cloud databases has emerged, including Amazon DynamoDB and Aurora [2, 4, 133], Azure CosmosDB [7], CockroachDB [9], YugaByte DB [36], and others [16, 17, 21, 22, 69]. Compared to earlier generations of NoSQL databases (such as Facebook Cassandra, Google Bigtable, and Amazon S3), members of the new class offer the same scalability, availability, replication, and geo-distribution but in addition offer serializable transactions [55, 110]: all transactions appear to execute in a single, sequential order.

Serializability is the gold-standard isolation level [48, 77], and the correctness contract that many applications and programmers implicitly assume: their code would be incorrect if the database provided a weaker contract [137]. Note that serializability encompasses weaker notions of correctness, like basic integrity: if a returned value does not read from a valid write, that will manifest as a non-serializable result. Serializability also implies that the database handles failures robustly: non-tolerated server failures, particularly in the case of a distributed database, are a potential source of non-serializable results.

However, a user of a cloud database can legitimately wonder whether the database in fact provides the promised contract. For one thing, users often have no visibility into a cloud database's implementation. In fact, even when the source code is available [9, 16, 17, 36], that does not necessarily yield visibility: if the database is hosted by someone else, you can't really be sure of its operation. Meanwhile, any internal corruption—as could happen from misconfiguration, operational error, compromise, or adversarial control at any layer of the execution stack—can cause a serializability violation. Beyond that, one need not adopt a paranoid stance ("the cloud as malicious adversary") to acknowledge that it is difficult, as a technical matter, to provide serializability and geo-distribution and geo-replication and high performance under various failures [40, 78, 147]. Doing so usually involves a consensus protocol that interacts with an atomic commit protocol [69, 96, 103]—a complex combination, and hence potentially bug-prone. Indeed, today's production systems have exhibited serializability violations [1, 18, 19, 25, 26] (see also §6.1).

This leads to our core question: how can clients verify the serializability of a black-box database? To be clear, related questions have been addressed before. The novelty in our problem is in combining three aspects:

(a) Black box, unmodified database. In our setting, the database does not "know" it's being checked; the input to the verification machinery will be only the inputs to, and outputs from, the database. This matches the cloud context (even when the database is open source, as noted above), and contrasts with work that checks for isolation or consistency anomalies by using "inside information" [62, 86, 109, 123, 130, 141, 143], for example, access to internal scheduling choices. Also, we target production workloads and standard key-value APIs (§2).

(b) Serializability. We focus on serializability, in contrast to weaker isolation levels. Serializability has a strict variant and a non-strict variant [56, 110]; in the former, the effective transaction order must be consistent with real time. We attend to both variants in this paper. However, the weight is on the non-strict variant, as it poses a more difficult computational problem; the strict variant is "easier" because the real-time constraint diminishes the space of potentially-valid execution schedules. On the one hand, the majority of databases that offer serializability offer the strict variant. On the other hand, checking non-strict serializability is germane, for two reasons. First, some databases claim to provide the non-strict variant (in general [11], or under clock skew [35], or for read-only workloads [32]), while others don't specify the variant [3, 5]. Second, the strict case can degenerate to the non-strict case. Heavy concurrency, for example, means few real-time constraints, so the difficult computational problem re-enters. As a special case, clock drift causes otherwise ordered transactions to be concurrent (§3.5, §6.1).

(c) Scalability. This means, first, scaling to real-world online transactional processing workloads at reasonable cost. It also means incorporating mechanisms that enable a verifier to work incrementally and to keep up with an ever-growing history.

However, aspects (a) and (b) set up a challenge: checking black-box serializability has long been known to be NP-complete [54, 110]. Recent work of Biswas and Enea (BE) [59] lowered the complexity to polynomial time, under natural restrictions (which hold in our context); see also pioneering work by Sinha et al. [124] (§7). However, these two approaches don't meet our goal of scalability. For example, in BE, the number of clients appears in the exponent of the algorithm's running time (§6, §7) (e.g., 14 clients means the algorithm is O(n^14)). Furthermore, even if there were a small number of clients, BE does not include mechanisms for handling a continuous and ever-growing history.

Despite the computational complexity, there is cause for hope: one of the remarkable developments in the field of formal verification has been the use of heuristics to "solve" problems whose general form is intractable. This owes to major advances in solvers (advanced SAT and SMT solvers) [49, 57, 64, 73, 84, 99, 107, 128], coupled with an explosion of computing power. Thus, our guiding intuition is that it ought to be possible to verify serializability in many real-world cases. This paper describes a system called cobra, which starts from this intuition, and provides a solution to the problem posed by (a)–(c).

Cobra applies to transactional key-value stores (everywhere in this paper it says "database", this is what we mean). Cobra consists of a third-party, unmodified database that is not assumed to "cooperate"; a set of legacy database clients that cobra modifies to link to a library; one or more history collectors that are assumed to record the actual requests to and responses from the database; and a verifier that comprehensively checks serializability, in a way that "keeps up" with the database's (average) load. The database is untrusted while the clients, collectors, and verifier are all in the same trust domain (for example, deployed by the same organization).

[…] efficiently infer ordering relationships from a history (§3.1–§3.2). (We prove that cobra's encoding is a valid reduction in Appendix B [132].) Second, cobra uses parallel hardware (our implementation uses GPUs; §5) to compute all-pairs reachability over a graph whose nodes are transactions and whose edges are known precedence relationships; then, cobra resolves some of the constraints efficiently, by testing whether a candidate edge would generate a cycle with an existing path.

2. Scaling to a continuous and ever-growing history (§4). Online cloud databases run in a continuous fashion, where the corresponding history is uninterrupted and grows unboundedly. To support online databases, cobra verifies in rounds. From round to round, the verifier checks serializability on a portion of the history. However, the challenge is that the verifier seemingly needs to involve all history, because serializability does not respect real-time ordering, so future transactions can read from values that (in a real-time view) have been overwritten. To solve this problem, clients issue periodic fence transactions (§4.2). The epochs impose coarse-grained synchronization, creating a window from which future reads, if they are to be serializable, are permitted to read. This allows the verifier to discard transactions prior to the window.

We implement cobra (§5) and experiment with it on production databases with various workloads (§6). Cobra detects all serializability violations we collect from real systems' bug reports. Cobra's core (single-round) verification improves on baselines by 10× in the problem size it can handle for a given time budget. For example, cobra finishes checking 10k transactions in 14 seconds, whereas baselines can handle only 1k or less in the same time budget. For an online database with continuous traffic, cobra achieves a sustainable verification throughput of 2k txn/sec on the workloads that we experiment with (this corresponds to a workload of 170M/day; for comparison, Apple Pay handles 33M txn/day [6], and Visa handles 150M txn/day [33], admittedly for a slightly different notion
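The constraint-pruning step described above (testing whether a candidate precedence edge would close a cycle against precomputed all-pairs reachability) can be illustrated with a small sketch. This is a CPU-only toy, not cobra's GPU implementation; the function names (`transitive_closure`, `resolve_constraints`) and the constraint representation (a pair of alternative edge sets, exactly one of which must hold) are our simplifications of the machinery in §3.

```python
def transitive_closure(n, edges):
    # reach[u][v] is True iff transaction v is reachable from u
    # via known precedence edges.
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    # Floyd-Warshall-style closure over the n transaction nodes.
    for k in range(n):
        for u in range(n):
            if reach[u][k]:
                for v in range(n):
                    if reach[k][v]:
                        reach[u][v] = True
    return reach

def resolve_constraints(n, known_edges, constraints):
    """Each constraint is a pair (a, b) of candidate edge sets; a valid
    serial order must include all of a or all of b. If one option would
    close a cycle with an existing path, the other option is forced."""
    reach = transitive_closure(n, known_edges)
    def closes_cycle(edge_set):
        # Edge (u, v) closes a cycle iff u is already reachable from v.
        return any(u == v or reach[v][u] for u, v in edge_set)
    forced = []
    for a, b in constraints:
        a_bad, b_bad = closes_cycle(a), closes_cycle(b)
        if a_bad and b_bad:
            return None      # no acyclic choice exists: not serializable
        if a_bad:
            forced.append(b)  # b is forced; the solver never sees this constraint
        elif b_bad:
            forced.append(a)
    return forced
```

For example, with known edges 0→1→2, a constraint offering either edge 2→0 or edge 0→2 is resolved to 0→2, since 2→0 would close a cycle; only constraints that survive this kind of pruning need to reach the SMT solver.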
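The fence-transaction idea (§4.2) is only summarized above, but the garbage collection it enables can be sketched with a toy pruning rule. Everything here is hypothetical: the function `prune_history`, the per-client epoch bookkeeping, and the "slowest client minus one epoch" threshold are illustrative stand-ins, not cobra's actual fencing rules, which are defined in §4.

```python
def prune_history(txns, client_epochs):
    """Toy pruning rule: once every client's fence transactions have
    advanced past an epoch window, no future serializable read can read
    from transactions older than that window, so the verifier drops them.

    txns:          list of (txn_id, epoch) pairs from the collectors
    client_epochs: latest fence epoch observed from each client
    """
    frontier = min(client_epochs.values())  # slowest client's epoch
    keep = [t for t, e in txns if e >= frontier - 1]
    drop = [t for t, e in txns if e < frontier - 1]
    return keep, drop
```

Under this rule, with clients at epochs {5, 4}, a transaction tagged with epoch 1 can be discarded, while transactions at epochs 3 and 4 remain in the verifier's working window for the next round.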