Fast Transactions in Distributed and Highly Available Databases, by Yi Lu
Total pages: 16
File type: PDF; size: 1020 KB
Recommended publications
- Big Data Velocity in Plain English
By John Ryan, Data Warehouse Solution Architect. Contents: The Requirement; What's the Problem?; Components Needed (Data Capture, Transformation, Storage and Analytics); The Traditional Solution; The NewSQL Based Solution; NewSQL Advantage; Thank You; About the Author.

The Requirement. The assumed requirement is the ability to capture, transform and analyse data at potentially massive velocity in real time. This involves capturing data from millions of customers or electronic sensors, and transforming and storing the results for real-time analysis on dashboards. The solution must minimise latency (the delay between a real-world event and its impact upon a dashboard) to under a second. Typical applications include:
• Monitoring machine sensors: using sensors embedded in industrial machines or vehicles, typically referred to as the Internet of Things (IoT). For example, Progressive Insurance uses real-time speed and vehicle braking data to help classify accident risk and deliver appropriate discounts. Similar technology is used by logistics giant FedEx, whose SenseAware technology provides near real-time parcel tracking.
• Fraud detection: assessing the risk of credit card fraud prior to authorising or declining the transaction. This can be based upon a simple report of a lost or stolen card or, more likely, an analysis of aggregate spending behaviour combined with machine learning techniques.
• Clickstream analysis: producing real-time analysis of user website clicks to dynamically deliver pages, recommend products or services, or deliver individually targeted advertising.

What's the Problem? The primary challenge for real-time systems architects is the potentially massive throughput required, which could exceed a million transactions per second.
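The capture-transform-analyse pipeline the eBook outlines can be pictured with a minimal in-process sketch; the event fields and stage names below are assumptions for illustration, and a real deployment would put a message bus and a NewSQL store between the stages rather than in-memory channels.

```go
package main

import (
	"fmt"
	"time"
)

// event is a hypothetical sensor reading; a real pipeline would define
// one per source (vehicle telemetry, card transactions, clicks).
type event struct {
	sensorID string
	speedKmh float64
	at       time.Time
}

// capture emits raw events. In production this stage would read from
// a message bus rather than generate values in-process.
func capture(out chan<- event) {
	for i := 0; i < 3; i++ {
		out <- event{sensorID: "veh-42", speedKmh: float64(90 + i), at: time.Now()}
	}
	close(out)
}

// transform filters and enriches events before storage and analytics.
func transform(in <-chan event, out chan<- event) {
	for e := range in {
		if e.speedKmh >= 0 { // drop obviously bad readings
			out <- e
		}
	}
	close(out)
}

func main() {
	raw := make(chan event)
	clean := make(chan event)
	go capture(raw)
	go transform(raw, clean)
	// The final stage stands in for "storage and analytics".
	for e := range clean {
		fmt.Printf("%s: %.0f km/h at %s\n", e.sensorID, e.speedKmh, e.at.Format(time.RFC3339))
	}
}
```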
- Quantum DXi4500 User's Guide
DXi4500 User's Guide, 6-66904-01 Rev C, August 2010. Product of USA. Quantum Corporation provides this publication "as is" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability or fitness for a particular purpose. Quantum Corporation may revise this publication from time to time without notice.

COPYRIGHT STATEMENT: © 2010 Quantum Corporation. All rights reserved. Your right to copy this manual is limited by copyright law. Making copies or adaptations without prior written authorization of Quantum Corporation is prohibited by law and constitutes a punishable violation of the law.

TRADEMARK STATEMENT: Quantum, the Quantum logo, DLT, DLTtape, the DLTtape logo, Scalar, and StorNext are registered trademarks of Quantum Corporation, registered in the U.S. and other countries. Backup. Recovery. Archive. It's What We Do., the DLT logo, DLTSage, DXi, DXi-Series, Dynamic Powerdown, FastSense, FlexLink, GoVault, MediaShield, Optyon, Pocket-sized. Well-armored, SDLT, SiteCare, SmartVerify, StorageCare, Super DLTtape, SuperLoader, and Vision are trademarks of Quantum. LTO and Ultrium are trademarks of HP, IBM, and Quantum in the U.S. and other countries. All other trademarks are the property of their respective companies. Specifications are subject to change without notice.

Contents: Preface; Chapter 1, DXi4500 System Description: Overview; Features and Benefits; Data Reduction (Data Deduplication, Compression, Space Reclamation); Remote Replication; DXi4500 System.
- Replication Using Group Communication Over a Partitioned Network
Thesis submitted for the degree "Doctor of Philosophy" by Yair Amir, submitted to the Senate of the Hebrew University of Jerusalem (1995). This work was carried out under the supervision of Professor Danny Dolev.

Acknowledgments. I am deeply grateful to Danny Dolev, my advisor and mentor. I thank Danny for believing in my research, for spending so many hours on it, and for giving it the theoretical touch. His warm support and patient guidance helped me through. I hope I managed to adopt some of his professional attitude and integrity. I thank Dalia Malki for her help during the early stages of the Transis project. Thanks to Idit Keidar for helping me sharpen some of the issues of the replication server. I enjoyed my collaboration with Ofir Amir on developing the coloring model of the replication server. Many thanks to Roman Vitenberg for his valuable insights regarding the extended virtual synchrony model and the replication algorithm. I benefited a lot from many discussions with Ahmad Khalaila regarding distributed systems and other issues. My thanks go to David Breitgand, Gregory Chokler, Yair Gofen, Nabil Huleihel and Rimon Orni for their contribution to the Transis project and to my research. I am grateful to Michael Melliar-Smith and Louise Moser from the Department of Electrical and Computer Engineering, University of California, Santa Barbara. During two summers, several mutual visits and extensive electronic correspondence, Louise and Michael were involved in almost every aspect of my research, and unofficially served as my co-advisors. The work with Deb Agarwal and Paul Ciarfella on the Totem protocol contributed a lot to my understanding of high-speed group communication.
- ACID Compliant Distributed Key-Value Store
By Lakshmi Narasimhan Seshan, Rajesh Jalisatgi and Vijaeendra Simha G A.

Abstract: Built a fault-tolerant and strongly-consistent key/value storage service using the existing RAFT implementation. Add ACID transaction capability to the service using a 2PC variant. Build on the raftexample[1] (available as part of etcd) to add atomic transactions. This supports sharding of keys and concurrent transactions on the sharded KV store. By implementing a transactional distributed KV store, we gain working knowledge of different protocols and the complexities of making them work together.

Keywords: ACID, Key-Value, 2PC, sharding, 2PL, transactions.

I. Introduction: Distributed KV stores have become a norm with the advent of microservices architecture and the NoSQL revolution. Initially, KV stores discarded ACID properties, as that was thought to be easier to implement. All components run HTTP and gRPC endpoints: HTTP endpoints are for debugging/configuration, while gRPC endpoints are used for communication between components.

Replica Manager (RM): the RM is the service discovery part of the system. It is initiated with each of the Replica Servers available in the system. Users have to instantiate the RM with the number of shards that they want the keys to be split into; this currently cannot be changed dynamically while the RM is running, and Replica Servers cannot be added or removed once the RM is up and running. However, Replica Servers can go down and come back, and the RM can detect this. Each shard leader updates its information to the RM, while the DTM leader updates its information to the RM. (A key-to-shard mapping of this kind is sketched below.)
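Since the report fixes the shard count at Replica Manager start-up, a simple hash-based mapping is enough to route keys to shard leaders. The `shardFor` function below is an illustrative assumption, not the project's actual API; growing the cluster would require consistent hashing or explicit resharding.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps a key to one of nShards shards. Because the shard
// count is fixed when the Replica Manager starts, a plain modulo over
// a stable hash suffices; changing the count would require resharding.
func shardFor(key string, nShards int) int {
	h := fnv.New32a()
	h.Write([]byte(key)) // fnv's Write never returns an error
	return int(h.Sum32()) % nShards
}

func main() {
	const nShards = 4 // fixed at RM start-up, per the design above
	for _, k := range []string{"alice", "bob", "carol"} {
		fmt.Printf("key %q -> shard %d\n", k, shardFor(k, nShards))
	}
}
```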
- Lecture 15: March 28
CMPSCI 677 Distributed and Operating Systems, Spring 2019. Lecturer: Prashant Shenoy.

15.1 Overview. This section covers distributed transactions: ACID properties, concurrency control, serializability, and two-phase locking.

15.2 Transactions. Transactions are used widely in databases as they provide atomicity (the all-or-nothing property). Atomicity is the property by which either all operations inside a transaction execute successfully or none of them do. Thus, if a process crashes during a transaction, all operations that happened during that transaction must be undone. Transactions are usually implemented using locks; without locks there would be interleaving of operations from two transactions, causing overwrites and inconsistencies. It has to look as if all operations of one transaction execute at once, while ensuring that other, external operations are not executing at the same time. This is especially important in real-world applications such as banking.

15.2.1 ACID. Most transaction systems provide the following properties:
• Atomicity: all or nothing (all instructions execute or none; it cannot happen that some instructions execute and some do not).
• Consistency: a transaction takes the system from one consistent (valid) state to another.
• Isolation: a transaction executes as if it were the only transaction in the system. This ensures that the concurrent execution of transactions results in a system state that would be obtained if the transactions were executed sequentially (serializability).
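The two-phase locking named in the overview can be illustrated with a small in-memory sketch: a transaction acquires every lock it needs (growing phase) before releasing any (shrinking phase). This is a minimal illustration, not the course's own code; the toy account type and the fixed lock order used to avoid deadlock are assumptions of the sketch.

```go
package main

import (
	"fmt"
	"sync"
)

// Account is a lockable resource with a stable id used for lock ordering.
type Account struct {
	id      int
	mu      sync.Mutex
	balance int
}

// transfer moves amount from a to b under two-phase locking: the
// growing phase acquires all locks the transaction needs, and the
// shrinking phase releases them only after all updates are done.
// Locks are always taken in id order to avoid deadlock.
func transfer(a, b *Account, amount int) {
	first, second := a, b
	if b.id < a.id {
		first, second = b, a
	}
	first.mu.Lock() // growing phase
	second.mu.Lock()
	a.balance -= amount // intermediate state invisible to other transactions
	b.balance += amount
	second.mu.Unlock() // shrinking phase: no new locks after this point
	first.mu.Unlock()
}

func main() {
	a := &Account{id: 1, balance: 100}
	b := &Account{id: 2, balance: 50}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			transfer(a, b, 5)
		}()
	}
	wg.Wait()
	fmt.Println(a.balance, b.balance) // 50 100: the total is preserved
}
```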
- F1 Query: Declarative Querying at Scale
By Bart Samwel, John Cieslewicz, Ben Handy, Jason Govig, Petros Venetis, Chanjun Yang, Keith Peters, Jeff Shute, Daniel Tenedorio, Himani Apte, Felix Weigel, David Wilhite, Jiacheng Yang, Jun Xu, Jiexing Li, Zhan Yuan, Craig Chasseur, Qiang Zeng, Ian Rae, Anurag Biyani, Andrew Harn, Yang Xia, Andrey Gubichev, Amr El-Helw, Orri Erling, Zhepeng Yan, Mohan Yang, Yiqun Wei, Thanh Do, Colin Zheng, Goetz Graefe, Somayeh Sardashti, Ahmed M. Aly, Divy Agrawal, Ashish Gupta and Shiv Venkataraman, Google LLC.

ABSTRACT: F1 Query is a stand-alone, federated query processing platform that executes SQL queries against data stored in different file-based formats as well as different storage systems at Google (e.g., Bigtable, Spanner, Google Spreadsheets, etc.). F1 Query eliminates the need to maintain the traditional distinction between different types of data processing workloads by simultaneously supporting: (i) OLTP-style point queries that affect only a few records; (ii) low-latency OLAP querying of large amounts of data; and (iii) large ETL pipelines. F1 Query has also significantly reduced the ...

1. INTRODUCTION: The data processing and analysis use cases in large organizations like Google exhibit diverse requirements in data sizes, latency, data sources and sinks, freshness, and the need for custom business logic. As a result, many data processing systems focus on a particular slice of this requirements space, for instance on either transactional-style queries, medium-sized OLAP queries, or huge Extract-Transform-Load (ETL) pipelines. Some systems are highly extensible, while others are not. Some systems function mostly as a closed silo, while others can easily pull in data from other sources.
- Containers at Google
Build What's Next: A Google Cloud Perspective. Thomas Lichtenstein, Customer Engineer, Google Cloud.

[Slide residue: seven Google Cloud products with over a billion users each; a map of Google Cloud offices (Hamburg, Berlin, Frankfurt, Munich, Vienna, Zurich) and the Google Cloud region in Germany.] Google Cloud in DACH: a new cloud region in Germany with 3 zones (over 50% latency reduction), a commitment to GDPR compliance, and a partnership with […].

Conrad Electronic case study: Conrad is disrupting online retail with new services for mobility and IoT-enabled devices. Its IoT platform connects nearly 50 brands with thousands of products, and manages 250M+ data sets per week and 3.5M searches per month. "We found that Google Ads has the best system for precisely targeting customer segments in both the B2B and B2C spaces. It used to be hard to gain the right insights to accurately measure our marketing spend and impacts. With Google Analytics, we can better connect the omnichannel customer journey." (Aleš Drábek, Chief Digital and Disruption Officer, Conrad Electronic.) Solution: as Conrad transitions from a B2C retailer to an advanced B2B and B2C platform for electronic products, it is using Google solutions to grow its customer base, develop on a reliable cloud infrastructure, and digitize its workplaces and retail stores, pursuing a mobile-first strategy, 5x the IoT connections of competitors, and an "automate everything" approach. Products used: G Suite, Google Ads, Google Analytics, Google Chrome Enterprise, Google Chromebooks, Google Cloud Translation API, Google Cloud Vision API, Google Home, Apigee. Industry: Retail; Region: EMEA.
- ShardFS vs. IndexFS: Replication vs. Caching Strategies for Distributed Metadata Management in Cloud Storage Systems
By Lin Xiao, Kai Ren, Qing Zheng and Garth A. Gibson, Carnegie Mellon University ({lxiao, kair, zhengq, garth}@cs.cmu.edu).

Abstract: The rapid growth of cloud storage systems calls for fast and scalable namespace processing. While few commercial file systems offer anything better than federating individually non-scalable namespace servers, a recent academic file system, IndexFS, demonstrates scalable namespace processing based on client caching of directory entries and permissions (directory lookup state) with no per-client state in servers. In this paper we explore explicit replication of directory lookup state in all servers as an alternative to caching this information in all clients. Both eliminate most repeated RPCs to different servers in order to resolve hierarchical permission tests. Our realization for server replicated directory lookup state, ShardFS, employs a novel file system specific hybrid ...

1. Introduction: Modern distributed storage systems commonly use an architecture that decouples metadata access from file read and write operations [21, 25, 46]. In such a file system, clients acquire permission and file location information from metadata servers before accessing file contents, so slower metadata service can degrade performance for the data path. Popular new distributed file systems such as HDFS [25] and the first generation of the Google file system [21] have used centralized single-node metadata services and focused on scaling only the data path. However, the single-node metadata server design limits the scalability of the file system in terms of the number of objects stored and concurrent accesses to the file system [40].
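The client-caching strategy attributed to IndexFS can be pictured with a minimal sketch of a client-side cache of directory lookup state with leases; the struct fields and lease handling below are assumptions for illustration, not the actual IndexFS or ShardFS data structures.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// lookupState is the directory lookup state a client might cache:
// permissions plus the metadata server responsible for the directory.
type lookupState struct {
	server    string
	mode      uint32
	expiresAt time.Time
}

// lookupCache keeps directory lookup state on the client so repeated
// path resolutions avoid one RPC per path component per server.
type lookupCache struct {
	mu      sync.Mutex
	entries map[string]lookupState
}

func newLookupCache() *lookupCache {
	return &lookupCache{entries: make(map[string]lookupState)}
}

// get returns cached state for dir, or ok=false when the entry is
// missing or its lease expired and the client must RPC a server.
func (c *lookupCache) get(dir string) (lookupState, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	s, ok := c.entries[dir]
	if !ok || time.Now().After(s.expiresAt) {
		return lookupState{}, false
	}
	return s, true
}

// put installs state fetched from a metadata server with a short lease.
func (c *lookupCache) put(dir string, s lookupState) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[dir] = s
}

func main() {
	c := newLookupCache()
	c.put("/home/user", lookupState{server: "md-3", mode: 0o755, expiresAt: time.Now().Add(time.Second)})
	if s, ok := c.get("/home/user"); ok {
		fmt.Println("cache hit: directory served by", s.server)
	}
}
```

ShardFS makes the opposite trade: it replicates this state on every server instead of caching it on every client, which the paper argues shifts cost from cache misses and invalidations onto directory mutations.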
- Spanner: Becoming a SQL System
By David F. Bacon, Nathan Bales, Nico Bruno, Brian F. Cooper, Adam Dickinson, Andrew Fikes, Campbell Fraser, Andrey Gubarev, Milind Joshi, Eugene Kogan, Alexander Lloyd, Sergey Melnik, Rajesh Rao, David Shue, Christopher Taylor, Marcel van der Holst and Dale Woodford, Google, Inc.

ABSTRACT: Spanner is a globally-distributed data management system that backs hundreds of mission-critical services at Google. Spanner is built on ideas from both the systems and database communities. The first Spanner paper, published at OSDI'12, focused on the systems aspects such as scalability, automatic sharding, fault tolerance, consistent replication, external consistency, and wide-area distribution. This paper highlights the database DNA of Spanner. We describe distributed query execution in the presence of resharding, query restarts upon transient failures, range extraction that drives query routing and index seeks, and the improved blockwise-columnar storage format.

In this paper, we focus on the "database system" aspects of Spanner, in particular how query execution has evolved and forced the rest of Spanner to evolve. Most of these changes have occurred since [5] was written, and in many ways today's Spanner is very different from what was described there. A prime motivation for this evolution towards a more "database-like" system was driven by the experiences of Google developers trying to build on previous "key-value" storage systems. The prototypical example of such a key-value system is Bigtable [4], which continues to see massive usage at Google for a variety of applications. However, developers of many OLTP applications found it ...
- DiskReduce: Replication as a Prelude to Erasure Coding in Data-Intensive Scalable Computing
By Bin Fan, Wittawat Tantisiriroj, Lin Xiao and Garth Gibson ({binfan, wtantisi, lxiao, garth.gibson}@cs.cmu.edu). CMU-PDL-11-112, October 2011. Parallel Data Laboratory, Carnegie Mellon University, Pittsburgh, PA 15213-3890.

Acknowledgements: We would like to thank several people who made significant contributions. Robert Chansler, Raj Merchia and Hong Tang from Yahoo! provided various help. Dhruba Borthakur from Facebook provided statistics and feedback. Bin Fu and Brendan Meeder gave us their scientific applications and data-sets for experimental evaluation. The work in this paper is based on research supported in part by the Betty and Gordon Moore Foundation, by the Department of Energy under award number DE-FC02-06ER25767, by the Los Alamos National Laboratory, by the Qatar National Research Fund under award number NPRP 09-1116-1-172, under contract number 54515-001-07, by the National Science Foundation under awards CCF-1019104, CCF-0964474, OCI-0852543, CNS-0546551 and SCI-0430781, and by Google and Yahoo! research awards. We also thank the members and companies of the PDL Consortium (including APC, EMC, Facebook, Google, Hewlett-Packard, Hitachi, IBM, Intel, Microsoft, NEC, NetApp, Oracle, Panasas, Riverbed, Samsung, Seagate, STEC, Symantec, and VMware) for their interest, insights, feedback, and support.

Keywords: Distributed File System, RAID, Cloud Computing, DISC.

Abstract: The first generation of Data-Intensive Scalable Computing file systems such as Google File System and Hadoop Distributed File System employed n (n ≥ 3) replications for high data reliability, therefore delivering users only about 1/n of the total storage capacity of the raw disks.
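The capacity arithmetic behind the abstract's 1/n figure is easy to make concrete. The sketch below compares n-way replication with a (k+m) erasure code; the (6+2) parameters are illustrative, not the report's evaluated configuration.

```go
package main

import "fmt"

// Usable fraction of raw capacity under the two schemes the report
// compares. The formulas follow directly from the definitions: n-way
// replication stores n copies of every byte, while a (k+m) erasure
// code stores k data fragments plus m parity fragments per stripe.
func replicationEfficiency(n int) float64 { return 1 / float64(n) }

func erasureEfficiency(k, m int) float64 { return float64(k) / float64(k+m) }

func main() {
	// Triplication, as in early GFS/HDFS: users see about 1/3 of raw disk.
	fmt.Printf("3-way replication: %.1f%% usable\n", 100*replicationEfficiency(3))
	// A RAID-6-like (6+2) code also survives two failures but leaves
	// far more usable capacity, which is DiskReduce's motivation.
	fmt.Printf("(6+2) erasure code: %.1f%% usable\n", 100*erasureEfficiency(6, 2))
}
```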
- Distributed Transactions
Distributed Systems 600.437: Distributed Transactions. Department of Computer Science, The Johns Hopkins University. Yair Amir, Fall 16, Lecture 6.

Further reading: Concurrency Control and Recovery in Database Systems, P. A. Bernstein, V. Hadzilacos and N. Goodman, Addison Wesley, 1987; Transaction Processing: Concepts and Techniques, Jim Gray and Andreas Reuter, Morgan Kaufmann Publishers, 1993.

[Slide: a transaction processing system, with clients sending messages to the TPS, messages to the outside world, real actions (firing a missile), and a database.]

Basic definition. Transaction: a collection of operations on the physical and abstract application state, with the following properties: Atomicity, Consistency, Isolation, Durability (the ACID properties of a transaction).

Atomicity. Changes to the state are atomic: a jump from the initial state to the result state without any observable intermediate state, with all-or-nothing (commit/abort) semantics. Changes include database changes, messages to the outside world, and actions on transducers (testable/untestable).

Consistency. The transaction is a correct transformation of the state. This means that the transaction is a correct program.

Isolation. Even though transactions execute concurrently, it appears to the outside observer as if they execute in some serial order. Isolation is required to guarantee consistent input, which is needed for a consistent program to provide consistent output.

Durability. Once a transaction completes successfully (commits), its changes to the state survive failures (what is the failure model?).
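The all-or-nothing (commit/abort) semantics above are classically achieved across distributed participants with two-phase commit. The coordinator sketch below is an illustration, not the lecture's own material; it omits the timeouts, stable-storage logging, and recovery that a real protocol requires.

```go
package main

import "fmt"

// participant is anything that can prepare and then commit or abort.
type participant interface {
	prepare() bool // phase 1: "can you commit?"
	commit()       // phase 2a: everyone voted yes
	abort()        // phase 2b: someone voted no
}

// twoPhaseCommit drives the coordinator: the transaction commits only
// if every participant votes yes, giving all-or-nothing atomicity.
func twoPhaseCommit(ps []participant) bool {
	for _, p := range ps {
		if !p.prepare() {
			for _, q := range ps {
				q.abort() // undo phase-1 work everywhere
			}
			return false
		}
	}
	for _, p := range ps {
		p.commit()
	}
	return true
}

// node is a toy participant that votes according to its ok flag.
type node struct {
	name string
	ok   bool
}

func (n *node) prepare() bool { return n.ok }
func (n *node) commit()       { fmt.Println(n.name, "committed") }
func (n *node) abort()        { fmt.Println(n.name, "aborted") }

func main() {
	ok := twoPhaseCommit([]participant{&node{"a", true}, &node{"b", true}})
	fmt.Println("transaction committed:", ok) // true: both voted yes
}
```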
- PS Non-Standard Database Systems Overview
By Daniel Kocher, March 2019. This document gives a brief overview of three categories of non-standard database systems (DBS) in the context of this class. For each category, we provide you with a motivation, a discussion about the key properties, a list of commonly used implementations (proprietary and open-source/freely-available), and recommendation(s) for your project. For your convenience, we also recap some relevant terms (e.g., ACID, OLTP, ...).

Terminology.

OLTP: online transaction processing. OLTP systems are highly optimized to process (1) short queries (operating only on a small portion of the database) and (2) small transactions (inserting/updating few tuples per relation). Typically, OLTP includes insertions, updates, and deletions. The main focus of OLTP systems is on low response time and high throughput (in terms of transactions per second). Databases in OLTP systems are highly normalized [5].

OLAP: online analytical processing. OLAP systems analyze complex data (usually from a data warehouse). Typically, queries in an OLAP system may run for a long time (as opposed to queries in OLTP systems) and the number of transactions is usually low. Furthermore, typical OLAP queries are read-only and touch a large portion of the database (e.g., scan or join large tables). The main focus of OLAP systems is on low response time. Databases in OLAP systems are often de-normalized intentionally [5].

ACID: traditional relational database systems guarantee ACID properties:
– Atomicity: all or no operation(s) of a transaction are reflected in the database.
– Consistency: isolated execution of a transaction preserves consistency of the database.