FoundationDB Technical Overview v2


Evan Tschannen
• Worked on FoundationDB for 8 years
• Touched every core component of the database
• Recently added multi-region asynchronous replication
• https://www.foundationdb.org
• https://github.com/apple/foundationdb
• Open source, Apache License (v2)

Why are there so many databases?
• Tension between performance, scalability, and consistency
• A wide variety of data models

FoundationDB Performance
• Organizes many instances of a single-process database into a single distributed database
• Read and write throughput is about 90% of the aggregate throughput of the individual instances
• Single-hop read latencies
• Four-hop write latencies

FoundationDB is a…
• Highly performant,
• Scalable,
• Ordered key-value store,
• That supports ACID transactions.

[A sequence of diagrams compares replication schemes across three servers A, B and C. In the first, a writer and a reader each wait for 2 of 3 responses; in the second, the writer waits for 3 of 3 and the reader for only 1 of 3. Both work because the read and write quorums overlap (R + W > N), so a reader always contacts at least one replica holding the latest write. A final diagram adds a server D and a small table of versions ("0: A,B,C", "101: B,C,D") stored in a different database: rather than fixed quorums, the set of servers responsible for the data is recorded per version, so membership can change without weakening reads or writes.]

[A diagram then introduces the basic components of the cluster: Coordinators (filling a ZooKeeper-like failure-handling role), Storage Servers (which use SQLite to serve reads) and Transaction Logs (FoundationDB's own write path). Subsequent diagrams add the Cluster Controller, which is chosen through the Coordinators.]

FoundationDB API
[Two diagrams show a versioned history of the database: at version 100, key A holds "apple". A transaction gets a read version (200), reads A at version 200, buffers a write of "banana" to key B, and commits at version 400. The second diagram adds a concurrent write that sets A to "orange" at version 300, that is, a change to a key the transaction read, made between its read version and its commit version; this is exactly the kind of conflict the resolvers check for.]
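The deck itself contains no code, but the client-visible flow in the diagram maps directly onto the transaction API of the language bindings. The sketch below uses the Python bindings; it assumes a locally running cluster reachable through the default cluster file, the keys A and B and the value "banana" simply mirror the diagram, and the API version should match whatever client library is installed.

```python
import fdb

fdb.api_version(620)   # assumption: use the API version matching your installed client
db = fdb.open()        # connects via the default cluster file

@fdb.transactional
def read_and_write(tr):
    # "Read (A, 200)": served at the transaction's read version.
    a = tr[b'A']
    # "Write (B, banana)": buffered client-side and applied at the commit version.
    tr[b'B'] = b'banana'
    # Keys are ordered, so range reads come naturally.
    for kv in tr.get_range(b'A', b'C'):
        print(kv.key, b'=>', kv.value)
    return a.present()

# The decorator commits when the function returns and retries on
# retryable errors such as transaction conflicts.
read_and_write(db)
```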
[A longer sequence of diagrams walks a transaction through the full architecture of Coordinators, Cluster Controller, Master, Proxies, Resolvers, Transaction Logs and Storage Servers. The client gets a read version (200) and issues its reads, such as (A, 200), directly against the Storage Servers. At commit time it sends its writes (B: banana) to a Proxy, which obtains a commit version (400) from the Master, has the Resolvers check for conflicts with writes committed between the transaction's read version and its commit version, and then makes the transaction durable on the Transaction Logs. The Storage Servers continuously pull committed versions from the Transaction Logs, shown applying versions (200), (300) and (400) in order, and a later Get Read Version returns 400 once that version has committed. The Master also drives Data Distribution, which moves data between Storage Servers.]

FoundationDB Performance (recap)
• Organizes many instances of a single-process database into a single distributed database
• Read and write throughput is about 90% of the aggregate throughput of the individual instances
• Single-hop read latencies
• Four-hop write latencies

Does it work?
• Run the entire distributed database in a single process
• Inject random failures
• Check correctness
• Reproduce errors

https://www.foundationdb.org
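The deck stops at the bullet list above. Purely as an illustration of the idea, and not FoundationDB's actual simulator (which is built on its Flow C++ framework and is far more thorough), the toy sketch below drives a deliberately naive replication scheme with a seeded random number generator, injects failures, and checks a durability invariant at the end; because every random choice derives from the seed, any failing seed replays the identical failure.

```python
import random

REPLICAS, QUORUM = 3, 2

def simulate(seed, steps=500):
    """Toy deterministic simulation: naive 3-way replication plus fault injection."""
    rng = random.Random(seed)
    disks = [set() for _ in range(REPLICAS)]   # durable per-replica state
    up = [True] * REPLICAS
    committed = []                             # values acknowledged to the "client"

    for value in range(steps):
        # Fault injection: replicas crash or recover, and occasionally lose a disk.
        for r in range(REPLICAS):
            if rng.random() < 0.01:
                up[r] = not up[r]
            if rng.random() < 0.002:
                disks[r].clear()
        # Naive write path: acknowledge once a quorum of live replicas stored the value.
        acked = [r for r in range(REPLICAS) if up[r]]
        for r in acked:
            disks[r].add(value)
        if len(acked) >= QUORUM:
            committed.append(value)

    # Correctness check: every acknowledged value should still exist on a quorum.
    # The naive scheme never re-replicates after a disk loss, so some seeds fail
    # the check, which is exactly the kind of bug such a simulator exists to find.
    for value in committed:
        if sum(value in d for d in disks) < QUORUM:
            return "durability violation at value %d" % value
    return "ok"

if __name__ == "__main__":
    for seed in range(10):
        print("seed %d: %s" % (seed, simulate(seed)))
```

Running it prints "ok" for some seeds and a durability violation for others; rerunning a failing seed reproduces the exact same violation, which is what makes bugs found this way debuggable.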