FalconDB: Blockchain-Based Collaborative Database

Total Pages: 16

File Type: PDF, Size: 1020 KB

FalconDB: Blockchain-based Collaborative Database. Yanqing Peng (University of Utah), Min Du (UC Berkeley), Feifei Li (University of Utah), Raymond Cheng (UC Berkeley), Dawn Song (UC Berkeley).

ABSTRACT

Nowadays an emerging class of applications is based on collaboration over a shared database among different entities. However, existing solutions for shared databases may require trust in other parties, have hardware demands that are unaffordable for individual users, or have relatively low performance. In other words, there is a trilemma among security, compatibility and efficiency. In this paper, we present FalconDB, which enables different parties with limited hardware resources to efficiently and securely collaborate on a database. FalconDB adopts database servers with verification interfaces accessible to clients and stores the digests for query/update authentication on a blockchain. Using the blockchain as a consensus platform and a distributed ledger, FalconDB is able to work without the parties having to trust one another. Meanwhile, FalconDB requires only minimal storage on each client, and provides anywhere-available, real-time and concurrent access to the database. As a result, FalconDB overcomes the disadvantages of previous solutions, and enables individual users to participate in the collaboration with high efficiency, low storage cost and blockchain-level security guarantees.

ACM Reference Format: Yanqing Peng, Min Du, Feifei Li, Raymond Cheng, and Dawn Song. 2020. FalconDB: Blockchain-based Collaborative Database. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data (SIGMOD'20), June 14-19, 2020, Portland, OR, USA. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3318464.3380594

© 2020 Association for Computing Machinery. ACM ISBN 978-1-4503-6735-6/20/06. https://doi.org/10.1145/3318464.3380594

1 INTRODUCTION

The growth of the Internet has triggered tremendous opportunities for data cooperation. Many applications could benefit from a shared database among multiple parties, such as collaborative benchmarking [4], crowdsourcing [43], sharing economy [5], cooperative scientific computation [14], and collaborative machine learning [7].

In such scenarios, a key component is a shared database management system that allows each party to execute updates and queries on the database while maintaining a consistent view among all participants in the network. Many projects require collaboration among a group of individual users. As individual participants, they may need to collaborate through personal devices with limited storage and computation power. Furthermore, such devices could easily be compromised and become malicious. Therefore, it is essential to design a shared database that enables individual users to collaborate without any mutual trust.

Traditionally, individual users with limited computational power and local storage utilize a centralized server to facilitate collaboration. Such a service enables anywhere-available, real-time, and concurrent access to the database. However, it requires fully trusting the central server, which is expected to correctly execute all requests. One immediate concern is that a malicious server can fool any client without being detected. Another issue arises on the client side: a client may manipulate the records arbitrarily for its own interest. A sophisticated mechanism must be adopted to prevent clients from issuing undesired updates.

The emergence of blockchain techniques brings practical solutions for handling Byzantine failures [26], in which a party may behave arbitrarily. Specifically, a permissioned blockchain platform can be viewed as a distributed system in which a chain of blocks keeps immutable records. The blocks of records are agreed upon by all blockchain nodes via a consensus protocol that tolerates Byzantine failures. Such platforms include Hyperledger [2], Tendermint [8], HotStuff [47] and BigchainDB [33]. This design does not require a centralized server, and works in an untrusted environment with up to a small proportion (e.g., 1/3) of nodes being malicious. Undesired operations such as malicious updates can be denied by the system.

Although blockchain systems have attracted much attention, it is still very rare for an individual user to participate as a node, due to the high cost. First, blockchain nodes need to locally store a full copy of the blocks, which could easily be hundreds of gigabytes, potentially exhausting the local disk space of individual users. Second, blockchain nodes must maintain network communication with others to promptly receive and validate new blocks, which consumes both network bandwidth and computation power. Finally, since there is no longer a service provider to facilitate queries, users have to execute queries locally with their personal hardware; executing complex queries with limited CPU power could be unacceptably slow.

The high cost of running blockchain nodes limits the compatibility of blockchain. The concept of lightweight clients has been proposed to alleviate the cost for individual participants [35]. Lightweight clients communicate with full nodes to update or query the blockchain database, so their capabilities are constrained by the APIs exposed by the full nodes. Moreover, query integrity needs to be ensured if the full nodes are not trustworthy [45]. A baseline solution is to leverage smart contracts [22]: a user submits a query to a smart contract, all full nodes execute the query and run a Byzantine fault tolerance (BFT) consensus protocol on the query result, and once consensus is reached the result is committed to the blockchain and revealed to the client. This approach guarantees integrity when a majority of nodes are honest. However, it has limited throughput, high latency, high gas consumption, and introduces privacy concerns.

None of the previous solutions is able to simultaneously achieve security, compatibility and efficiency. This becomes a paramount trilemma when we need all three properties. Consider the case of charitable giving, where a large number of donors make donations to multiple non-profit organizations (NGOs). An NGO needs to record each received donation, while an individual donor may want to audit how an NGO is spending her money. In fact, such a database already exists [13], where a centralized database server decides the execution results of all updates and queries. In such a design, the database host is able to hide any possible corruption or mis- [...] a traceable and tamper-free historical record of charitable giving [40].

Our contribution. We present FalconDB, a blockchain database that has low hardware requirements (e.g., storage, computation, bandwidth) on individual clients, achieves query efficiency as high as a centralized database design, and ensures security guarantees as strong as blockchain level. In FalconDB, a blockchain node can either be a server node that stores the entire database, or a client node that uses the database by sending query and update requests to the server nodes. To grant clients the ability to verify whether an update or query is correctly executed, the database contents are stored with authenticated data structures (ADS) on server nodes. The ADS generates a small digest for the current database content, and the clients can then use this digest to authenticate the results returned by the server. The servers and clients are synchronized over a decentralized blockchain network. Each server node is a full blockchain node that stores the entire database and the blockchain blocks, while each client node stores only the block headers. The newest block header contains the digest of the current database content, which the clients can access for authentication. With this digest, FalconDB can tolerate up to 1/3 of the total blockchain nodes being malicious, and is able to proceed even if all full nodes except one are dishonest, which is enabled by the authenticated database design.

Note that many previous works on outsourced databases (ODB) [21, 28, 29, 34, 36, 37, 46, 49, 50] have studied the problem of query authentication, which validates the results of the database server with essential digests. Unfortunately, they cannot be directly applied to the collaborative database scenario, as we will show later in Section 2.3.1.

We summarize our contributions as follows.

• We propose and implement FalconDB, which, to the best of our knowledge, is the first platform that enables individual users to collaborate on a database with strong security guarantees, low storage cost, and high efficiency.

• FalconDB leverages a blockchain platform which tolerates up to 1/3 of participants being malicious, as well as employs a temporal [...]
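To make the digest-based verification idea concrete, the sketch below shows, under simplifying assumptions, how a client that keeps only a digest from a block header could check a record returned by an untrusted server. It uses a plain Merkle tree; FalconDB's actual ADS is more sophisticated, and all function names and the record layout here are illustrative inventions, not the paper's interface.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a simple binary Merkle tree over the given leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from the leaf at `index` up to the root (the server's job)."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root) -> bool:
    """Client-side check: recompute the root from the returned record and proof."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# A client stores only the digest (e.g., taken from the newest block header).
records = [b"donation:alice:100", b"donation:bob:250", b"donation:carol:75"]
digest = merkle_root(records)                  # what the block header would carry
proof = merkle_proof(records, 1)               # produced by a possibly untrusted server
assert verify(records[1], proof, digest)       # accept only if the proof checks out
assert not verify(b"donation:bob:999", proof, digest)  # a tampered record is rejected
```

The point of the sketch is only the division of labor: servers store the data and produce proofs, while clients store a constant-size digest and still detect tampered answers.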
Recommended publications
  • Failures in DBMS
Chapter 11: Database Recovery. Failures in DBMS: two common kinds of failures are system failures (e.g., a power outage), which affect all transactions currently in progress but do not physically damage the data (a soft crash), and media failures (e.g., a head crash on the disk), which damage the database (a hard crash) and require backup data. A recovery scheme is responsible for handling failures and restoring the database to a consistent state. Recovery means recovering the database itself; a recovery algorithm has two parts: actions taken during normal operation to ensure the system can recover from a failure (e.g., backups, a log file), and actions taken after a failure to restore the database to a consistent state. We will discuss (briefly) transactions and transaction recovery, and system recovery. A database is updated by processing transactions that result in changes to one or more records. A user's program may carry out many operations on the data retrieved from the database, but the DBMS is only concerned with the data read from and written to the database. The DBMS's abstract view of a user program is a sequence of transactions (reads and writes). To understand database recovery, we must first understand the concept of transaction integrity. A transaction is considered a logical unit of work: it starts with BEGIN TRANSACTION, ends with COMMIT, and execution errors trigger ROLLBACK. Assume we want to transfer $100 from one bank account (A) to another (B): UPDATE Account_A SET Balance = Balance - 100; UPDATE Account_B SET Balance = Balance + 100; We want these two operations to appear as a single atomic action, to avoid inconsistent states of the database in between the two updates, and obviously we cannot allow the first UPDATE to be executed and the second not, or vice versa.
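To make the excerpt's bank-transfer example concrete, here is a minimal sketch of the same transfer run as one atomic transaction, using Python's built-in sqlite3 module. The table name, column names, and starting balances are illustrative assumptions, not part of the original slides.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 500), ("B", 200)])
conn.commit()

try:
    # BEGIN TRANSACTION ... COMMIT: both updates succeed or neither does.
    with conn:  # the connection context manager commits on success
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'A'")
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE name = 'B'")
except sqlite3.Error:
    # On any execution error the context manager issues a ROLLBACK,
    # so the database never exposes the in-between state.
    pass

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'A': 400, 'B': 300}
```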
  • Blockchain Database for a Cyber Security Learning System
Session ETD 475. Blockchain Database for a Cyber Security Learning System. Sophia Armstrong (Department of Computer Science, College of Engineering and Technology, East Carolina University), Te-Shun Chou (Department of Technology Systems, College of Engineering and Technology, East Carolina University), John Jones (College of Engineering and Technology, East Carolina University). Abstract: Our cyber security learning system involves an interactive environment for students to practice executing different attack and defense techniques relating to cyber security concepts. We intend to use a blockchain database to secure data from this learning system. The data being secured are students' scores accumulated by successful attacks or defenses against the other students' implementations. As more professionals are departing from traditional relational databases, the enthusiasm around distributed ledger databases is growing, specifically blockchain. With many available platforms applying blockchain structures, it is important to understand how this emerging technology is being used, with the goal of utilizing it for our learning system. In order to successfully secure the data and ensure it is tamper-resistant, an investigation of blockchain technology use cases must be conducted. In addition, this paper defines the primary characteristics of the emerging distributed ledger, or blockchain, technology to ensure we effectively harness it to secure our data. Moreover, we explore using a blockchain database for our data. 1. Introduction. New buzzwords are constantly surfacing in the ever-evolving field of computer science, so it is critical to distinguish between temporary fads and new evolutionary technology. Blockchain is one of the newest and most developmental technologies currently drawing interest.
  • An Introduction to Cloud Databases a Guide for Administrators
Compliments of: An Introduction to Cloud Databases: A Guide for Administrators. Wendy Neu, Vlad Vlasceanu, Andy Oram & Sam Alapati. REPORT.

Break free from old guard databases. AWS provides the broadest selection of purpose-built databases, allowing you to save, grow, and innovate faster: enterprise scale at 3-5x the performance vs popular alternatives, 14+ database engines (more than any other provider), and 1/10th the cost of commercial databases. Learn more: aws.amazon.com/databases

An Introduction to Cloud Databases: A Guide for Administrators, by Wendy A. Neu, Vlad Vlasceanu, Andy Oram, and Sam Alapati. Copyright © 2019 O'Reilly Media Inc. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. O'Reilly books may be purchased for educational, business, or sales promotional use; online editions are also available for most titles (http://oreilly.com). For more information, contact the corporate/institutional sales department: 800-998-9938. Development Editor: Jeff Bleiel; Acquisitions Editor: Jonathan Hassell; Production Editor: Katherine Tozer; Copyeditor: Octal Publishing, LLC; Interior Designer: David Futato; Cover Designer: Karen Montgomery; Illustrator: Rebecca Demarest. September 2019: First Edition. Revision history for the First Edition: 2019-08-19, First Release. The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. An Introduction to Cloud Databases, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc. The views expressed in this work are those of the authors, and do not represent the publisher's views.
  • Middleware-Based Database Replication: the Gaps Between Theory and Practice
Appears in Proceedings of the ACM SIGMOD Conference, Vancouver, Canada (June 2008). Middleware-based Database Replication: The Gaps Between Theory and Practice. Emmanuel Cecchet (EPFL, Lausanne, Switzerland), George Candea (EPFL & Aster Data Systems), Anastasia Ailamaki (EPFL & Carnegie Mellon University).

ABSTRACT: The need for high availability and performance in data management systems has been fueling a long-running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. This has created over time a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work.

There exist replication "solutions" for every major DBMS, from Oracle RAC™, Streams™ and DataGuard™ to Slony-I for Postgres, MySQL replication and cluster, and everything in between. The naïve observer may conclude that such a variety of replication systems indicates a solved problem; the reality, however, is the exact opposite. Replication still falls short of customer expectations, which explains the continued interest in developing new approaches, resulting in a dazzling variety of offerings. Even the "simple" cases are challenging at large scale. We deployed a replication system for a large travel ticket brokering system at a Fortune-500 company faced with a workload where 95% of transactions were read-only. Still, the 5% write workload resulted in thousands of update requests per second, which implied that a system using 2-phase-commit, or any other form of [...]
  • Concurrency Control
Concurrency Control. Instructor: Matei Zaharia, cs245.stanford.edu.

Outline: What makes a schedule serializable? Conflict serializability; precedence graphs; enforcing serializability via 2-phase locking (shared and exclusive locks, lock tables and multi-level locking); optimistic concurrency with validation; concurrency control + recovery.

Lock modes beyond S/X. Examples: (1) increment lock, (2) update lock.

Example 1: Increment Lock. Atomic addition action: INi(A) {Read(A); A ← A + k; Write(A)}. INi(A) and INj(A) do not conflict, because addition is commutative!

Compatibility matrix (S/X/I):

           S   X   I
    S      T   F   F
    X      F   F   F
    I      F   F   T

Update Locks. A common deadlock problem with upgrades:

    T1          T2
    l-S1(A)
                l-S2(A)
    l-X1(A)
                l-X2(A)
    --- Deadlock ---

Solution: if Ti wants to read A and knows it may later want to write A, it requests an update lock (not a shared lock).

Compatibility matrix with update locks (rows: lock already held; columns: new request; note that the table is asymmetric):

               S   X   U
    held S     T   F   T
    held X     F   F   F
    held U     F   F   F

How is locking implemented in practice? Every system is different (e.g., it may not even provide conflict-serializable schedules), but here is one (simplified) way. Sample locking system: 1. Don't ask transactions to request/release locks; just get the weakest lock for each action they perform. 2. [...]
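As a companion to the matrices above, here is a minimal sketch of how a lock manager might consult such a table. The S/X/U entries follow the slide's asymmetric matrix; the function name and the idea of passing in only the locks held by other transactions are illustrative choices, not part of the CS 245 materials.

```python
# Asymmetric S/X/U compatibility: COMPAT[held][requested] == True means "grant".
COMPAT = {
    "S": {"S": True,  "X": False, "U": True},
    "X": {"S": False, "X": False, "U": False},
    "U": {"S": False, "X": False, "U": False},
}

def can_grant(held_by_others, requested):
    """Grant a new request only if it is compatible with every lock held by others."""
    return all(COMPAT[held][requested] for held in held_by_others)

# Both T1 and T2 plan to read A and later write it.
# With plain shared locks, both S locks are granted and both X upgrades then wait:
print(can_grant(set(), "S"), can_grant({"S"}, "S"))   # True True  -> both readers get in
print(can_grant({"S"}, "X"))                          # False     -> each upgrade waits: deadlock

# With update locks, the intending writer asks for U up front; the second U
# request is refused, so only one transaction ever reaches the X upgrade:
print(can_grant(set(), "U"))    # True  -> T1 gets the U lock
print(can_grant({"U"}, "U"))    # False -> T2 waits instead of deadlocking
```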
  • Database Technology for Bioinformatics from Information Retrieval to Knowledge Systems
Database Technology for Bioinformatics: From Information Retrieval to Knowledge Systems. Luis M. Rocha, Complex Systems Modeling, CCS3 - Modeling, Algorithms, and Informatics, Los Alamos National Laboratory, MS B256, Los Alamos, NM 87545.

Molecular biology databases:
- Bibliographic databases: on-line journals and bibliographic citations, e.g. MEDLINE (1971, www.nlm.nih.gov).
- Factual databases: repositories of experimental data associated with published articles that can be used for computerized analysis. Nucleic acid sequences: GenBank (1982, www.ncbi.nlm.nih.gov), EMBL (1982, www.ebi.ac.uk), DDBJ (1984, www.ddbj.nig.ac.jp). Amino acid sequences: PIR (1968, www-nbrf.georgetown.edu), PRF (1979, www.prf.op.jp), SWISS-PROT (1986, www.expasy.ch). 3D molecular structure: PDB (1971, www.rcsb.org), CSD (1965, www.ccdc.cam.ac.uk). These lack standardization of data contents.
- Knowledge bases: intended for automatic inference rather than simple retrieval. Motif libraries: PROSITE (1988, www.expasy.ch/sprot/prosite.html). Molecular classifications: SCOP (1994, www.mrc-lmb.cam.ac.uk). Biochemical pathways: KEGG (1995, www.genome.ad.jp/kegg). What is the difference between knowledge and data (semiosis and syntax)?

[Figure: growth of sequence and 3D structure databases, number of entries over time.]

Database technology and bioinformatics. Databases: a computerized collection of data for information retrieval, shared by many users; stored records are organized with a predefined set of data items (attributes) and managed by a computer program: the database [...]
  • Sundial: Harmonizing Concurrency Control and Caching in a Distributed OLTP Database Management System
Sundial: Harmonizing Concurrency Control and Caching in a Distributed OLTP Database Management System. Xiangyao Yu (MIT), Yu Xia (MIT), Andrew Pavlo (Carnegie Mellon University), Daniel Sanchez (MIT), Larry Rudolph (Two Sigma Investments, LP), Srinivas Devadas (MIT).

ABSTRACT: Distributed transactions suffer from poor performance due to two major limiting factors. First, distributed transactions suffer from high latency because each of their accesses to remote data incurs a long network delay. Second, this high latency increases the likelihood of contention among distributed transactions, leading to high abort rates and low performance. We present Sundial, an in-memory distributed optimistic concurrency control protocol that addresses these two limitations. First, to reduce the transaction abort rate, Sundial dynamically determines the logical order among transactions at runtime, based on their data access patterns. Sundial achieves this by applying logical leases to each data element, which allows the database to dynamically calcu- [...]

... and foremost, long network delays lead to long execution time of distributed transactions, making them slower than single-partition transactions. Second, long execution time increases the likelihood of contention among transactions, leading to more aborts and performance degradation. Recent work on improving distributed concurrency control has focused on protocol-level and hardware-level improvements. Improved protocols can reduce synchronization overhead among transactions [22, 35, 36, 46], but can still limit performance due to high coordination overhead (e.g., lock blocking). Hardware-level improvements include using special hardware like optimized networks that enable low-latency remote memory accesses [20, 48, 55], which can increase the overall cost of the system.
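To give a flavor of the lease idea the abstract alludes to, here is a minimal single-node sketch of lease-based optimistic validation. It is not Sundial's actual distributed protocol, and the class names, fields, and commit rule are simplified illustrations: each tuple carries a logical lease [wts, rts], and a committing transaction picks a commit timestamp consistent with the leases it observed, renewing read leases when possible instead of aborting.

```python
class Tuple:
    def __init__(self, value):
        self.value = value
        self.wts = 0   # logical time of the last write
        self.rts = 0   # lease end: reads are valid at timestamps in [wts, rts]

class Txn:
    def __init__(self, db):
        self.db = db
        self.reads = {}    # key -> (value, wts, rts) observed at read time
        self.writes = {}   # key -> new value, buffered until commit

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        t = self.db[key]
        self.reads[key] = (t.value, t.wts, t.rts)
        return t.value

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Pick a commit timestamp: at or after the wts of each read,
        # and after the current rts of each tuple being written.
        commit_ts = 0
        for _, wts, _ in self.reads.values():
            commit_ts = max(commit_ts, wts)
        for key in self.writes:
            commit_ts = max(commit_ts, self.db[key].rts + 1)

        # Validate reads: if the observed lease has expired, try to renew it;
        # abort only if a conflicting write changed the tuple underneath us.
        for key, (_, wts, rts) in self.reads.items():
            t = self.db[key]
            if commit_ts > rts:
                if t.wts != wts:
                    return False                   # conflicting write: abort
                t.rts = max(t.rts, commit_ts)      # renew (extend) the lease

        # Install writes with a fresh lease starting at commit_ts.
        for key, value in self.writes.items():
            t = self.db[key]
            t.value, t.wts, t.rts = value, commit_ts, commit_ts
        return True

db = {"A": Tuple(100), "B": Tuple(200)}
t1 = Txn(db); t1.write("A", t1.read("A") - 10)
t2 = Txn(db); total = t2.read("A") + t2.read("B")   # overlaps with t1's write
print(t1.commit(), t2.commit())   # True True: t2 commits logically before t1
```

The last line shows the benefit of logical ordering: although t2 finishes after t1 in real time, it is serialized at an earlier logical timestamp and both commit rather than one aborting.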
  • What Is a Database? Differences Between the Internet and Library
What is a Database? Library databases are mostly full-text material (in their entirety) and summaries or descriptions of articles. They are collections of articles from newspapers, magazines and journals, and electronic reference sources. Databases are selected for the quality and variety of resources they offer and are accessed using the Internet. Your library pays for you to have access to a number of relevant databases. You support this with your tuition, so get the most out of your money! You can access them from home or school via the Library Webpage or use the link below. http://www.mxcc.commnet.edu/Content/Find_Articles.asp

Two short videos on the benefits of using library databases: http://www.youtube.com/watch?v=VUp1P-ubOIc and http://youtu.be/Q2GMtIuaNzU

Differences between the Internet and library databases:
- Examples. Internet: Google, Yahoo, Bing. Library databases: LexisNexis, Literary Reference Center, Health and Wellness Resource Center.
- Review process. Internet: none; anyone can add content to the Web. Library databases: checked for accuracy by publishers, chosen by your college's library, and include "peer-reviewed" scholarly articles.
- Reliability. Internet: unknown; no quality-control mechanisms. Library databases: very reliable.
- Content. Internet: anything, from pictures of a person's pets to personal (usually not researched and unsubstantiated) opinions on gun control, abortion, etc. Library databases: scholarly journal articles, book reviews, research papers, conference papers, and other scholarly information.
- How often updated. Internet: unknown/varies. Library databases: regularly (daily, quarterly, monthly).
- Cost. Internet: "free", but some of the information you may need for your assignment requires a fee. Library databases: the library has paid for you to access these databases.
- Organization. Internet: very little or no organization. Library databases: very organized.
- Availability. Internet: websites come and go. [...]
  • Data Quality Requirements Analysis and Modeling December 1992 TDQM-92-03 Richard Y
Published in the Ninth International Conference on Data Engineering, Vienna, Austria, April 1993. TDQM-92-03, December 1992. Data Quality Requirements Analysis and Modeling. Richard Y. Wang, Henry B. Kon, Stuart E. Madnick. Total Data Quality Management (TDQM) Research Program, Room E53-320, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139 USA. 617-253-2656, Fax: 617-253-3321.

Acknowledgments: Work reported herein has been supported, in part, by MIT's Total Data Quality Management (TDQM) Research Program, MIT's International Financial Services Research Center (IFSRC), Fujitsu Personal Systems, Inc. and Bull-HN. The authors wish to thank Gretchen Fisher for helping prepare this manuscript.

ABSTRACT: Data engineering is the modeling and structuring of data in its design, development and use. An ultimate goal of data engineering is to put quality data in the hands of users. Specifying and ensuring the quality of data, however, is an area in data engineering that has received little attention. In this paper we: (1) establish a set of premises, terms, and definitions for data quality management, and (2) develop a step-by-step methodology for defining and documenting data quality parameters important to users. These quality parameters are used to determine quality indicators, to be tagged to data items, about the data manufacturing process such as data source, creation time, and collection method.
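To illustrate what "quality indicators tagged to data items" could look like in code, here is a small sketch. The indicator names (source, creation time, collection method) come from the abstract, but the class layout and the freshness check are illustrative assumptions, not the paper's methodology.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class QualityIndicators:
    """Facts about the data manufacturing process, tagged to a data item."""
    source: str
    created_at: datetime
    collection_method: str

@dataclass
class DataItem:
    value: object
    quality: QualityIndicators

# A value carries its own provenance, so users can judge fitness for use.
balance = DataItem(
    value=1020,
    quality=QualityIndicators(
        source="branch_upload/acct_feed_v2",   # hypothetical source name
        created_at=datetime(2024, 1, 5, tzinfo=timezone.utc),
        collection_method="automated nightly batch",
    ),
)

def fresh_enough(item: DataItem, max_age_days: int) -> bool:
    """One possible user-defined quality requirement: reject stale data."""
    age = datetime.now(timezone.utc) - item.quality.created_at
    return age.days <= max_age_days

print(balance.quality.source, fresh_enough(balance, max_age_days=365))
```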
  • Concurrency Control and Recovery ACID • Transactions • Recovery Transaction Model Concurency Control Recovery
Data Management Systems: Transaction Processing, Concurrency Control and Recovery (ACID, transactions, recovery, the transaction model, concurrency control, recovery). Gustavo Alonso, Institute of Computing Platforms, Department of Computer Science, ETH Zürich.

A bit of theory: before discussing implementations, we will cover the theoretical underpinnings of concurrency control and recovery. The discussion is at an abstract level, without relation to implementations and without considering how the concepts map to real elements (tuples, pages, blocks, buffers, etc.). The theoretical background is important for understanding variations in implementations and what is considered to be correct, and it is also key to understanding how systems have evolved over the years.

Reference: Concurrency Control and Recovery in Database Systems, Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman. https://www.microsoft.com/en-us/research/people/philbe/book/

ACID, the conventional notion of database correctness:
- Atomicity: an operation or a group of operations must take place in its entirety or not at all.
- Consistency: operations should take the database from a correct state to another correct state.
- Isolation: concurrent execution of operations should yield results that are predictable and correct.
- Durability: the database needs to remember the state it is in at all moments, even when failures occur.
Like all acronyms, more effort in making it sound cute than in [...]
  • CAS CS 460/660 Introduction to Database Systems Transactions
CAS CS 460/660 Introduction to Database Systems: Transactions and Concurrency Control.

Recall the structure of a DBMS. A query comes in (e.g., "Select min(account balance)") and data comes out (e.g., 2000). The database application sits on top of query optimization and execution, relational operators, access methods, buffer management, and disk space management, with customer accounts stored on disk. The access-method, buffer-management, and disk-space layers must consider concurrency control and recovery.

File system vs. DBMS? Thought experiment 1: you and your project partner are editing the same file and you both save it at the same time. Whose changes survive? A) Yours B) Partner's C) Both D) Neither E) ??? Thought experiment 2: you're updating a file and the power goes out. Which of your changes survive? A) All B) None C) All since last save D) ??? Q: How do you write programs over a subsystem when it promises you only "???"? A: Very, very carefully!!

Concurrent execution is essential for good performance. Because disk accesses are frequent and relatively slow, it is important to keep the CPU humming by working on several user programs concurrently. Trends are towards lots of cores and lots of disks (e.g., IBM Watson has 2880 processing cores). A program may carry out many operations, but the DBMS is only concerned about what data is read from and written to the database.

Key concept: transaction. A transaction is an atomic sequence of database actions (reads/writes) that takes the DB from one consistent state to another; it is the DBMS's abstract view of a user program: a sequence of reads and writes.
  • DBOS: a Proposal for a Data-Centric Operating System
DBOS: A Proposal for a Data-Centric Operating System. The DBOS Committee. arXiv:2007.11112v1 [cs.OS], 21 Jul 2020.

Abstract: Current operating systems are complex systems that were designed before today's computing environments. This makes it difficult for them to meet the scalability, heterogeneity, availability, and security challenges in current cloud and parallel computing environments. To address these problems, we propose a radically new OS design based on data-centric architecture: all operating system state should be represented uniformly as database tables, and operations on this state should be made via queries from otherwise stateless tasks. This design makes it easy to scale and evolve the OS without whole-system refactoring, inspect and debug system state, upgrade components without downtime, manage decisions using machine learning, and implement sophisticated security features. We discuss how a database OS (DBOS) can improve the programmability and performance of many of today's most important applications and propose a plan for the development of a DBOS proof of concept.

1 Introduction. Current operating systems have evolved over the last forty years into complex overlapping code bases [70, 4, 51, 57], which were architected for very different environments than exist today. The cloud has become a preferred platform, for both decision support and online serving applications. Serverless computing supports the concept of elastic provision of resources, which is very attractive in many environments. Machine learning (ML) is causing many applications to be redesigned, and future operating systems must intimately support such applications. Hardware is becoming massively parallel and heterogeneous. These "sea changes" make it imperative to rethink the architecture of system software, which is the topic of this paper.
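To make the "OS state as database tables" idea concrete, here is a toy sketch using SQLite from Python. The table layout and the scheduling query are illustrative guesses at the kind of design the proposal describes, not anything specified by the DBOS authors.

```python
import sqlite3

os_state = sqlite3.connect(":memory:")

# Represent (a sliver of) OS state uniformly as tables.
os_state.executescript("""
    CREATE TABLE tasks (
        task_id  INTEGER PRIMARY KEY,
        state    TEXT,      -- 'runnable', 'running', 'blocked'
        priority INTEGER,
        core_id  INTEGER    -- NULL while not running
    );
    CREATE TABLE cores (
        core_id  INTEGER PRIMARY KEY,
        busy     INTEGER    -- 0 = idle, 1 = busy
    );
""")
os_state.executemany("INSERT INTO tasks VALUES (?, ?, ?, ?)",
                     [(1, "runnable", 10, None),
                      (2, "runnable", 30, None),
                      (3, "blocked", 20, None)])
os_state.executemany("INSERT INTO cores VALUES (?, ?)", [(0, 0), (1, 1)])

# An otherwise stateless "scheduler" task: pick the highest-priority runnable
# task and an idle core, purely via a query over the state tables.
task, core = os_state.execute("""
    SELECT t.task_id, c.core_id
    FROM tasks t, cores c
    WHERE t.state = 'runnable' AND c.busy = 0
    ORDER BY t.priority DESC, c.core_id
    LIMIT 1
""").fetchone()

with os_state:  # dispatch is just a transactional update of OS state
    os_state.execute("UPDATE tasks SET state = 'running', core_id = ? WHERE task_id = ?",
                     (core, task))
    os_state.execute("UPDATE cores SET busy = 1 WHERE core_id = ?", (core,))

print(task, core)   # 2 0: the priority-30 task is dispatched onto idle core 0
```

Because the scheduler keeps no private state, inspecting or debugging the system reduces to running queries, which is the property the proposal emphasizes.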