Brodie Book Introduction Final 050218

Introduction

Michael L. Brodie, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology

Our story begins at the University of California, Berkeley, in 1971, in the heat of the Vietnam War. A tall, ambitious, newly minted assistant professor with an EE Ph.D. in Markov chains asks a seasoned colleague for direction in shaping his nascent career for more impact than he could imagine possible with Markov chains.[1] Professor Eugene Wong suggests that Michael Stonebraker read Ted Codd’s just-published paper on a striking new idea: relational databases. Upon reading the paper, Mike is immediately convinced of its potential, even though he knows essentially nothing about databases. He sets his sights on that pristine relational mountaintop. The rest is history. Or rather, it is the topic of this book. Ted’s simple, elegant relational data model and Mike’s contributions to making databases work in practice helped forge what is today a $55 billion industry. But, wait, I’m getting ahead of myself.

A Brief History of Databases

What is a database? What is data management? How did they evolve?

Imagine accelerating by orders of magnitude the discovery of cancer causes and the most effective treatments, to radically reduce the 10M annual deaths worldwide. Or enabling autonomous vehicles to profoundly reduce the 1M annual traffic deaths worldwide, while reducing pollution, traffic congestion, and the real estate wasted on vehicles that on average are parked 97% of the time. These are merely two examples of the potential future positive impacts of using Big Data. As with all technical advances, there is also potential for negative impacts, both unintentional—in error—and intentional, such as undermining modern democracies (allegedly well under way). Using data means managing data efficiently at scale; that is what data management is for. In its May 6, 2017 issue, The Economist declared data to be the world’s most valuable resource.

In 2012, data science—often AI-driven analysis of data at scale—exploded onto the world stage. This new, data-driven discovery paradigm may be one of the most significant advances of the early 21st century. Non-practitioners are always surprised to find that 80% of the resources required for a data science project are devoted to data management. The surprise stems from the fact that data management is an infrastructure technology: basically, unseen plumbing. In business, data management “just gets done” by mature, robust database management systems (DBMSs). But data science poses new, significant data management challenges that have yet to be understood, let alone addressed.

The above potential benefits, risks, and challenges herald a new era in the development of data management technology, the topic of this book. How did it develop in the first place? What were the previous eras? What challenges lie ahead? This introduction briefly sketches answers to those questions.

This book is about the development and ascendancy of data management technology, enabled by the contributions of Michael Stonebraker and his collaborators. Novel when created in the 1960s, data management became the key enabling technology for businesses of all sizes worldwide, leading to today’s $55B[2,3] DBMS market and tens of millions of operational databases. The average Fortune 100 company has more than 5,000 operational databases, supported by tens of DBMS products.

[1] “I had to publish, and my thesis topic was going nowhere.”—Mike Stonebraker
[2] https://www.statista.com/statistics/724611/worldwide-database-market/
[3] Numbers quoted in this chapter are as of mid-2018.

Databases support your daily activities, such as securing your banking and credit card transactions. So that you can buy that Starbucks latte, Visa, the leading global payments processor, must be able to process 50,000 credit card transactions per second simultaneously. These “database” transactions update not just your account and those of 49,999 other Visa cardholders, but also those of 50,000 creditors like Starbucks, while simultaneously screening you, Starbucks, and 99,998 others for fraud, no matter where on the planet your card was swiped. Another slice of your financial world may involve one of the more than 3.8B trade transactions that occur daily on major U.S. market exchanges. DBMSs support such critical functions not just for financial systems, but also for systems that manage inventory, air traffic control, and supply chains, and for all daily functions that depend on data and data transactions. Databases managed by DBMSs are even on your wrist and in your pocket if you, like 2.3B others on the planet, use an iPhone or Android smartphone. If databases stopped, much of our world would stop with them.
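Each of these “database” transactions is a small unit of work that must either complete in full or not happen at all. The following is a minimal sketch, in SQL, of what one such purchase might look like; the accounts table, its columns, and the amounts are assumptions made purely for illustration.

    -- Assumed schema: one row per cardholder or merchant account.
    CREATE TABLE accounts (
        id      VARCHAR(40)   PRIMARY KEY,
        balance NUMERIC(12,2) NOT NULL
    );

    -- One latte purchase as a single transaction: debit the cardholder,
    -- credit the merchant; either both updates persist or neither does.
    BEGIN;
    UPDATE accounts SET balance = balance - 4.50 WHERE id = 'cardholder-0001';
    UPDATE accounts SET balance = balance + 4.50 WHERE id = 'merchant-starbucks';
    COMMIT;

A DBMS runs many such transactions concurrently without losing or double-counting money, and it guarantees that committed updates survive failures.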
A database is a logical collection of data, like your credit card account information, often stored in records that are organized under some data model, such as tables in a relational database. A DBMS is a software system that manages databases and ensures persistence—so data is not lost—with languages to insert, update, and query data, often at very high data volumes and low latencies, as illustrated above.

Data management technologies have evolved through four eras over six decades, as illustrated in the Data Management Technology Kairometer. In the inaugural navigational era (1960s), the first DBMSs emerged. In them, data, such as your mortgage information, was structured in hierarchies or networks and accessed using record-at-a-time navigational query languages. The navigational era gained a database Turing Award in 1973 for Charlie Bachman’s “outstanding contributions to database technology.”

In the second, relational era (1970s–1990s), data was stored in tables and accessed using a declarative, set-at-a-time query language, SQL: for example, “Select name, grade From students in Engineering with a B average.” The relational era ended with approximately 30 commercial DBMSs, dominated by Oracle’s Oracle, IBM’s DB2, and Microsoft’s SQL Server. The relational era gained two database Turing Awards: one in 1981 for Ted Codd’s “fundamental and continuing contributions to the theory and practice of database management systems, esp. relational databases,” and one in 1998 for Jim Gray’s “seminal contributions to database and transaction processing research and technical leadership in system implementation.”
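To make the set-at-a-time style concrete, here is one way the student query quoted above might be written in SQL; the table name, column names, and the 4.0 grading scale are assumptions for illustration, not taken from the book.

    -- Assumed schema: students(name, dept, grade), with grade on a 4.0 scale.
    SELECT name, grade
    FROM   students
    WHERE  dept  = 'Engineering'
      AND  grade >= 3.0;   -- "a B average" on the assumed scale

The query states only what result is wanted; a navigational DBMS of the 1960s would instead require record-at-a-time code that loops over the Engineering students and tests each record, with the programmer, rather than the DBMS, choosing the access path.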
The start of Mike Stonebraker’s database career coincided with the launch of the relational era. Serendipitously, Mike was directed by his colleague, UC Berkeley professor Eugene Wong, to the topic of data management and to the relational model via Ted’s paper. Attracted by the model’s simplicity compared to that of navigational DBMSs, Mike set his sights, and ultimately built his career, on making relational databases work. His initial contributions, Ingres and Postgres, did more to make relational databases work in practice than those of any other individual.

After more than 30 years, Postgres—via PostgreSQL and other derivatives—continues to have a significant impact. PostgreSQL is the third[4] or fourth[5] most popular (most used) of hundreds of DBMSs, and all relational DBMSs implement the object-relational data model and features introduced in Postgres.

[4] https://www.eversql.com/most-popular-databases-in-2018-according-to-stackoverflow-survey/
[5] https://db-engines.com/en/ranking

In the first relational decade, researchers developed core RDBMS capabilities including query optimization, transactions, and distribution. In the second relational decade, researchers focused on high-performance queries for data warehouses (using column stores) and on high-performance transactions and real-time analysis (using in-memory databases); extended RDBMSs to handle additional data types and processing (using abstract data types); and tested the “one-size-fits-all” principle by fitting myriad application types into RDBMSs. But complex data structures and operations—such as those in geographic information, graphs, and scientific processing over sparse matrices—just didn’t fit. Mike knew because he pragmatically exhausted that premise, using real, complex database applications to push RDBMS limits, eventually concluding that “one size does not fit all” and moving to special-purpose databases.

With the rest of the database research community, Mike turned his attention to special-purpose databases, launching the third, non-relational era (2000–2010). Researchers in the non-relational era developed data management solutions for non-relational data and associated processing (e.g., time series, semi- and unstructured data, key-value data, graphs, documents), in which data was stored in special-purpose forms and accessed using non-SQL query languages, called NoSQL, of which Hadoop is the best-known example. Mike pursued non-relational challenges in the data manager but, citing the inefficiency of NoSQL, continued to leverage SQL’s declarative power to access data using SQL-like languages, an approach that came to be called NewSQL. New DBMSs proliferated, due to the research push for, and the application pull of, specialized DBMSs and to the growth of open-source software. The non-relational era ended with more than 350 DBMSs, split evenly between commercial and open source, supporting specialized data and related processing including (in order of utilization): relational, key-value, documents, graphs, time series, RDF, objects (object-oriented), search, wide columns, multi-value/dimensional, native XML, content (e.g., digital, text, image), events, and navigational. Despite the choice and diversity