Making Databases Work: The Pragmatic Wisdom of Michael Stonebraker. Editor: Michael L. Brodie
Pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Data Warehouse Fundamentals for Storage Professionals – What You Need to Know (EMC Proven Professional Knowledge Sharing, 2011)
By Bruce Yellin, Advisory Technology Consultant, EMC Corporation ([email protected]). Table of contents: Introduction; Data Warehouse Background; What Is a Data Warehouse?; Data Mart Defined; Schemas and Data Models; Data Warehouse Design – Top Down or Bottom Up?; Extract, Transformation and Loading (ETL); Why You Build a Data Warehouse: Business Intelligence; Technology to the Rescue?; RASP – Reliability, Availability, Scalability and Performance; Data Warehouse Backups.
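To make the extract-transform-load step named in this outline concrete, here is a minimal Python sketch of an ETL pass into a single fact table; the file layout, table name, and cleaning rule are illustrative assumptions, not taken from the EMC paper.

```python
# Minimal ETL sketch: extract rows from a CSV export, apply a simple
# cleaning/transformation step, and load them into a fact table.
# Table and column names here are illustrative assumptions.
import csv
import sqlite3

def run_etl(csv_path: str, db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS sales_fact (
               sale_date TEXT, product_id INTEGER, amount REAL)"""
    )
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                # Transform: basic cleaning/validation before the load step.
                record = (row["date"].strip(), int(row["product_id"]), float(row["amount"]))
            except (KeyError, ValueError):
                continue  # reject malformed rows
            conn.execute("INSERT INTO sales_fact VALUES (?, ?, ?)", record)
    conn.commit()  # the load is complete only once the transaction commits
    conn.close()
```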
Evolution of Database Technology
Introduction and Database Technology, by E.M. Bakker (Databases and Data Mining, September 11, 2012). The introduction surveys databases and data mining projects at LIACS: biological and medical databases and data mining (CMSB phenotype/genotype, DIAL CGH DB), Cyttron visualization of the cell, GRID computing, VLe virtual lab e-science environments, the DAS3/DAS4 supercomputers, and research on fundamentals of databases and data mining (database integration, data mining algorithms, content-based retrieval). The databases part (Chapters 1-7) covers the evolution of database technology, data preprocessing, data warehouses (OLAP) and data cubes, data cube computation, and grand challenges and the state of the art. The data mining part (Chapters 8-11) covers an introduction and overview of data mining, basic algorithms, mining data streams, mining sequence patterns, and graph mining. Further topics: mining object, spatial, multimedia, text and Web data; mining complex data objects; spatial and spatiotemporal data mining; multimedia data mining; text mining; Web mining; applications and trends of data mining; mining business and biological data; visual data mining; and data mining and society (privacy-preserving data mining). The evolution of database technology begins in the 1960s with (electronic) data collection, database creation, and IMS.
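As a rough illustration of the data-cube computation listed in this outline, the following Python sketch (with made-up dimensions and rows, not material from the course) aggregates a measure over every subset of the dimensions:

```python
# Sketch of data-cube computation: aggregate a measure over every subset of
# the dimensions (the cuboids, from the apex to the base cuboid).
from itertools import combinations
from collections import defaultdict

rows = [
    {"region": "EU", "year": 2011, "product": "A", "sales": 10},
    {"region": "EU", "year": 2012, "product": "B", "sales": 7},
    {"region": "US", "year": 2012, "product": "A", "sales": 5},
]
dimensions = ("region", "year", "product")

def compute_cube(rows, dimensions):
    cube = {}
    # One cuboid per subset of dimensions.
    for k in range(len(dimensions) + 1):
        for dims in combinations(dimensions, k):
            groups = defaultdict(int)
            for r in rows:
                key = tuple(r[d] for d in dims)
                groups[key] += r["sales"]
            cube[dims] = dict(groups)
    return cube

print(compute_cube(rows, dimensions)[("region",)])  # totals per region
```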
Ingres 9.3 Migration Guide
Ingres® 9.3 Migration Guide, ING-93-MG-02.
This Documentation is for the end user's informational purposes only and may be subject to change or withdrawal by Ingres Corporation ("Ingres") at any time. This Documentation is the proprietary information of Ingres and is protected by the copyright laws of the United States and international treaties. It is not distributed under a GPL license. You may make printed or electronic copies of this Documentation provided that such copies are for your own internal use and all Ingres copyright notices and legends are affixed to each reproduced copy. You may publish or distribute this document, in whole or in part, so long as the document remains unchanged and is disseminated with the applicable Ingres software. Any such publication or distribution must be in the same manner and medium as that used by Ingres, e.g., electronic download via website with the software or on a CD-ROM. Any other use, such as any dissemination of printed copies or use of this documentation, in whole or in part, in another publication, requires the prior written consent from an authorized representative of Ingres.
To the extent permitted by applicable law, INGRES PROVIDES THIS DOCUMENTATION "AS IS" WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IN NO EVENT WILL INGRES BE LIABLE TO THE END USER OR ANY THIRD PARTY FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS DOCUMENTATION, INCLUDING WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL, OR LOST DATA, EVEN IF INGRES IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Oracle vs. NoSQL vs. NewSQL: Comparing Database Technology
By John Ryan, Senior Solution Architect, Snowflake Computing. Table of contents: The World Has Changed; What's Changed?; What's the Problem?; Performance vs. Availability and Durability; Consistency vs. Availability; Flexibility vs. Scalability; ACID vs. Eventual Consistency; The OLTP Database Reimagined; Achieving the Impossible!; NewSQL Database Technology; VoltDB; MemSQL; Which Applications Need NewSQL Technology?; Conclusion; About the Author.
The World Has Changed. The world has changed massively in the past 20 years. Back in the year 2000, a few million users connected to the web using a 56k modem attached to a PC, and Amazon only sold books. Now billions of people are using their smartphones or tablets 24x7 to buy just about everything, and they're interacting with Facebook, Twitter and Instagram. The pace has been unstoppable. Expectations have also changed: if a web page doesn't refresh within seconds we're quickly frustrated and go elsewhere; if a web site is down, we fear it's the end of civilisation as we know it; if a major site is down, it makes global headlines. "Instant gratification takes too long!" — Ladawn Clare-Panton. (Aside: if you're not a seasoned database architect, you may want to start with the author's previous articles on scalability and database architecture.)
What's Changed? The above leads to a few observations: scalability, because with potentially explosive traffic growth, IT systems need to quickly grow to meet exponential numbers of transactions; and high availability, because IT systems must run 24x7 and be resilient to failure.
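The trade-off between ACID guarantees and eventual consistency that the contents list raises can be pictured with a toy Python sketch, assuming a deliberately simplified two-replica store that is not drawn from the eBook itself:

```python
# Toy sketch of eventual consistency: writes land on one replica and are
# propagated asynchronously, so a read from the other replica can be stale
# until replication catches up. Purely illustrative.
class Replica:
    def __init__(self):
        self.data = {}
        self.log = []          # writes not yet shipped to peers

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

    def read(self, key):
        return self.data.get(key)

def replicate(source, target):
    """Ship pending writes to the peer (the 'eventually' in eventual consistency)."""
    for key, value in source.log:
        target.data[key] = value
    source.log.clear()

a, b = Replica(), Replica()
a.write("balance", 100)
print(b.read("balance"))   # None -> replica b has not yet seen the write
replicate(a, b)
print(b.read("balance"))   # 100 -> the replicas have converged
```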
Demonstrate an Understanding of Computer Database Management Systems 114049
Purpose of the unit standard: this unit standard is intended to provide a fundamental knowledge of the areas covered, for those working in or entering the workplace in the area of information technology, and as additional knowledge for those wanting to understand the areas covered. People credited with this unit standard are able to: describe data management issues and how they are addressed by a DBMS; describe commonly implemented features of commercial computer DBMSs; describe different types of DBMSs; and review DBMS end-user tools. The performance of all elements is to a standard that allows for further learning in this area. Learning assumed to be in place and recognition of prior learning: the credit value of this unit is based on a person having prior knowledge and skills to demonstrate an understanding of fundamental mathematics (at least NQF level 3) and to demonstrate PC competency skills (End-User Computing unit standards, at least up to NQF level 3). Index of competence requirements: Unit Standard 114049 alignment index (the different outcomes in which you need to be proved competent in order to complete Unit Standard 114049); Unit Standard 114049; describe data management issues; commonly implemented features of commercial database management systems; different types of DBMSs; review DBMS end-user tools; self-assessment (once you have completed all the questions after being facilitated, you need to check the progress you have made).
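As a hedged companion to the "commonly implemented features of a DBMS" outcome, the sketch below uses Python's built-in sqlite3 module to show a schema, a transaction, and rollback on error; the accounts table and values are invented for illustration.

```python
# Small sketch of common DBMS features: a schema, a transaction, and
# automatic rollback on failure, using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 50.0)")
conn.commit()

try:
    with conn:  # the 'with' block is one transaction: commit on success
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    # on any failure the whole transfer is rolled back, keeping data consistent
    pass

print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
```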
The Declarative Imperative: Experiences and Conjectures in Distributed Logic
Joseph M. Hellerstein, University of California, Berkeley, [email protected].
Abstract: The rise of multicore processors and cloud computing is putting enormous pressure on the software community to find solutions to the difficulty of parallel and distributed programming. At the same time, there is more—and more varied—interest in data-centric programming languages than at any time in computing history, in part because these languages parallelize naturally. This juxtaposition raises the possibility that the theory of declarative database query languages can provide a foundation for the next generation of parallel and distributed programming languages. In this paper I reflect on my group's experience over seven years using Datalog extensions to build networking protocols and distributed systems. Based on that experience, I present a number of theoretical conjectures that may both interest the database community, and clarify important practical issues in ...
The juxtaposition of these trends presents stark alternatives. Will the forecasts of doom and gloom materialize in a storm that drowns out progress in computing? Or is this the long-delayed catharsis that will wash away today's thicket of imperative languages, preparing the ground for a more fertile declarative future? And what role might the database community play in shaping this future, having sowed the seeds of Datalog over the last quarter century? Before addressing these issues directly, a few more words about both crisis and opportunity are in order.
1.1 Urgency: Parallelism. "I would be panicked if I were in industry." — John Hennessy, President, Stanford University [35]. The need for parallelism is visible at micro and macro scales.
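A minimal sketch of the Datalog-style, declarative evaluation the abstract alludes to is shown below, assuming a toy link relation; it computes reachability as a fixpoint rather than with explicit imperative control flow.

```python
# Declarative, Datalog-style evaluation sketch: reachability over a link
# relation, computed as a fixpoint. The edge data is a made-up example.
link = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(link):
    # reach(X, Y) :- link(X, Y).
    # reach(X, Z) :- link(X, Y), reach(Y, Z).
    reach = set(link)
    while True:
        new = {(x, z) for (x, y) in link for (y2, z) in reach if y == y2}
        if new <= reach:          # fixpoint: no new facts derivable
            return reach
        reach |= new

print(sorted(reachable(link)))
```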
L21: Data Models & Relational Algebra
L21: Data Models & Relational Algebra. CS3200 Database Design (fa18 s2), https://northeastern-datalab.github.io/cs3200/, version 11/26/2018.
Announcements: Exam 2 comments (based on feedback, "select all" questions and negative points will be handled separately: "select all" may return with a minimum of 0 points, and negative points will apply only to single-answer multiple-choice questions; see also the post on Piazza: https://piazza.com/class/jj55fszwtpj7fx?cid=135); please don't submit HW7 yet, as Gradescope is being adopted to handle submissions. Today: Exam 2 take-aways, data models, and relational algebra.
A short history of data models, based on the article "What Goes Around Comes Around" (Hellerstein, Stonebraker, 2005); several slides courtesy of Dan Suciu. Hierarchical data (source: https://en.wikipedia.org/wiki/Nested_set_model); hierarchies are powerful, but can be misleading. "Data model": applications need to model real-world data, which typically includes entities (e.g., students, courses, products, clients) and relationships between them (e.g., course registrations, product purchases); a data model enables a user to define the data using high-level constructs without worrying about many low-level details of how data will be stored on disk. Levels of abstraction: schema as seen ...
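To ground the relational-algebra portion of the lecture, here is a small Python sketch of selection, projection, and natural join over relations represented as lists of dicts; the example tables are invented, not course material.

```python
# Sketch of core relational-algebra operators over relations as lists of dicts.
def select(relation, predicate):            # sigma
    return [t for t in relation if predicate(t)]

def project(relation, attrs):               # pi (set semantics: drop duplicates)
    seen, out = set(), []
    for t in relation:
        row = tuple((a, t[a]) for a in attrs)
        if row not in seen:
            seen.add(row)
            out.append(dict(row))
    return out

def natural_join(r, s):                     # join on shared attribute names
    shared = set(r[0]) & set(s[0]) if r and s else set()
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in shared)]

students = [{"sid": 1, "name": "Ann"}, {"sid": 2, "name": "Bo"}]
enrolled = [{"sid": 1, "course": "CS3200"}, {"sid": 1, "course": "CS2500"}]
print(project(select(natural_join(students, enrolled),
                     lambda t: t["course"] == "CS3200"), ["name"]))
```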
Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable
By Marshall Kirk McKusick.
Early History. Ken Thompson and Dennis Ritchie presented the first Unix paper at the Symposium on Operating Systems Principles at Purdue University in November 1973. Professor Bob Fabry, of the University of California at Berkeley, was in attendance and immediately became interested in obtaining a copy of the system to experiment with at Berkeley. At the time, Berkeley had only large mainframe computer systems doing batch processing, so the first order of business was to get a PDP-11/45 suitable for running with the then-current Version 4 of Unix. The Computer Science Department at Berkeley, together with the Mathematics Department and the Statistics Department, were able to jointly purchase a PDP-11/45. In January 1974, a Version 4 tape was delivered and Unix was installed by graduate student Keith Standiford.
Although Ken Thompson at Purdue was not involved in the installation at Berkeley as he had been for most systems up to that time, his expertise was soon needed to determine the cause of several strange system crashes. Because Berkeley had only a 300-baud acoustic-coupled modem without auto answer capability, Thompson would call Standiford in the machine room and have him insert the phone into the modem; in this way Thompson was able to remotely debug crash dumps from New Jersey. Many of the crashes were caused by the disk controller's inability to reliably do overlapped seeks, contrary to the documentation. Berkeley's 11/45 was among the first systems that Thompson had encountered that had two disks on the same controller! Thompson's remote debugging was the first example of the cooperation that sprang up between Berkeley and Bell Labs.
CMU 15-445/645 Database Systems (Fall 2018) :: Query Optimization
Query Optimization, Lecture #13, Database Systems, Andy Pavlo, 15-445/15-645, Computer Science, Carnegie Mellon University, Fall 2018.
Administrivia: the mid-term exam is on Wednesday October 17th (see the mid-term exam guide for more info); Project #2, Checkpoint #2 is due Friday October 19th @ 11:59pm.
Query optimization: remember that SQL is declarative, so the user tells the DBMS what answer they want, not how to get the answer. There can be a big difference in performance based on which plan is used (see last week: 1.3 hours vs. 0.45 seconds). IBM System R was the first implementation of a query optimizer; people argued that the DBMS could never choose a query plan better than what a human could write, yet a lot of the concepts from System R's optimizer are still used today. Two approaches: heuristics / rules rewrite the query to remove stupid / inefficient things and do not require a cost model, while cost-based search uses a cost model to evaluate multiple equivalent plans and picks the one with the lowest cost. Query planning overview (diagram): the SQL query goes through the parser to an abstract syntax tree, the binder resolves names to internal IDs using the system catalog, an optional rewriter produces an annotated AST, and the optimizer consults the cost model to produce the query plan. Today's agenda: relational algebra equivalences, plan cost estimation, plan enumeration, nested sub-queries, mid-term review. Relational algebra equivalences: two relational algebra expressions are equivalent if they generate the same set of tuples.
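The cost-based search idea from these slides can be illustrated with a toy Python sketch that enumerates left-deep join orders over three tables and keeps the cheapest one; the cardinalities and cost formula are assumptions for illustration only.

```python
# Toy cost-based search: enumerate equivalent plans (join orders) and keep
# the one the cost model scores lowest. Cardinalities and the cost formula
# are made up for illustration.
from itertools import permutations

card = {"A": 1_000, "B": 10, "C": 100}      # assumed table cardinalities
selectivity = 0.01                          # assumed per-join selectivity

def plan_cost(order):
    """Sum of estimated intermediate-result sizes for a left-deep join order."""
    rows, cost = card[order[0]], 0
    for table in order[1:]:
        rows = rows * card[table] * selectivity   # estimated join output size
        cost += rows
    return cost

best = min(permutations(card), key=plan_cost)
print(best, plan_cost(best))   # e.g. a cheap order joins the small tables first
```

A real optimizer prunes this space, for example with dynamic programming over join subsets, rather than enumerating every permutation.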
Source Code Patterns of Cross Site Scripting in PHP Open Source Projects
Felix Schuckert (1,2), Max Hildner (1), Basel Katt (2,*), and Hanno Langweg (1,2). (1) HTWG Konstanz, Department of Computer Science, Konstanz, Baden-Württemberg, Germany ([email protected], [email protected], [email protected]); (2) Department of Information Security and Communication Technology, Faculty of Information Technology and Electrical Engineering, NTNU, Norwegian University of Science and Technology, Gjøvik, Norway ([email protected]).
Abstract: To get a better understanding of Cross Site Scripting vulnerabilities, we investigated 50 randomly selected CVE reports which are related to open source projects. The vulnerable and patched source code was manually reviewed to find out what kind of source code patterns were used. Source code pattern categories were found for sources, concatenations, sinks, HTML context and fixes. Our resulting categories are compared to categories from CWE. A source code sample which might have led developers to believe that the data was already sanitized is described in detail. For the different HTML context categories, the necessary Cross Site Scripting prevention mechanisms are described.
1 Introduction. Cross Site Scripting (XSS) is in fourth place in the Common Weakness Enumeration (CWE) top 25 of 2011 [3] and in seventh place in the Open Web Application Security Project (OWASP) top 10 of 2017 [4]. Accordingly, Cross Site Scripting is still a common issue in web security. To discover the reason why the same vulnerabilities are still occurring, we investigated the vulnerable and patched source code from open source projects. Similar methods, functions and operations are grouped together and are called source code patterns.
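The paper's source/sink/fix categories concern PHP code; as a language-neutral illustration, the Python sketch below shows untrusted input reaching an HTML body-context sink and the corresponding fix, context-aware escaping.

```python
# Language-neutral illustration of the source -> sink pattern the paper
# categorizes: untrusted input reaching an HTML-context sink, and the fix
# (context-aware escaping). Not code from any of the studied projects.
from html import escape

def render_comment_vulnerable(user_input: str) -> str:
    # Sink: untrusted data concatenated straight into HTML body context.
    return "<p>" + user_input + "</p>"

def render_comment_fixed(user_input: str) -> str:
    # Fix: escape <, >, &, and quotes before the data reaches the sink.
    return "<p>" + escape(user_input, quote=True) + "</p>"

payload = '<script>alert("xss")</script>'
print(render_comment_vulnerable(payload))  # script tag survives -> exploitable
print(render_comment_fixed(payload))       # rendered inert as &lt;script&gt;...
```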
PostgreSQL Flyer
PostgreSQL - English.
What is PostgreSQL? PostgreSQL is an object-relational database management system (ORDBMS). It is freely available and usable without licensing fee. PostgreSQL 8.3, released in early 2008, includes a record number of new and improved features which will greatly enhance PostgreSQL for application designers, database ...
Usage examples: a development system is a small system just for developing, running on any supported platform (Unix, Linux, Mac OS, Windows); it does not need much system resources, and the result can be exported and used in the production PostgreSQL system. A small to mid-level database server has just small hardware requirements; PostgreSQL is not running exclusively on this system but shares the resources with other services; a webserver (blog, CMS) with a database backend is a good example. A large database server has extensive hardware requirements and is usually dedicated to a single application or project; PostgreSQL can use the full power of the hardware without the need to share resources.
Further information: PostgreSQL (2nd Edition), Korry Douglas, Sams Publishing, ISBN 0672327562; Beginning Databases with PostgreSQL: From Novice to Professional, Second Edition, Neil Matthew, Apress, ISBN 1590594789; PostgreSQL Developer's Handbook, Ewald Geschwinde, Sams Publishing, ISBN 0672322609; Beginning PHP and PostgreSQL 8, W. Jason Gilmore, Apress, ISBN 1590595475; PHP and PostgreSQL Advanced Web Programming, Ewald Geschwinde and Robert Treat, Sams Publishing, ISBN 0672323826. PostgreSQL homepage: www.postgresql.org; pgAdmin III: http://www.pgadmin.org; PgFoundry: http://pgfoundry.org; phpPgAdmin: http://phppgadmin.sourceforge.net; PostGIS: postgis.refractions.net; Slony: slony.info.
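The flyer's usage examples describe deployment sizes rather than code; as a small companion sketch (not taken from the flyer), this is how an application might talk to a PostgreSQL server from Python using the psycopg2 driver, with placeholder host, database, and credentials, assuming psycopg2 is installed.

```python
# Companion sketch: connecting to a PostgreSQL server from Python with the
# psycopg2 driver. Host, database, and credentials are placeholder values.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="app", password="secret")
try:
    with conn, conn.cursor() as cur:          # 'with conn' commits or rolls back
        cur.execute("CREATE TABLE IF NOT EXISTS notes (id SERIAL PRIMARY KEY, body TEXT)")
        cur.execute("INSERT INTO notes (body) VALUES (%s) RETURNING id", ("hello",))
        print("inserted row id:", cur.fetchone()[0])
finally:
    conn.close()
```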
The Architecture of an Autonomic, Resource-Aware, Workstation-Based Distributed Database System
The Architecture of an Autonomic, Resource-Aware, Workstation-Based Distributed Database System. Angus Macdonald, PhD thesis, February 2012.
Abstract: Distributed software systems that are designed to run over workstation machines within organisations are termed workstation-based. Workstation-based systems are characterised by dynamically changing sets of machines that are used primarily for other, user-centric tasks. They must be able to adapt to and utilize spare capacity when and where it is available, and ensure that the non-availability of an individual machine does not affect the availability of the system. This thesis focuses on the requirements and design of a workstation-based database system, which is motivated by an analysis of existing database architectures that are typically run over static, specially provisioned sets of machines. A typical clustered database system — one that is run over a number of specially provisioned machines — executes queries interactively, returning a synchronous response to applications, with its data made durable and resilient to the failure of machines. There are no existing workstation-based databases. Furthermore, other workstation-based systems do not attempt to achieve the requirements of interactivity and durability, because they are typically used to execute asynchronous batch processing jobs that tolerate data loss — results can be re-computed. These systems use external servers to store the final results of computations rather than workstation machines. This thesis describes the design and implementation of a workstation-based database system and investigates its viability by evaluating its performance against existing clustered database systems and testing its availability during machine failures.
Acknowledgements: I'd like to thank my supervisors, Professor Alan Dearle and Dr Graham Kirby, for the opportunities, the support, and the education that they have given me.