NoSQL: The New Normal

NoSQL: The New Normal
Matt Asay (@mjasay)
VP, Corporate Strategy, 10gen

10gen Overview
• 180+ employees
• 500+ customers
• Over $81 million in funding
• Offices in New York, Palo Alto, Washington DC, London, Dublin, Barcelona and Sydney

Global Community
• 3,800,000+ MongoDB downloads
• 47,000+ online education registrants
• 15,000+ MongoDB User Group members
• 14,000+ MongoDB Monitoring Service (MMS) users
• 10,000+ annual MongoDB Days attendees

the past… is prologue?

Back to the Future?
"What has been will be again, what has been done will be done again; there is nothing new under the sun." (Ecclesiastes 1:9, NIV)

What Do You Remember about 1969?
Nothing? Hold that thought.

Back to the Future: PreSQL
• IBM's IMS (1969) – developed as part of the Apollo Project
• IDS (Integrated Data Store), navigational database, 1973
• High performance, but:
  – Forced developers to worry about both query design and schema design upfront
  – Made it hard to change anything mid-stream

Enter SQL
• Designed to overcome "PreSQL" deficiencies
  – Decoupled query design from schema design
  – Allowed developers to focus on schema design
  – Could be confident that you could query the data as you wanted later
• 30 years of dominance later…

…the present…

RDBMS Is Like a Spreadsheet
With "Relations" Between Rows
Lots of relations. Lots of rows.
It Hides What You're Really Doing

It Makes Development Hard
[Diagram: application code, XML config, and DB schema feeding an object-relational mapping layer in front of the relational database]

And Makes Things Hard to Change
[Diagram: 3 months later, the schema has sprouted new tables, renamed tables, and new columns (Name, Pet, Phone, Email)]

RDBMS Scale = Bigger Computers
"Clients can also opt to run zEC12 without a raised datacenter floor -- a first for high-end IBM mainframes." (IBM press release, 28 Aug 2012)

This Was a Problem for Google
• 2010 search index size: 100,000,000 GB
• New data added per day: 100,000+ GB
• Databases they could use: 0
• == 4.1 miles of 250,000+ MacBook Pros
Source: http://googleblog.blogspot.com/2010/06/our-new-search-index-caffeine.html

And for Facebook
• 2010: 13,000,000 queries per second
• TPC #1 database: 504,161 tps (TPC top results)
• TPC top 10 combined: 1,370,368 tps

Big Data Changes Everything… for Everyone
• Variety of data: unstructured data, semi-structured data, polymorphic data
• Volume/velocity of data: petabytes of data, trillions of records, millions of queries per second
• Agile development: iterative, short development cycles, new workloads
• New architectures: horizontal scaling, commodity servers, cloud computing

Shift in What We're Computing

Living in the Post-transactional Future
• Order-processing systems largely "done" (RDBMS); primary focus on better search and recommendations or adapting prices on the fly (NoSQL)
• Vast majority of its engineering is focused on recommending better movies (NoSQL), not processing monthly bills (RDBMS)
• Easy part is processing the credit card (RDBMS). Hard part is making it location-aware, so it knows where you are and what you're buying (NoSQL)

Shift in How We Develop Applications
"Systems of Engagement are built by front-line developers using modern languages who are driven by time to market, the need for rapid deployment and iteration…. They value solutions that make it easy for them to deploy their application code with as little friction as possible." (Forrester, 2013)

Why NoSQL?
• Agile
• Scalable
• Lower TCO
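The "agile" claim above, and the earlier "hard to change" contrast with a rigid relational schema, can be made concrete with a small sketch. The following is a minimal, hypothetical example using PyMongo; the field names echo the employee document shown later in the deck, but the connection string, database, and collection names are assumptions, not something 10gen published. Documents with different shapes coexist in one collection, so adding a field does not require an ALTER TABLE-style migration.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
employees = client["hr"]["employees"]              # hypothetical database/collection names

# Original document shape
employees.insert_one({
    "employee_name": "Dunham, Justin",
    "department": "Marketing",
    "title": "Product Manager, Web",
})

# Months later, new requirements: just start writing the new fields.
# No schema migration is needed; old and new documents coexist.
employees.insert_one({
    "employee_name": "Neray, Graham",
    "department": "Marketing",
    "benefits": [                                   # nested array, no join table required
        {"type": "Health", "plan": "PPO Plus"},
        {"type": "Dental", "plan": "Standard"},
    ],
})

# Queries work across both shapes; documents without 'benefits' simply don't match.
for doc in employees.find({"benefits.type": "Health"}):
    print(doc["employee_name"])
```

In a relational design the benefits array would typically become a separate table plus a foreign key, and the new fields an ALTER TABLE; here the change is confined to the documents that need it.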
what does this mean?

Developers Are More Productive
[Diagram: the application / XML config / DB schema / object-relational mapping stack from earlier, revisited across two slides]

Developers? They Matter

some nosql examples

Agility
RDBMS table vs. MongoDB document:
{
  _id: ObjectId("4c4ba5e5e8aabf3"),
  employee_name: "Dunham, Justin",
  department: "Marketing",
  title: "Product Manager, Web",
  report_up: "Neray, Graham",
  pay_band: "C",
  benefits: [
    { type: "Health", plan: "PPO Plus" },
    { type: "Dental", plan: "Standard" }
  ]
}

Example: Serves targeted content to users using a MongoDB-powered identity system
Problem:
• 20M+ unique visitors per month
• Rigid relational schema unable to evolve with changing data types and new features
• Slow development cycles
Why MongoDB:
• Easy-to-manage dynamic data model enables limitless growth, interactive content
• Support for ad hoc queries
• Highly extensible
Results:
• Rapid rollout of new features
• Customized, social conversations throughout site
• Tracks user data to increase engagement, revenue

Scalability
Auto-sharding:
• Increase capacity as you go
• Commodity and cloud architectures
• Improved operational simplicity and cost visibility

Case Study: Manages a wide range of content and services for its web properties using MongoDB
Problem:
• Trouble dealing with a huge variety of content
• MySQL unable to keep up with performance and scalability requirements
• Problems compounded by integrating information from T-Mobile joint venture
Why MongoDB:
• Move from 6 billion rows in an RDBMS to the simplicity of 1 document
• Automated failover and ability to add nodes without downtime
• "Blazingly fast" query performance: "blown away by [MongoDB's] performance"
Results:
• Significant performance gains despite big increase in volume and variety of data
• Greater agility, faster development iteration
• Saved £2m in licenses and hardware

Better Total Cost of Ownership (TCO)
• Developer/ops savings: ease of use, agile development, less maintenance
• Hardware savings: commodity servers, internal storage (no SAN), scale out rather than up
• Software/support savings (vs. DB alternative): no upfront license, cost visibility for usage growth

Case Study: Stores one of world's largest record repositories and searchable catalogues in MongoDB
Problem:
• One of world's largest record repositories
• Move to SOA required new approach to data storage
• RDBMS could not support centralized data management and federation of information services
Why MongoDB:
• Fast, easy scalability
• Full query language
• Complex metadata store
• Delivers high scalability, fast performance, and easy maintenance, while keeping support costs low
Results:
• Will scale to 100s of TB by 2013, PB by 2020
• Searchable catalogue of varied data types
• Decreased software and support costs

Performance
• Better data locality
• In-memory caching
• In-place updates
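A minimal sketch of what "in-place updates" means in practice, again using PyMongo; every name here (connection string, database, collection, fields) is illustrative and not taken from the deck. Update operators such as $set, $inc, and $push modify fields inside an existing document on the server, rather than reading the whole record back, rewriting it in the application, and issuing multi-table UPDATEs.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB instance
images = client["photos"]["images"]                 # hypothetical database/collection names

images.insert_one({"_id": "img-42", "title": "Sunset", "views": 0, "tags": ["beach"]})

# Targeted, in-place modifications: only the named fields change.
images.update_one(
    {"_id": "img-42"},
    {
        "$inc": {"views": 1},               # atomic counter increment
        "$set": {"title": "Sunset, 2013"},  # overwrite a single field
        "$push": {"tags": "sky"},           # append to an embedded array
    },
)

print(images.find_one({"_id": "img-42"}))
```

Because related data lives together in one document, this kind of update also benefits from the data-locality and caching points listed above.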
Case Study: Uses MongoDB to safeguard over 6 billion images served to millions of customers
Problem:
• 6B images, 20TB of data
• Brittle code base on top of Oracle database – hard to scale, add features
• High software and hardware costs
Why MongoDB:
• JSON-based data model
• Agile, high performance, scalable
• Alignment with Shutterfly's services-based architecture
Results:
• 5x cost reduction
• 9x performance improvement
• Faster time to market
• Dev cycles in weeks vs. tens of months

…the future

Leading Organizations Rely on NoSQL

NoSQL Adoption
[Adoption ladder: First NoSQL Project → Multiple NoSQL Projects → NoSQL Centre of Excellence → NoSQL First Policy]

NoSQL: The New Normal
[Diagram: share of requirements met by document stores, RDBMSs, and key/value or column stores]

Is It a Polyglot Future?
Remember the Long Tail? It didn't work out well.

General Purpose, High Performance Database
Popularity (jobs, searches, mentions, etc.):

Rank  Database              Type               Score    % Change
1     Oracle                RDBMS              1567.93  +8.6
2     Microsoft SQL Server  RDBMS              1310.61  -0.73
3     MySQL                 RDBMS              1284.78  -28.89
4     Microsoft Access      RDBMS              186.05   +15.12
5     PostgreSQL            RDBMS              183.47   +16
6     DB2                   RDBMS              162.05   +8.39
7     MongoDB               Document store     116.14   +20.01
8     Sybase                RDBMS              84.52    +1.07
9     SQLite                RDBMS              81.01    +0.65
10    Solr                  Search engine      47.35
11    Cassandra             Wide column store  36.16    +1.24
12    Redis                 Key-value store    32.03    +6.06
13    Informix              RDBMS              25.43    +1.52
14    Memcached             Key-value store    23.28    -1.83
15    HBase                 Wide column store  20.74    +1.05
Source: DB-Engines, Feb 2013

NoSQL Benefits
• Increased developer productivity
• Better user experience
• Faster time to market
• Lower TCO

Leading NoSQL Database
[Charts comparing MongoDB against unnamed competitors: Google search interest, LinkedIn job skills, Jaspersoft Big Data Index (direct real-time downloads), and Indeed.com job trends]
Indeed.com top job trends:
1. HTML 5
2. MongoDB
3. iOS
4. Android
5. Mobile Apps
6. Puppet
7. Hadoop
8. jQuery
9. PaaS
10. Social Media