Database Management System Market Dynamics: Why DBMSs Have a Resurgent Role in the Application Platform Landscape


by Peter O'Kelly, October 15, 2004 (from www.ftponline.com/wss/2004_11/magazine/columns/trends/default_pf.aspx, 13 April 2005)

Database management systems (DBMSs) have a fundamental role in application platforms, but there is currently a lot of market confusion about how, when, where, and why DBMSs should be used. This column provides an overview of DBMS trends and the reasons why DBMSs have a resurgent and expanding role in the broader application platform landscape. The next Trends & Analysis column will assess how Microsoft's SQL Server 2005 product family fits into both the emerging DBMS market landscape and Microsoft's overall Windows Server System strategy.

A Brief History of DBMSs

To establish context, it's useful to briefly review the fundamental DBMS value proposition. DBMSs are used to securely and robustly manage databases; databases are sets of data that capture descriptions of real-world things such as customers and product inventory. DBMSs embody sophisticated technology to efficiently and concurrently make data available to applications and users without compromising database integrity.

The DBMS market has evolved through several generations, starting with hierarchical DBMS products such as IBM's IMS, introduced during the late 1960s. Network DBMS products (also known as CODASYL) came next, with Cullinet's IDMS (today a Computer Associates "legacy" DBMS) a leading example. Relational DBMSs entered the market approximately 25 years ago and, after a phase of "database wars" between relational and network DBMS products, have dominated the DBMS market for most of the past 20 years. IMS and IDMS are still used for legacy applications, but today the vast majority of database developers work with relational DBMSs (hereafter, "DBMS" refers to relational DBMS unless otherwise noted).

DBMS evolution led to higher levels of abstraction for database developers and users. While hierarchical and network DBMSs presented models that mixed logical and physical (implementation-oriented) considerations, developers and users working with relational DBMSs focus primarily on logical abstractions that more closely mirror the real-world things and events described by the database; they aren't conceptually burdened with pointer chains, buffer pools, and other low-level details. As a result, DBMSs are conducive to improved developer productivity as well as increased overall system security and robustness.

Perennial DBMS Challenges

DBMSs offer more secure, robust, and productive options for data management, so why isn't all data stored in DBMSs today? It's estimated that 70 percent or more of most organizational data isn't currently stored in DBMSs, but is instead scattered across file systems, e-mail messages, and assorted specialized content/document management systems. Several longstanding challenges have prevented broader DBMS applicability and, as we'll see momentarily, the advent of Web-centric applications was in some respects a further setback for DBMSs.

One major DBMS challenge, historically, had to do with cost and complexity. DBMSs, especially for high-end systems, were expensive to license and maintain.
They required DBMS-trained developers and administrators, as well as extensive fine-tuning, which in turn resulted in protracted application development and test cycles. DBMSs also have more demanding hardware requirements than simpler, "good enough," file system-based alternatives, a key consideration during the 1980s and 1990s, when hardware was still expensive relative to today's market.

Another set of challenges stemmed from incomplete standards and limited data models. ANSI SQL has made major advances since 1999, but the previous version of the standard, published in 1992, was incomplete and led DBMS vendors to implement proprietary SQL extensions. Until recently, DBMSs have also entailed constraints on data model expressiveness, with most DBMS products, for example, unable to handle multivalued columns or recursive queries. As a result, DBMSs were considered appropriate for text and number "crunching" but not for documents or other, more elaborate data types.

For most developers, today's world of persistent data is divided among three domains, as suggested in Figure 1. Databases are structured sets of data, designed to be used by applications. Documents are designed for human comprehension and include sequence, hierarchy, and narrative dimensions that aren't present in databases. Objects are programming abstractions that combine structure and behavior in a model optimized for developer productivity.

Historically, the three domains were addressed with three largely distinct tool sets: DBMSs served databases, content and document management systems handled documents, and object-oriented programming tools and application servers fit naturally with objects. At the peak of the client/server wave during the early 1990s, some DBMS products expanded to address object-related capabilities as well as traditional databases. Illustra was a leading example, building on the UC Berkeley Postgres project, which in turn followed the pioneering Ingres research also led by Michael Stonebraker at UC Berkeley.

Figure 1. Three Application Domains. Historically, developers have used different tools when working with databases, documents, and objects.

Sybase, another DBMS pioneer during the 1980s, led with triggers and stored procedures, putting more application logic into the DBMS. Triggers and stored procedures were important innovations because they meant application logic, as well as data, benefited from fundamental DBMS capabilities; for example, a procedure for determining a customer's credit rating could be defined once and consistently applied by all applications, rather than being reimplemented in each one.

Several object-oriented database (OODB) products were also introduced during this period, and many people expected another wave of "database wars," this time with relational DBMSs being displaced by object-oriented DBMSs. OODB products failed to expand beyond niche status, however, and some have recently been creatively recycled, as we'll see in a moment.

The use of different tools for different data models produced what has been termed an "impedance mismatch," creating challenges for developers who need to work with multiple tools and models. It has been difficult to use SQL with object-oriented programming tools, for example.
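To make that mismatch concrete, here is a minimal sketch of the hand-written mapping code that object-oriented applications typically needed (and often still need) to move data between SQL result sets and in-memory objects. It is illustrative only and not drawn from the column: the Customer class, the customer table and its columns, and the in-memory H2 database are assumptions made for the demonstration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// The application-side view: a plain object with fields and (eventually) behavior.
class Customer {
    final int id;
    final String name;
    final String creditRating;

    Customer(int id, String name, String creditRating) {
        this.id = id;
        this.name = name;
        this.creditRating = creditRating;
    }
}

public class CustomerMappingDemo {

    // The database-side view: rows and columns that must be copied,
    // field by field, into objects the rest of the program can use.
    static List<Customer> findByRegion(Connection conn, String region) throws SQLException {
        String sql = "SELECT id, name, credit_rating FROM customer WHERE region = ?";
        List<Customer> customers = new ArrayList<>();
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, region);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    customers.add(new Customer(
                            rs.getInt("id"),
                            rs.getString("name"),
                            rs.getString("credit_rating")));
                }
            }
        }
        return customers;
    }

    public static void main(String[] args) throws SQLException {
        // Assumes the H2 JDBC driver is on the classpath; any JDBC data source would do.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement setup = conn.createStatement()) {
            setup.execute("CREATE TABLE customer (id INT, name VARCHAR(60), "
                    + "region VARCHAR(10), credit_rating VARCHAR(3))");
            setup.execute("INSERT INTO customer VALUES (1, 'Acme Ltd', 'EMEA', 'AA')");

            for (Customer c : findByRegion(conn, "EMEA")) {
                System.out.println(c.name + " -> " + c.creditRating);
            }
        }
    }
}
```

Every query result is copied into an object one field at a time, and any schema change must be tracked by hand on both sides; object-relational mapping tools emerged in large part to automate away exactly this kind of boilerplate.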
Relegated to a Reduced Role in the Rush to the Web

A funny thing happened on the way to the Web: as the rush to commercial Web applications started during the mid-1990s, DBMSs, which had until that time been evolving to become the center of the client/server application platform, were relegated to a reduced role. As Web applications shifted developers' focus to HTML-based pages (documents), often drawing on data from disparate systems, the market shifted to the five-tiered model depicted in Figure 2.

Figure 2. An Application Server-Centric View of the Platform Stack. The Web application wave reduced the role for DBMSs, often shifting business logic to application servers and integration to integration brokers.

The Web application-led transition produced some "conventional wisdom" that was perplexing for many DBMS-focused developers. It became common practice, for example, to create super-user database identities and to optimize database connection pooling, shifting identity, authentication, and authorization to the application (or application-server) level. This was often done for performance, because DBMS deployments designed to support tens or hundreds of concurrent users often couldn't scale to serve the exponentially larger user populations of successful Web applications.

Another best-practice shift was a movement away from DBMS-managed triggers and stored procedures, with application/business logic migrating to the middle tier, in application servers. This was a pragmatic option when applications had to work with data from disparate data sources, but it meant moving application logic from DBMSs, where it was consistently applied by all applications, to the middle tier, where it was easier to (inadvertently or deliberately) circumvent. The Web application wave also resulted in extensive midtier data caching, a practice with serious implications for database integrity. In some respects, this shift was a step backward to earlier approaches, when separate transaction processing monitor and DBMS layers were widely used, except that the application servers weren't as tightly integrated with DBMSs.

XML has also affected DBMS usage patterns. Although XML is still relatively young (the W3C completed its XML 1.0 recommendation in February 1998), XML documents have rapidly grown into a pivotal role in interapplication data exchange. Several XML database products have been introduced to address the need for more robust, secure, and scalable XML data management, but, as with the earlier OODB products, XML databases have been niche offerings, not poised to displace DBMSs. Indeed, many XML database products