Representations and Optimizations for Embedded Parallel Dataflow Languages


Submitted by M.Sc. Alexander Alexandrov, born in Sofia, Bulgaria. Dissertation approved by Faculty IV – Electrical Engineering and Computer Science of Technische Universität Berlin in fulfillment of the requirements for the academic degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.).

Doctoral committee:
Chair: Prof. Dr. Odej Kao
Reviewer: Prof. Dr. Volker Markl
Reviewer: Prof. Dr. Mira Mezini
Reviewer: Prof. Dr. Torsten Grust

Date of the oral defense: 31 October 2018
Berlin 2019

In memory of my grandfather, who taught me how to count when I was very young, and Prof. Hartmut Ehrig, who taught me how to comprehend counting twenty years later.

Acknowledgments

I would like to express my gratitude to the people who made this dissertation possible. First and foremost, I would like to thank my advisor Prof. Dr. Volker Markl. He offered me the chance to work in the area of data management and encouraged me to search for a motivating topic. Throughout my time at the Database and Information Systems Group at TU Berlin, his constant engagement and valuable advice allowed me to substantially improve the quality of my research.

I am also deeply indebted to everybody who contributed to the Emma project. Asterios Katsifodimos and Andreas Kunft showed tremendous dedication and work ethic and played an essential role in bringing the original SIGMOD 2015 submission to an acceptable shape under a very tight deadline. Georgi Krastev was instrumental in shaping the design and implementation of the compiler internals. Without his passion for functional programming and sharp eye for elegant API design, the software artifact accompanying this thesis would undoubtedly have ended up in a much more rudimentary form. Gábor Gévay influenced the story presented in this thesis with a number of incisive comments. Most notably, he rigorously pointed out that state-of-the-art solutions fall into the category of deeply embedded DSLs, which forced me to pinpoint quotation-based embedding as the crux of the proposed solution. Andreas Salzmann developed the GUI for the demonstrator, and Felix Schüler and Bernd Louis contributed a number of algorithms to the Emma library.

This work represents a natural fusion between two distinct lines of research. Stephan Ewen and Fabian Hüske developed the original PACT programming model, and I was lucky enough to work with both of them during my time at DIMA. The adopted categorical approach highlights the timeless relevance of the foundational research conducted by Phil Wadler, Peter Buneman and Val Tannen, and Torsten Grust in the 1990s. The intimate connection between the two areas was pointed out by Alin Deutsch during a visit at UC San Diego in the autumn of 2012.

Last but not least, I am also grateful to my family and friends for their continuous love and support, to all past and future teachers of mine for their shared knowledge, and to Katya Tasheva, Emma Greenfield, and Petra Nachtmanova for the music.

Declaration of Authorship

I, Alexander Alexandrov, declare that this thesis, titled “Representations and Optimizations for Embedded Parallel Dataflow Languages”, and the work presented in it are my own. I confirm that:

• This work was done wholly or mainly while in candidature for a research degree at this University.
• Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated.
• Where I have consulted the published work of others, this is always clearly attributed.
• Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
• I have acknowledged all main sources of help.
• Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Berlin, February 12, 2019

Abstract

Parallel dataflow engines such as Apache Hadoop, Apache Spark, and Apache Flink have emerged as an alternative to relational databases that is better suited to the needs of modern data analysis applications. One of the main characteristics of these systems is their scalable programming model, based on distributed collections and parallel transformations. Notable examples are Flink’s DataSet and Spark’s RDD programming abstractions.

The programming model is typically realized as an eDSL – a domain-specific language embedded in a general-purpose host language such as Java, Scala, or Python. This approach has several advantages over traditional stand-alone DSLs such as SQL or XQuery. First, it allows for reuse of linguistic constructs from the host language – for example, anonymous function syntax, value definitions, or fluent syntax via method chaining. This eases the learning curve for developers already familiar with the host language syntax. Second, it allows for seamless integration of library methods written in the host language via the function parameters passed to the parallel dataflow operators. This reduces the development effort for dataflows that go beyond pure SQL and require domain-specific logic, for example for text or image pre-processing.

At the same time, state-of-the-art parallel dataflow eDSLs exhibit a number of shortcomings. First, one of the main advantages of a stand-alone DSL such as SQL – the high-level, declarative Select-From-Where syntax – is either lost or mimicked in a non-standard way. Second, execution aspects such as caching, join order, and partial aggregation need to be decided by the programmer. Automatic optimization is not possible due to the limited program context reflected in the eDSL intermediate representation (IR).

In this thesis, we argue that these limitations are a side effect of the adopted type-based embedding approach. As a solution, we propose an alternative eDSL design based on quasi-quotations. We present a DSL embedded in Scala and discuss its compiler pipeline, IR, and some of the enabled optimizations. We promote the algebraic type of bags in union representation as a model for distributed collections, and its associated structural recursion scheme and monad as a model for parallel collection processing. At the source code level, Scala’s for-comprehensions can be used to encode Select-From-Where expressions in a standard way. At the IR level, maintaining comprehensions as a first-class construct simplifies the analysis and implementation of holistic dataflow optimizations that accommodate nesting and control flow. The proposed DSL design therefore reconciles the benefits of embedded parallel dataflow DSLs with the declarativity and optimization potential of external DSLs such as SQL.
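To make the abstract's claims concrete, here is a minimal, self-contained Scala sketch of the described approach: a bag type in union representation, its structural recursion scheme (fold), monad operations derived from fold, and a for-comprehension that encodes a Select-From-Where query. All names (Bag, Sng, Union, Student, Result) and the schema are illustrative assumptions and do not reproduce the actual API developed in the thesis.

```scala
// Algebraic bag type in union representation: a bag is either empty,
// a singleton, or the union of two bags (illustrative sketch).
sealed trait Bag[+A] {
  // Structural recursion (fold): interpret the three constructors in a
  // target algebra given by `empty`, `sng`, and `union`.
  def fold[B](empty: B)(sng: A => B)(union: (B, B) => B): B = this match {
    case Empty       => empty
    case Sng(x)      => sng(x)
    case Union(l, r) => union(l.fold(empty)(sng)(union), r.fold(empty)(sng)(union))
  }

  // Monad operations derived from fold; they let Bag participate in
  // Scala for-comprehensions.
  def map[B](f: A => B): Bag[B] =
    fold[Bag[B]](Empty)(x => Sng(f(x)))(Union(_, _))
  def flatMap[B](f: A => Bag[B]): Bag[B] =
    fold[Bag[B]](Empty)(f)(Union(_, _))
  def withFilter(p: A => Boolean): Bag[A] =
    fold[Bag[A]](Empty)(x => if (p(x)) Sng(x) else Empty)(Union(_, _))
}
case object Empty extends Bag[Nothing]
final case class Sng[A](x: A) extends Bag[A]
final case class Union[A](lhs: Bag[A], rhs: Bag[A]) extends Bag[A]

// A hypothetical schema, used only for illustration.
final case class Student(id: Int, name: String)
final case class Result(sid: Int, points: Int)

// The for-comprehension below encodes the query
//   SELECT s.name FROM students s, results r
//   WHERE  s.id = r.sid AND r.points >= 50
// in standard Scala syntax.
def passingStudents(students: Bag[Student], results: Bag[Result]): Bag[String] =
  for {
    s <- students
    r <- results
    if s.id == r.sid
    if r.points >= 50
  } yield s.name
```

The for-comprehension desugars into the flatMap, withFilter, and map calls defined above, all of which are instances of the same fold; in this reading, the bag monad models parallel collection processing while the source syntax stays declarative.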
Zusammenfassung

Parallel dataflow systems such as Apache Hadoop, Apache Spark, and Apache Flink have established themselves as an alternative to relational databases that is better suited to the requirements of modern data analysis applications. One of the main characteristics of these systems is a programming model based on distributed collections and parallel transformations; examples are the DataSet and RDD programming interfaces of Flink and Spark. These interfaces are usually realized as eDSLs, i.e., as domain-specific languages embedded in a host language such as Java, Scala, or Python.

This approach offers several advantages over conventional external DSLs such as SQL or XQuery. First, an eDSL can reuse syntactic constructs from the host language, which eases the learning curve for developers who are already familiar with the host language syntax. Second, the approach enables seamless integration of library methods available in the host language and thereby reduces the development effort for dataflows that go beyond pure SQL and require domain-specific logic.

At the same time, eDSLs such as DataSet and RDD exhibit a number of drawbacks. First, one of the main advantages of external DSLs such as SQL – the declarative Select-From-Where syntax – is either lost or mimicked in a non-standard way. Second, execution aspects such as caching, join order, and distributed aggregation have to be specified manually by the programmer. Automatic optimization is not possible due to the limited program context available in the eDSL intermediate representation.

We show that these limitations are a side effect of the type-based embedding approach. As a solution, we propose an alternative design based on quasi-quotations. We present a Scala eDSL and discuss its compiler, intermediate representation, and some of the enabled optimizations. As the foundation of the distributed data model we use the algebraic type of collections in union representation, and for parallel data processing the associated structural recursion scheme and monad. At the source code level, comprehensions over this monad can be used to encode Select-From-Where expressions in a standard form. In the intermediate representation, comprehensions provide a basis on which dataflow optimizations can be designed more easily. The proposed design thus unites the advantages of embedded parallel dataflow DSLs with the declarative nature and optimization potential of external DSLs such as SQL.

Contents

Acknowledgments
Declaration of Authorship