Towards a Big Data Reference Architecture


Eindhoven University of Technology
Department of Mathematics and Computer Science

Master’s thesis
13th October 2013

Author: Markus Maier [email protected]
Supervisor: dr. G.H.L. Fletcher g.h.l.fl[email protected]
Assessment committee: dr. G.H.L. Fletcher, dr. A. Serebrenik, dr.ir. I.T.P. Vanderfeesten

Abstract

Technologies and promises connected to ‘big data’ have received a lot of attention lately. Leveraging emerging ‘big data’ sources extends the requirements of traditional data management due to the large volume, velocity, variety and veracity of this data. At the same time, it promises to extract value from previously largely unused sources and to use the insights gained from this data for competitive advantage. To realize this value, organizations need to consider new architectures for their data management systems and new technologies to implement these architectures. In this master’s thesis I identify the additional requirements that result from these new characteristics of data, design a reference architecture that combines several data management components to address these requirements, and finally discuss current technologies which can be used to implement the reference architecture. The design of the reference architecture takes an evolutionary approach, building on the traditional enterprise data warehouse architecture and integrating additional components aimed at handling the new requirements. Implementing these components involves technologies like the Apache Hadoop ecosystem and so-called ‘NoSQL’ databases. A verification of the reference architecture finally shows it to be correct and relevant to practice. The proposed reference architecture and a survey of the current state of the art in ‘big data’ technologies guide designers in the creation of systems that create new value from existing but previously under-used data. Such systems provide decision makers with entirely new insights from data on which to base their decisions. These insights can lead to enhancements in companies’ productivity and competitiveness, support innovation and even create entirely new business models.

Preface

This thesis is the result of the final project for my master’s program in Business Information Systems at Eindhoven University of Technology. The project was conducted over a period of seven months within the Web Engineering (formerly Databases and Hypermedia) group in the Mathematics and Computer Science department. I want to use this place to mention and thank a couple of people. First, I want to express my greatest gratitude to my supervisor George Fletcher for all his advice and feedback, and for his engagement and flexibility. Second, I want to thank the members of my assessment committee, Irene Vanderfeesten and Alexander Serebrenik, for reviewing my thesis, attending my final presentation and giving me critical feedback. Finally, I want to thank all the people, family and friends, who supported me during my whole studies and especially during my final project. You helped me through some stressful and rough times and I am very thankful to all of you.

Markus Maier, Eindhoven, 13th October 2013

1 Introduction

1.1 Motivation

Big Data has become one of the buzzwords in IT during the last couple of years. Initially the field was shaped by organizations which had to handle rapidly growing amounts of data, such as web data or data resulting from scientific or business simulations.
Some of those companies’ business models are fundamentally based on indexing and using this large amount of data. The pressure to handle the growing amount of data on the web, for example, led Google to develop the Google File System [119] and MapReduce [94] (a minimal sketch of the MapReduce programming model is given at the end of this section). Efforts were made to rebuild those technologies as open source software, which resulted in Apache Hadoop and the Hadoop File System [12, 226] and laid the foundation for the technologies summarized today as ‘big data’.

With this groundwork in place, traditional information management companies stepped in and invested in extending their software portfolios and building new solutions aimed specifically at big data analysis. Among those companies were IBM [27, 28], Oracle [32], HP [26], Microsoft [31], SAS [35] and SAP [33, 34]. At the same time, start-ups like Cloudera [23] entered the scene. Some of the ‘big data’ solutions are based on Hadoop distributions, others were developed in-house, and companies’ ‘big data’ portfolios are often blended with existing technologies. This is the case, for example, when big data gets integrated with existing data management solutions, but also for complex event processing solutions, which form the basis (in further developed form) for stream processing of big data, e.g. IBM InfoSphere Streams [29].

The effort software companies make to become part of the big data story is not surprising, considering the trends analysts predict and the praise they heap on ‘big data’ and its impact on business and even society as a whole. IDC predicts in its ‘The Digital Universe’ study that the digital data created and consumed per year will grow to 40,000 exabytes by 2020, of which a third (around 13,000 exabytes) will promise value to organizations if processed using big data technologies [115]. IDC also states that in 2012 only 0.5% of potentially valuable data was analyzed, calling this the ‘Big Data Gap’. The McKinsey Global Institute likewise predicts that globally generated data is growing by around 40% per year, and it furthermore describes big data trends in monetary terms: it projects the yearly value of big data analytics for the US health care sector to be around $300 billion, predicts a possible value of around €250 billion for the European public sector, and sees a potential improvement of margins in the retail industry by 60% [163].

With these kinds of promises, the topic was picked up by business and management journals, which emphasize and describe the impact of big data on management practices. One of the terms coined in that context is ‘data-guided management’ [157]. In MIT Sloan Management Review, Thomas H. Davenport discusses how organisations applying and mastering big data differ from organisations with a more traditional approach to data analysis, and what they can gain from it [92]. Harvard Business Review published an article series about big data [58, 91, 166] in which the topic is called a ‘management revolution’ and which describes how ‘big data’ can change management, how an organisational culture needs to change to embrace big data, and what other steps and measures are necessary to make it all work. But the discussion did not stop at business and monetary gains. Several publications also stress the potential of big data to revolutionize science and even society as a whole.
A community whitepaper written by several US data management researchers states that a ‘major investment in Big Data, properly directed, can result not only in major scientific advances but also lay the foundation for the next generation of advances in science, medicine, and business’ [45]. Alex Pentland, director of MIT’s Human Dynamics Laboratory and considered one of the pioneers of incorporating big data into the social sciences, claims that big data can be a major instrument to ‘reinvent society’ and to improve it in the process [177]. While other researchers often talk about relationships in social networks when they talk about big data, Alex Pentland focuses on location data from mobile phones, payment data from credit cards and so on. He describes this data as data about people’s actual behaviour rather than about what they choose to communicate. From his point of view, ‘big data is increasingly about real behavior’ [177] and connections between individuals. In essence he argues that this allows the analysis of systems (social, financial etc.) at the more fine-granular level of micro-transactions between individuals and of ‘micro-patterns’ within these transactions. He further argues that this will allow a far more detailed understanding and a far better design of new systems.

This transformative potential to change the architecture of societies was also recognized by mainstream media and has been brought into public discussion. The New York Times, for example, declared ‘The Age of Big Data’ [157]. Books were published that describe to a general audience how big data transforms the way ‘we live, work and think’ [165] and that present essays and examples of how big data can influence mankind [201].

However, the impact of ‘big data’ and where it is going is not without controversies. Chris Anderson, then editor-in-chief of Wired magazine, started a discourse when he announced ‘the end of theory’ and the obsolescence of the scientific method due to big data [49]. In his essay he claimed that with massive data the scientific method - observe, develop a model and formulate a hypothesis, test the hypothesis by conducting experiments and collecting data, analyse and interpret the data - would be obsolete. He argues that all models or theories are erroneous and that enough data allows one to skip the modelling step and instead leverage statistical methods to find patterns without first formulating hypotheses. In that sense he values correlation over causation. This becomes apparent in the following quote:

Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves. [49]

Chris Anderson is not alone with this statement. While they do not consider it the ‘end of theory’ in general, Viktor Mayer-Schönberger and Kenneth Cukier also emphasize the importance of correlation and favour it over causation [165, pp. 50-72]. Still, this is a rather extreme position, and it is questioned by several other authors. Boyd and Crawford, while not denying the possible value of big data, published an article challenging an overly positive and simplified view of it [73].
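Because the rest of this thesis builds on the Hadoop/MapReduce groundwork introduced above, the sketch promised earlier follows here. It is a minimal, framework-independent simulation of the MapReduce programming model [94] on the canonical word-count example: map emits key-value pairs, the framework groups them by key (the ‘shuffle’), and reduce aggregates each group. The function names are chosen here for illustration and are not part of any framework API; Hadoop executes the same phases distributed across a cluster.

```python
# Single-process sketch of the MapReduce model (illustration only;
# a real framework distributes these phases across many machines).
from collections import defaultdict
from typing import Dict, Iterator, List, Tuple

def map_phase(document: str) -> Iterator[Tuple[str, int]]:
    """Map: emit a (word, 1) pair for every word in one input split."""
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs: Iterator[Tuple[str, int]]) -> Dict[str, List[int]]:
    """Group emitted values by key, as the framework does between phases."""
    groups: Dict[str, List[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key: str, values: List[int]) -> Tuple[str, int]:
    """Reduce: aggregate all values observed for one key."""
    return (key, sum(values))

if __name__ == "__main__":
    # Two 'splits' standing in for blocks of a large distributed file.
    splits = ["big data needs new architectures",
              "big data promises new value"]
    emitted = (pair for doc in splits for pair in map_phase(doc))
    counts = [reduce_phase(key, values)
              for key, values in shuffle(emitted).items()]
    print(sorted(counts))
    # [('architectures', 1), ('big', 2), ('data', 2), ('needs', 1),
    #  ('new', 2), ('promises', 1), ('value', 1)]
```

The appeal of the model, and a reason for its rapid adoption, is visible even in this toy version: the user supplies only the two pure functions, while partitioning, grouping and fault tolerance are left entirely to the framework.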