Big Knowledge Graphs: Use Cases, Analytics and Linking

Big Knowledge Graphs: Use Cases, Analytics and Linking
Company Importance and Similarity Demo
Atanas Kiryakov
LAMBDA Big Data School, Belgrade, June 2019

Presentation Outline
o Introduction
o GraphDB
o Use Cases
o Market Intelligence Vision
o Concept and Entity Awareness via Big Knowledge Graphs
o FactForge: Showcase KG with 2B Statements
o KG Analytics: Similarity and Importance

Mission
We help enterprises to get better insights by interlinking:
o Diverse databases & unstructured information
o Proprietary & global data
We master knowledge graphs, combining several AI technologies:
o Graph analytics, text mining, computer vision
o Symbolic reasoning & machine learning

Essential Facts
o Leader
✓ Semantic technology vendor established in 2000
✓ Part of Sirma Group: 400 persons, listed on the Sofia Stock Exchange
o Profitable and growing
✓ HQ and R&D in Sofia, Bulgaria
✓ More than 70% of the commercial revenues from London and New York
o Innovator: attracted more than €10M in R&D funding
o Trendsetter
✓ Member of: W3C, EDMC, ODI, LDBC, STI, DBpedia Foundation, Pistoia Alliance

Ontotext GraphDB™ - Flagship Product
Source: db-engines.com popularity ranking of graph databases
Note: this is not a ranking by revenue - such information is not available for most of the vendors

Fancy Stuff and Heavy Lifting
o We do advanced analytics: we predicted BREXIT
✓ 14 Jun 2016 whitepaper: #BRExit Twitter Analysis: More Twitter Users Want to Split with EU and Support #Brexit, https://ontotext.com/white-paper-brexit-twitter-analysis/
o But most of the time we do the heavy lifting of data integration and information extraction
✓ Enabling data scientists to do the fancy things

Technology Excellence
o Unique: GraphDB™ + text mining
o Enterprise robust: powers BBC.CO.UK/SPORT and FT.COM
o Serving the most knowledge-intensive enterprises

What is a Knowledge Graph?
o A KG represents a collection of interlinked descriptions of entities - real-world objects - where:
✓ Descriptions have a formal structure that allows both people and computers to process them in an efficient and unambiguous manner;
✓ Entity descriptions contribute to one another, forming a network, where each entity represents part of the description of the entities related to it.
o The Knowledge Graph can be seen as a specific type of:
✓ Database, because it can be queried via structured queries;
✓ Graph, because it can be analyzed as any other network data structure;
✓ Knowledge base, because the data in it bears formal semantics, which can be used to interpret the data and infer new facts.

Discovery in Knowledge Graphs
o Find suspicious patterns like:
✓ A company in the USA
✓ Controls another company in the USA
✓ Through a company in an off-shore zone
o Show news relevant to these companies
(A minimal query sketch of this ownership pattern is shown after the next slide.)

Text Analytics: Annotate Content
The semantic annotation pipeline, illustrated for the sample text "Apple CEO Tim Cook was at a conference with the CEO of Samsung. Tim explained how smart phones are changing the consumer electronics market.":
o Get suggestions: entity detection from the vocabulary proposes candidates such as Apple : Organisation; Tim Cook : Person, CEO; Tim Cook : Person, Footballer; Samsung : Organisation
o NLP pipeline: language detection, POS tagging, dynamic vocabulary gazetteer
o Semantic disambiguation against the vocabulary stored in GraphDB
o Relevance ranking of the disambiguated entities: 87% - Tim Cook : Person, CEO; 68% - Apple : Organisation; 56% - Samsung : Organisation
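As referenced in the Discovery in Knowledge Graphs slide above, the suspicious-ownership pattern can be written as a single SPARQL graph pattern. Below is a minimal, illustrative sketch run with rdflib over a toy graph; the vocabulary (ex:controls, ex:registeredIn, ex:OffshoreZone) and the example companies are invented for illustration and are not the actual FactForge schema.

```python
# Toy illustration of the "US company controls another US company through an
# off-shore company" pattern, using an invented ex: vocabulary (not FactForge).
from rdflib import Graph

TOY_DATA = """
@prefix ex: <http://example.org/> .

ex:AcmeUS    ex:registeredIn ex:USA ;
             ex:controls     ex:ShellCo .
ex:ShellCo   ex:registeredIn ex:Bermuda ;
             ex:controls     ex:WidgetsUS .
ex:WidgetsUS ex:registeredIn ex:USA .
ex:Bermuda   a ex:OffshoreZone .
"""

QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?usCompany ?offshoreCompany ?controlledUsCompany WHERE {
    ?usCompany           ex:registeredIn ex:USA ;
                         ex:controls     ?offshoreCompany .
    ?offshoreCompany     ex:registeredIn ?zone ;
                         ex:controls     ?controlledUsCompany .
    ?zone                a ex:OffshoreZone .
    ?controlledUsCompany ex:registeredIn ex:USA .
}
"""

g = Graph()
g.parse(data=TOY_DATA, format="turtle")
for row in g.query(QUERY):
    print(f"{row.usCompany} -> {row.offshoreCompany} -> {row.controlledUsCompany}")
```

Against a real knowledge graph, the same query shape would be combined with a second step that retrieves the news items linked to the matched companies, as the slide suggests.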
Approach and Applications Portfolio

Presentation Outline
o Introduction
o GraphDB
o Use Cases
o Market Intelligence Vision
o Concept and Entity Awareness via Big Knowledge Graphs
o FactForge: Showcase KG with 2B Statements
o KG Analytics: Similarity and Importance

GraphDB Essentials
o Scalable RDF / SPARQL engine
✓ W3C standards support
o Platform independent (100% Java)
o Open source API
✓ Main contributor to the RDF4J project
o Reasoning and consistency checking
✓ UNIQUE! Efficient reasoning support for big data sets across the full lifecycle of the data: load, query, updates

Architecture
o GraphDB Workbench: user-friendly interface for database administration
o GraphDB Engine: REST API for database access
o Plugins / Connectors

GraphDB Workbench
o SPARQL editor & autocomplete
o Schema visualization
o Graph exploration
o Database monitoring and administration

GraphDB Workbench
o Generation of RDF from structured sources
o Data cleaning and transformations
o Integration with OpenRefine and the GREL language

Visual Graph

GraphDB Enterprise: Resilience & Availability
The editions (Free, Standard, Enterprise) are compared across the following features:
o RDF 1.1 support
o SPARQL 1.1 support
o RDFS, OWL2 RL and QL reasoning
o Efficient query execution
o Workbench interface
o Community support
o Unlimited number of CPU cores
o Commercial support
o Connectors for Elasticsearch & SOLR
o High-availability cluster
o Managed service

High Availability Cluster Architecture
o Improved resilience
✓ Failover, dynamic configuration
o Improved query bandwidth
✓ A larger cluster means more queries per unit time
o Multiple data centres deployment (multi-DC data governance)
o Integration with search engines
o Integration with MongoDB
(Cluster diagram: a read-write master and a read-only master in front of Worker 1, Worker 2 and Worker 3, each worker with its own SOLR/Elasticsearch connector.)

GraphDB Benchmarking
o LDBC: TPC-like benchmarks for graph databases
o Members include: Ontotext, OpenLink, Neo4j, CWI, UPM, Oracle, IBM, Sparsity
o LDBC Semantic Publishing Benchmark (SPB)
✓ Based on the BBC's Dynamic Semantic Publishing editorial workflow
✓ Updates add new content metadata or update the reference knowledge (e.g. new people)
✓ Aggregation queries retrieve content according to various criteria (e.g. to generate a topic web page)
✓ The only benchmark that involves reasoning and updates

LDBC SPB Results of GraphDB
o CPU: 1 x E5-1650
o RAM: 20G heap
o Dataset: LDBC SPB 256
o DB: GraphDB SE 8.0
o RDF statements: 254,948,985 (explicit), 480,405,141 (total)
o Creative works: 8,821,535

Clients reading / writing    Reads/s    Writes/s
 0 / 1                        0.0000    11.4067
 0 / 2                        0.0000    14.3033
 0 / 4                        0.0000    14.6700
 0 / 8                        0.0000    15.1067
 1 / 0                       17.8258     0.0000
 4 / 0                       43.0833     0.0000
 8 / 0                       70.3767     0.0000
16 / 0                       83.2633     0.0000
 8 / 2                       52.5667     9.2867
 8 / 4                       54.0233     9.6167
 8 / 8                       54.9067     9.5733
10 / 2                       59.9467     8.5333
10 / 4                       62.2867     8.4767
10 / 8                       61.7167     8.6067
16 / 2                       68.8100     5.0600
16 / 4                       70.3900     5.1067
16 / 8                       70.2300     4.9967
16 / 16                      70.9467     5.0567

GraphDB SE vs AWS Neptune
o Berlin SPARQL Benchmark (BSBM) - 100M scale
✓ No inference (because Neptune does not support inference)
✓ Established for many years
✓ Requested help from AWS Neptune to get the best possible results

Setup                    GraphDB SE                       AWS Neptune
Version                  8.6                              1.0.1.0200237.0
AWS instance             r4.large (2 vCPU, 15.25G RAM)    db.r4.large (2 vCPU, 15.25G RAM)
Storage                  EBS (gp1)                        ?
Data loading protocol    HTTP POST (RDF4J)                Load TTL from an S3 bucket
Load type                ACID-compliant                   Non-ACID-compliant
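The "HTTP POST (RDF4J)" loading protocol in the setup table above refers to GraphDB's RDF4J-compatible REST interface. A minimal sketch of loading Turtle data and then querying it over that interface is shown below; the localhost URL on the default port 7200 and the repository name "test" are assumptions about a stock local installation, so adjust them to your setup.

```python
# Minimal sketch of loading RDF and querying GraphDB via its RDF4J-style REST
# API. Assumes a local GraphDB on the default port 7200 with an existing
# repository named "test" (both are assumptions for this example).
import requests

REPO = "http://localhost:7200/repositories/test"

# 1. Add a small Turtle document to the repository.
ttl = """
@prefix ex: <http://example.org/> .
ex:GraphDB a ex:TripleStore ; ex:vendor ex:Ontotext .
"""
resp = requests.post(f"{REPO}/statements",
                     data=ttl.encode("utf-8"),
                     headers={"Content-Type": "text/turtle"})
resp.raise_for_status()

# 2. Run a SPARQL SELECT query and read the JSON results.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
resp = requests.get(REPO,
                    params={"query": query},
                    headers={"Accept": "application/sparql-results+json"})
resp.raise_for_status()
for b in resp.json()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```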
GraphDB SE vs AWS Neptune (2)

                         GraphDB SE    AWS Neptune    Neptune/GraphDB
RDF import operation (100M RDF triples dataset) - lower is better
Loading time              1,895        12,149         641%
Read queries (1 client vs 100M RDF triples dataset) - higher is better
Query 1 QPS              309.96         41.25          13%
Query 2 QPS              255.23         67.77          27%
Query 3 QPS              289.98         39.73          14%
Query 4 QPS              232.70         37.71          16%
Query 5 QPS               23.18          2.20           9%
Query 7 QPS              171.75         39.78          23%
Query 8 QPS              229.17         36.33          16%
Query 9 QPS              406.16        115.93          29%
Query 10 QPS             234.97         37.43          16%
Query 11 QPS             266.93         63.97          24%
Query 12 QPS             249.47        102.32          41%
Total                 20,122.74      2,679.15          13%

Presentation Outline
o Introduction
o GraphDB
o Use Cases
o Market Intelligence Vision
o Concept and Entity Awareness via Big Knowledge Graphs
o FactForge: Showcase KG with 2B Statements
o KG Analytics: Similarity and Importance

2010: Semantic Publishing at the BBC - Use Case
o Goals
✓ Create a dynamic semantic publishing platform that assembles web pages on-the-fly using a variety of data sources
✓ Deliver highly relevant data to web site visitors with sub-second response
"The goal is to be able to more easily and accurately aggregate content, find it and share it across many sources. From these simple relationships and building blocks you can dynamically build up incredibly rich sites and navigation on any platform." - John O'Donovan, Chief Technical Architect, BBC

Use Cases in Media
o Dynamic Semantic Publishing (a minimal topic-page aggregation sketch follows at the end of this section)
✓ Client: BBC
✓ Task: power a dynamic media website with thousands of topical pages
✓ Projects: BBC.CO.UK/Sport and the BBC's London 2012 Olympics websites
✓ Technology challenges: text analysis; reasoning; a database load that combines frequent updates with high query throughput
o Metadata management for scientific publishers
✓ Client: Elsevier, John Wiley
✓ Task: manage a large volume of rich and complex metadata about scientific articles

Use Cases in Healthcare and Life Sciences
o Semantic Medical Coding
✓ Client: insurance companies, EMR processing, etc.
✓ Transforms raw patient data into structured knowledge
✓ Enriches data by applying medical ontologies (SNOMED, LOINC and UMLS)
✓ Loads extracted and normalised information into the medical Knowledge Graph
o Data Integration for Pharma Insights
✓ Client: pharmaceutical and biotech companies
✓ Unifies public & private data sources, structured knowledge extracted by text mining & semantic data integration
✓ Combines internal and standard public terminologies like MedDRA and SNOMED

Use Cases in Compliance
o Adverse Media Monitoring
✓ Client: top-5 US bank
✓ Task: monitor negative news and media about people of interest and related entities
✓ News sourced from Factiva; Factiva's adverse-events coding does not meet the client's needs
o Compliance with changing regulations
✓ Client:
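As referenced in the Dynamic Semantic Publishing use case above, topic pages are assembled by querying for all creative works annotated as being about a given entity. The sketch below illustrates that aggregation step with rdflib; the cwork:/topic: vocabulary and the sample articles are invented for illustration and are not the BBC's actual ontology.

```python
# Toy illustration of dynamic semantic publishing: creative works are tagged
# with the entities they are "about", and a topic page is assembled by an
# aggregation query. The cwork:/topic: vocabulary is invented, not the BBC's.
from rdflib import Graph

DATA = """
@prefix cwork: <http://example.org/creative-work/> .
@prefix topic: <http://example.org/topic/> .

<urn:article:1> a cwork:Article ; cwork:title "Striker signs new contract" ;
                cwork:about topic:FC_Example .
<urn:article:2> a cwork:Article ; cwork:title "Match report: 2-1 away win" ;
                cwork:about topic:FC_Example , topic:PremierLeague .
<urn:article:3> a cwork:Article ; cwork:title "League fixtures announced" ;
                cwork:about topic:PremierLeague .
"""

QUERY = """
PREFIX cwork: <http://example.org/creative-work/>
PREFIX topic: <http://example.org/topic/>
SELECT ?work ?title WHERE {
    ?work cwork:about topic:FC_Example ;
          cwork:title ?title .
}
"""

g = Graph()
g.parse(data=DATA, format="turtle")
print("Topic page: FC Example")
for work, title in g.query(QUERY):
    print("-", title)
```

In the production setting described above, the same query shape runs against GraphDB under a mix of frequent metadata updates and high read throughput, which is the load profile the LDBC SPB benchmark models.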
Recommended publications
  • Ontotext Platform Documentation Release 3.4 Ontotext
    Ontotext Platform Documentation, Release 3.4 (Ontotext, Apr 16, 2021). The overview chapter presents a layered view of the platform: an Application Layer (Ontotext Platform Workbench, GraphDB Workbench); a Service Layer (Semantic Objects exposed over GraphQL, GraphQL Federation via Apollo, a Text Analytics Service and an Annotation Service); a Data Layer (GraphDB as the graph database, MongoDB for Semantic Object schema storage, Semantic Objects for MongoDB and for Elasticsearch); Authentication and Authorization (FusionAuth, Semantic Objects RBAC); Kubernetes (ingress and gateway); and an Operation Layer (health checking, Telegraf).
  • Data Platforms Map from 451 Research
    (Figure: a map of the data-platforms landscape from 451 Research, January 2016, grouping Hadoop distributions, stream-processing frameworks, enterprise-search engines, and relational and non-relational analytic databases into vendor zones.)
  • Benchmarking RDF Query Engines: the LDBC Semantic Publishing Benchmark
    Benchmarking RDF Query Engines: The LDBC Semantic Publishing Benchmark. V. Kotsev (Ontotext, Bulgaria), N. Minadakis (Institute of Computer Science-FORTH, Greece), V. Papakonstantinou (ICS-FORTH, Greece), O. Erling (OpenLink Software, Netherlands), I. Fundulaki (ICS-FORTH, Greece), and A. Kiryakov (Ontotext, Bulgaria). Abstract: The Linked Data paradigm, which is now the prominent enabler for sharing huge volumes of data by means of Semantic Web technologies, has created novel challenges for non-relational data management technologies such as RDF and graph database systems. Benchmarking, which is an important factor in the development of research on RDF and graph data management technologies, must address these challenges. In this paper we present the Semantic Publishing Benchmark (SPB) developed in the context of the Linked Data Benchmark Council (LDBC) EU project. It is based on the scenario of the BBC media organisation, which makes heavy use of Linked Data technologies such as RDF and SPARQL. In SPB a large number of aggregation agents provide the heavy query workload, while at the same time a steady stream of editorial agents execute a number of update operations. In this paper we describe the benchmark's schema, data generator, workload and report the results of experiments conducted using SPB for the Virtuoso and GraphDB RDF engines. Keywords: RDF, Linked Data, Benchmarking, Graph Databases. 1 Introduction: Non-relational data management is emerging as a critical need in the era of a new data economy where heterogeneous, schema-less, and complexly structured data from a number of domains are published in RDF. In this new environment, where the Linked Data paradigm is now the prominent enabler for sharing huge volumes of data, several data management challenges are present, which RDF and graph database technologies are called to tackle.
  • Graphdb-Free.Pdf
    GraphDB Free Documentation, Release 8.5 (Ontotext, Jun 17, 2019). The general chapter covers About GraphDB; architecture and components (RDF4J, the Sail API, Engine, Connectors, Workbench); GraphDB Free and its comparison with GraphDB SE; Connectors; and the Workbench. A quick-start guide explains running GraphDB as a desktop installation (Windows, Mac OS, Linux), configuring and stopping it, and running it as a stand-alone server with its options, paths and network settings.
  • Remote Sensing
    Remote Sens. 2015, 7, 9473-9491; doi:10.3390/rs70709473 OPEN ACCESS remote sensing ISSN 2072-4292 www.mdpi.com/journal/remotesensing Article Improving the Computational Performance of Ontology-Based Classification Using Graph Databases Thomas J. Lampoltshammer 1,2,* and Stefanie Wiegand 3 1 School of Information Technology and Systems Management, Salzburg University of Applied Sciences, Urstein Süd 1, Puch, Salzburg 5412, Austria 2 Department of Geoinformatics (Z_GIS), University of Salzburg, Schillerstrasse 30, Salzburg 5020, Austria 3 IT Innovation Centre, University of Southampton, Gamma House, Enterprise Road, Southampton SO16 7NS, UK; E-Mail: [email protected] * Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +43-50-2211 (ext. 1311); Fax: +43-50-2211 (ext. 1349). Academic Editors: Ioannis Gitas and Prasad S. Thenkabail Received: 31 March 2015 / Accepted: 17 July 2015 / Published: 22 July 2015 Abstract: The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-) automated classification approaches are in high demand in affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities—so-called individuals—is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of a high time consumption regarding the classification task.
  • Ontop: Answering SPARQL Queries Over Relational Databases
    Ontop: Answering SPARQL Queries over Relational Databases. Diego Calvanese (a), Benjamin Cogrel (a), Sarah Komla-Ebri (a), Roman Kontchakov (b), Davide Lanti (a), Martin Rezk (a), Mariano Rodriguez-Muro (c), and Guohui Xiao (a); (a) Free University of Bozen-Bolzano, (b) Birkbeck, University of London, (c) IBM TJ Watson. Abstract: In this paper we present Ontop, an open-source Ontology Based Data Access (OBDA) system that allows for querying relational data sources through a conceptual representation of the domain of interest, provided in terms of an ontology, to which the data sources are mapped. Key features of Ontop are its solid theoretical foundations, a virtual approach to OBDA that avoids materializing triples and that is implemented through query rewriting techniques, extensive optimizations exploiting all elements of the OBDA architecture, its compliance to all relevant W3C recommendations (including SPARQL queries, R2RML mappings, and OWL 2 QL and RDFS ontologies), and its support for all major relational databases. Keywords: Ontop, OBDA, Databases, RDF, SPARQL, Ontologies, R2RML, OWL. 1 Introduction (excerpt): Over the past 20 years we have moved from a world where most companies had one all-knowing self-contained central database to a world where companies buy and sell their data, interact with several […] vocabulary, models the domain, hides the structure of the data sources, and can enrich incomplete data with background knowledge. Then, queries are posed over this high-level conceptual view, and the users no longer need an understanding of the data sources, the relation between them, or the encoding of the data.
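    To make the virtual, query-rewriting approach described above concrete, here is a deliberately simplified conceptual sketch: a mapping ties an ontology class and its properties to a relational table, and a request over the ontology vocabulary is answered by rewriting it into SQL rather than by materializing triples. This is not Ontop's actual API, and every table and column name below is invented.

```python
# Conceptual sketch of OBDA query rewriting (not Ontop's API): an R2RML-like
# mapping relates a class and its properties to a table, and a request over
# the ontology vocabulary is rewritten into SQL over the source.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Mapping:
    rdf_class: str                      # e.g. ":Company"
    table: str                          # relational table holding the instances
    id_column: str                      # column used to build subject IRIs
    property_columns: Dict[str, str]    # RDF property -> column name

MAPPINGS = [
    Mapping(":Company", "companies", "id",
            {":name": "legal_name", ":country": "country_code"}),
]

def rewrite(rdf_class: str, wanted_properties: List[str]) -> str:
    """Rewrite '?x a <class> ; <p> ?v' into a SQL projection over the mapped table."""
    m = next(m for m in MAPPINGS if m.rdf_class == rdf_class)
    cols = [m.id_column] + [m.property_columns[p] for p in wanted_properties]
    return f"SELECT {', '.join(cols)} FROM {m.table}"

# A pattern asking for every :Company with its :name becomes plain SQL:
print(rewrite(":Company", [":name"]))   # -> SELECT id, legal_name FROM companies
```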
  • Annotation of Existing Databases Using Semantic Web Technologies: Making Data More FAIR
    Annotation of existing databases using Semantic Web technologies: making data more FAIR. Johan van Soest (1,2), Ananya Choudhury (1), Nikhil Gaikwad (1), Matthijs Sloep (1), Michel Dumontier (2), Andre Dekker (1); (1) Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands; (2) Institute of Data Science, Maastricht University, Maastricht, the Netherlands. Abstract: Making data FAIR is an elaborate task. Hospitals and/or departments have to invest into technologies usually unknown and often do not have the resources to make data FAIR. Our work aims to provide a framework and tooling where users can easily make their data (more) FAIR. This framework uses RDF and OWL-based inferencing to annotate existing databases or comma-separated files. For every database, a custom ontology is built based on the database schema, which can be annotated to describe matching standardized terminologies. In this work, we describe the tooling developed, and the current implementation in an institutional data warehouse pertaining to over 3000 rectal cancer patients. We report on the performance (time) of the extraction and annotation process by the developed tooling. Furthermore, we show that annotation of existing databases using OWL2-based reasoning is possible. We also show that the ontology extracted from existing databases can provide a description framework to describe and annotate existing data sources. This would target mostly the "Interoperable" aspect of FAIR. Keywords: FAIR, annotations, terminologies, linked data. 1 Introduction: Semantic interoperability has been a topic in medical informatics since the introduction of the digital patient chart [1].
  • Graphdb Free Documentation Release 8.8
    GraphDB Free Documentation, Release 8.8 (Ontotext, Mar 13, 2019). Structured like the Release 8.5 documentation above: a general chapter on About GraphDB, architecture and components (RDF4J, the Sail API, Engine, Connectors, Workbench), the comparison of GraphDB Free and GraphDB SE, Connectors and the Workbench, followed by a quick-start guide for desktop installations (Windows, Mac OS, Linux) and for running GraphDB as a stand-alone server.
  • Graphdb SE Documentation Release 7.2
    GraphDB SE Documentation, Release 7.2 (Ontotext, Oct 28, 2016). The general chapter covers About GraphDB; architecture and components (Sesame, the SAIL API, Engine, Connectors, Workbench); GraphDB SE and its comparison with GraphDB Free; GraphDB SE in the cloud (overview, Amazon Web Services, pricing details, setup and usage); Connectors; and the Workbench. A quick-start guide explains starting the database as a stand-alone server, configuring it and stopping it.
  • Downloaded from ORCA, Cardiff University's Institutional Repository
    This is an Open Access document downloaded from ORCA, Cardiff University's institutional repository (http://orca.cf.ac.uk/105344/); it is the authors' version of a work accepted for publication, and the publisher's version should be consulted for citation. Citation for the final published version: Hippolyte, J.-L., Rezgui, Y., Li, H., Jayan, B. and Howell, S. 2018. Ontology-driven development of web services to support district energy applications. Automation in Construction 86, pp. 210-225. http://dx.doi.org/10.1016/j.autcon.2017.10.004. Authors' affiliation: BRE Trust Centre for Sustainable Engineering, Cardiff School of Engineering, Queens Buildings, The Parade, Cardiff, CF24 3AA, United Kingdom. Abstract: Current urban and district energy management systems lack a common semantic referential for effectively interrelating intelligent sensing, data models and energy models with visualization, analysis and decision support tools.
  • RDF Triplestores and SPARQL Endpoints
    RDF triplestores and SPARQL endpoints. Lecturer: Mathias Bonduel ([email protected]). LDAC summer school 2019 - Lisbon, Portugal.
    Lecture outline:
    • Storing RDF data: RDF triplestores
    o Available methods to store RDF data
    o RDF triplestores
    o Triplestore applications - databases - default graph - named graphs
    o List of triplestore applications
    o Comparing triplestores
    o Relevant triplestore settings
    o Communication with triplestores
    • Distributing RDF data: SPARQL endpoints
    o Available methods to distribute RDF data
    o SPARQL endpoints
    o Reuse of SPARQL queries
    o SPARQL communication protocol: requests and responses
    Available methods to store RDF data:
    • In-memory storage (local RAM): the working memory of an application (e.g. a client-side web app or desktop app); frameworks/libraries: RDFLib (Python), rdflib.js (JavaScript), N3 (JavaScript), rdfstore-js (JavaScript), Jena (Java), RDF4J (Java), dotNetRDF (.NET), etc.; varied support for SPARQL querying
    • Persistent storage (storage drive): an RDF file/dump in different RDF serializations (TTL, RDF/XML, N-Quads, JSON-LD, N-Triples, TriG, N3, TriX, RDFa embedded in HTML, etc.); an RDF triplestore; (ontology editing applications: Protégé, TopBraid Composer, etc.)
    RDF triplestores - "a database to store and query RDF triples":
    • A member of the family of graph/NoSQL databases
    • Data structure: RDF; main query language: SPARQL (both W3C standards)
    • Oftentimes support for RDFS/OWL/rules reasoning
    • Data storage is typically persistent
    Triplestore applications - databases - default graph - named graphs:
    • An RDF triplestore instance (application) can have one or multiple databases (repositories)
    • Each database has one default graph and zero or more named graphs; a good practice is to place the TBox in a separate named graph (a minimal sketch of this follows below)
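    A minimal sketch of the default-graph / named-graph organisation mentioned in the last bullet above, using rdflib's Dataset; the graph IRIs and the tiny TBox/ABox triples are illustrative assumptions, not content from the lecture.

```python
# Minimal sketch of named graphs: keep the TBox (schema) and the ABox
# (instance data) in separate named graphs of one dataset, then address them
# explicitly with GRAPH in SPARQL. Graph IRIs and triples are invented.
from rdflib import Dataset, Namespace, URIRef, RDF, RDFS

EX = Namespace("http://example.org/")
ds = Dataset()

# TBox (schema) in its own named graph, as the lecture outline recommends.
tbox = ds.graph(URIRef("http://example.org/graphs/tbox"))
tbox.add((EX.Company, RDF.type, RDFS.Class))

# ABox (instance data) in another named graph.
abox = ds.graph(URIRef("http://example.org/graphs/abox"))
abox.add((EX.Ontotext, RDF.type, EX.Company))

# A SPARQL query can address each named graph explicitly.
q = "SELECT ?g ?s ?p ?o WHERE { GRAPH ?g { ?s ?p ?o } }"
for g, s, p, o in ds.query(q):
    print(g, s, p, o)
```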
  • Similarity Search in Knowledge Graphs: Adapting the Vector Space Model
    Similarity Search in Knowledge Graphs: Adapting the Vector Space Model. Atanas Kiryakov (Ontotext, Sirma AI, Sofia, Bulgaria) and Svetla Boytcheva (Ontotext, Sirma AI, Sofia, Bulgaria; Institute of Information and Communication Technologies, Bulgarian Academy of Sciences). Abstract: Exploring diverse knowledge graphs with SPARQL queries requires a laborious process of determining the appropriate predicates, classes and graph patterns. Another drawback is that such structured queries represent Boolean search without relevance ranking, which is impractical for flexible querying of big volumes of data. We present an experiment of adaptation of the Vector Space Model (VSM) document retrieval technique for knowledge graphs. As a first demonstration we implemented SPARQL queries, which retrieve similar cities in FactForge - a graph of more than 2 billion statements, combining DBPedia, GeoNames and other data. The basic methods from this paper are augmented with graph analytics and embedding techniques and formally evaluated in [2]. Keywords: Knowledge Graphs, Similarity, Graph Embedding. 1 Motivation: In a big data era, characterized by 5V (volume, velocity, variety, veracity, and value), knowledge management is a quite challenging task and requires the design and development of new smart and efficient solutions. One prominent new paradigm are the so-called Knowledge Graphs (KG), which put data in context via linking and semantic metadata and this way provide a framework for data integration, unification, analytics and sharing. Given a critical mass of domain knowledge and good level of connectivity, KGs can serve as context that helps computers comprehend and manipulate data. Data in KG is accessed via structured query languages such as SPARQL.
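    A minimal sketch of the VSM adaptation described above: each entity is reduced to a bag of graph features (here invented predicate:value pairs rather than real FactForge descriptions), the features are weighted with a TF-IDF-style scheme, and entities are compared by cosine similarity.

```python
# Toy illustration of Vector Space Model similarity over entity descriptions.
# The feature sets are invented stand-ins for predicate-object pairs that a
# SPARQL query would extract from a knowledge graph such as FactForge.
import math
from collections import Counter

entities = {
    "Sofia":    ["country:Bulgaria", "type:Capital", "timezone:EET"],
    "Belgrade": ["country:Serbia",   "type:Capital", "timezone:CET"],
    "Plovdiv":  ["country:Bulgaria", "type:City",    "timezone:EET"],
}

# Document frequency of each feature across all entities (for IDF weighting).
df = Counter(f for feats in entities.values() for f in set(feats))
n = len(entities)

def vector(feats):
    """Weight each feature by term frequency times a smoothed inverse document frequency."""
    tf = Counter(feats)
    return {f: tf[f] * math.log(1 + n / df[f]) for f in tf}

def cosine(a, b):
    dot = sum(w * b.get(f, 0.0) for f, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = {name: vector(feats) for name, feats in entities.items()}
target = "Sofia"
for name, vec in vectors.items():
    if name != target:
        print(f"similarity({target}, {name}) = {cosine(vectors[target], vec):.3f}")
```

    With this toy data, Plovdiv comes out more similar to Sofia than Belgrade does, because it shares the country and timezone features; the paper's SPARQL-based variant applies the same idea to the full entity descriptions in the graph.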