Rdflib Documentation


rdflib Documentation
Release 4.2.2
RDFLib Team, January 29, 2017

Contents

1  Getting started
2  In depth
3  Reference
4  For developers
5  Indices and tables
   Python Module Index

RDFLib is a pure Python package for working with RDF. RDFLib contains most things you need to work with RDF, including:

• parsers and serializers for RDF/XML, N3, NTriples, N-Quads, Turtle, TriX, RDFa and Microdata
• a Graph interface which can be backed by any one of a number of Store implementations
• store implementations for in-memory storage and persistent storage on top of Berkeley DB
• a SPARQL 1.1 implementation, supporting SPARQL 1.1 queries and update statements

Chapter 1: Getting started

If you have never used RDFLib, click through these sections.

1.1 Getting started with RDFLib

1.1.1 Installation

RDFLib is open source and is maintained in a GitHub repository. RDFLib releases, current and previous, are listed on PyPI. The best way to install RDFLib is to use easy_install or pip:

    $ easy_install rdflib

Support is available through the rdflib-dev group (http://groups.google.com/group/rdflib-dev) and on the IRC channel #rdflib on the freenode.net server.

The primary interface that RDFLib exposes for working with RDF is a Graph. The package uses various Python idioms that offer an appropriate way to introduce RDF to a Python programmer who hasn't worked with RDF before.

RDFLib graphs are not sorted containers; they have ordinary set operations (e.g. add() to add a triple) plus methods that search triples and return them in arbitrary order.

RDFLib graphs also redefine certain built-in Python methods in order to behave in a predictable way; they emulate container types and are best thought of as a set of 3-item triples:

    [
        (subject,  predicate,  object),
        (subject1, predicate1, object1),
        ...
        (subjectN, predicateN, objectN)
    ]

A tiny usage example:

    import rdflib

    g = rdflib.Graph()
    result = g.parse("http://www.w3.org/People/Berners-Lee/card")

    print("graph has %s statements." % len(g))
    # prints graph has 79 statements.

    for subj, pred, obj in g:
        if (subj, pred, obj) not in g:
            raise Exception("It better be!")

    s = g.serialize(format='n3')

A more extensive example:

    from rdflib import Graph, Literal, BNode, Namespace, RDF, URIRef
    from rdflib.namespace import DC, FOAF

    g = Graph()

    # Create an identifier to use as the subject for Donna.
    donna = BNode()

    # Add triples using store's add method.
    g.add((donna, RDF.type, FOAF.Person))
    g.add((donna, FOAF.nick, Literal("donna", lang="foo")))
    g.add((donna, FOAF.name, Literal("Donna Fales")))
    g.add((donna, FOAF.mbox, URIRef("mailto:[email protected]")))

    # Iterate over triples in store and print them out.
    print("--- printing raw triples ---")
    for s, p, o in g:
        print((s, p, o))

    # For each foaf:Person in the store print out its mbox property.
    print("--- printing mboxes ---")
    for person in g.subjects(RDF.type, FOAF.Person):
        for mbox in g.objects(person, FOAF.mbox):
            print(mbox)

    # Bind a few prefix, namespace pairs for more readable output.
    g.bind("dc", DC)
    g.bind("foaf", FOAF)

    print(g.serialize(format='n3'))

Many more examples can be found in the examples folder in the source distribution.
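Since graphs emulate container types, the usual Python container idioms carry over. The following is a minimal sketch (using a made-up http://example.org/ namespace rather than the data above) illustrating len(), membership tests, and iteration:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/")   # hypothetical namespace, just for this sketch

    g = Graph()
    g.add((EX.alice, RDF.type, FOAF.Person))
    g.add((EX.alice, FOAF.name, Literal("Alice")))

    # len() counts the triples in the graph.
    print(len(g))                                   # 2

    # 'in' tests membership of a complete triple.
    print((EX.alice, RDF.type, FOAF.Person) in g)   # True

    # Iteration yields (subject, predicate, object) tuples in arbitrary order.
    for s, p, o in g:
        print(s, p, o)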
1.2 Loading and saving RDF

1.2.1 Reading an NT file

RDF data has various syntaxes (xml, n3, ntriples, trix, etc.) that you might want to read. The simplest format is ntriples, a line-based format. Create the file demo.nt in the current directory with these two lines:

    <http://bigasterisk.com/foaf.rdf#drewp> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person>.
    <http://bigasterisk.com/foaf.rdf#drewp> <http://example.com/says> "Hello world".

You need to tell RDFLib what format to parse; use the format keyword parameter to parse(). You can pass either a mime-type or the parser name (a list of available parsers is available). If you are not sure what format your file will be, you can use rdflib.util.guess_format(), which will guess based on the file extension.

In an interactive Python interpreter, try this:

    from rdflib import Graph

    g = Graph()
    g.parse("demo.nt", format="nt")

    len(g)
    # prints 2

    import pprint
    for stmt in g:
        pprint.pprint(stmt)
    # prints:
    # (rdflib.term.URIRef('http://bigasterisk.com/foaf.rdf#drewp'),
    #  rdflib.term.URIRef('http://example.com/says'),
    #  rdflib.term.Literal(u'Hello world'))
    # (rdflib.term.URIRef('http://bigasterisk.com/foaf.rdf#drewp'),
    #  rdflib.term.URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type'),
    #  rdflib.term.URIRef('http://xmlns.com/foaf/0.1/Person'))

The final lines show how RDFLib represents the two statements in the file. The statements themselves are just length-3 tuples, and the subjects, predicates, and objects are all rdflib types.

1.2.2 Reading remote graphs

Reading graphs from the net is just as easy:

    g.parse("http://bigasterisk.com/foaf.rdf")
    len(g)
    # prints 42

The format defaults to xml, which is the common format for .rdf files you'll find on the net.

RDFLib will also happily read RDF from any file-like object, i.e. anything with a .read method.

1.3 Creating RDF triples

1.3.1 Creating Nodes

RDF is a graph where the nodes are URI references, Blank Nodes or Literals, represented in RDFLib by the classes URIRef, BNode, and Literal. URIRefs and BNodes can both be thought of as resources, such as a person, a company, a web site, etc. A BNode is a node where the exact URI is not known. URIRefs are also used to represent the properties/predicates in the RDF graph. Literals represent attribute values, such as a name, a date, a number, etc.

Nodes can be created by the constructors of the node classes:

    from rdflib import URIRef, BNode, Literal

    bob = URIRef("http://example.org/people/Bob")
    linda = BNode()          # a GUID is generated

    name = Literal('Bob')    # passing a string
    age = Literal(24)        # passing a python int
    height = Literal(76.5)   # passing a python float

Literals can be created from Python objects; this creates data-typed literals. For the details on the mapping, see Literals.

For creating many URIRefs in the same namespace, i.e. URIs with the same prefix, RDFLib has the rdflib.namespace.Namespace class:

    from rdflib import Namespace

    n = Namespace("http://example.org/people/")

    n.bob  # = rdflib.term.URIRef(u'http://example.org/people/bob')
    n.eve  # = rdflib.term.URIRef(u'http://example.org/people/eve')

This is very useful for schemas where all properties and classes have the same URI prefix. RDFLib pre-defines Namespaces for the most common RDF schemas:

    from rdflib.namespace import RDF, FOAF

    RDF.type    # = rdflib.term.URIRef(u'http://www.w3.org/1999/02/22-rdf-syntax-ns#type')
    FOAF.knows  # = rdflib.term.URIRef(u'http://xmlns.com/foaf/0.1/knows')
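As a small, illustrative sketch of this datatype mapping (the literal values here are arbitrary), typed literals expose their datatype and convert back to plain Python values, and a Namespace can also be indexed with brackets for local names that are not valid Python identifiers:

    from rdflib import Literal, Namespace
    from rdflib.namespace import XSD

    n = Namespace("http://example.org/people/")

    age = Literal(24)                               # datatype inferred as xsd:integer
    height = Literal("76.5", datatype=XSD.double)   # datatype given explicitly

    print(age.datatype)       # http://www.w3.org/2001/XMLSchema#integer
    print(height.toPython())  # 76.5, converted back to a Python float

    # Bracket access works for local names that are not valid Python identifiers.
    print(n["first-name"])    # http://example.org/people/first-name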
1.3.2 Adding Triples

We already saw in Loading and saving RDF how triples can be added with the parse() function. Triples can also be added with the add() method:

    Graph.add((s, p, o))
        Add a triple with self as context.

add() takes a 3-tuple of RDFLib nodes. Try the following with the nodes and namespaces we defined previously:

    from rdflib import Graph
    g = Graph()

    g.add((bob, RDF.type, FOAF.Person))
    g.add((bob, FOAF.name, name))
    g.add((bob, FOAF.knows, linda))
    g.add((linda, RDF.type, FOAF.Person))
    g.add((linda, FOAF.name, Literal('Linda')))

    print(g.serialize(format='turtle'))

outputs:

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix xml: <http://www.w3.org/XML/1998/namespace> .

    <http://example.org/people/Bob> a foaf:Person ;
        foaf:knows [ a foaf:Person ;
                foaf:name "Linda" ] ;
        foaf:name "Bob" .

For some properties, only one value per resource makes sense (i.e. they are functional properties, or have a max cardinality of 1). The set() method is useful for this:

    g.add((bob, FOAF.age, Literal(42)))
    print("Bob is", g.value(bob, FOAF.age))
    # prints: Bob is 42

    g.set((bob, FOAF.age, Literal(43)))  # replaces the 42 set above
    print("Bob is now", g.value(bob, FOAF.age))
    # prints: Bob is now 43

rdflib.graph.Graph.value() is the matching query method; it will return a single value for a property, optionally raising an exception if there are more.

You can also add triples by combining entire graphs; see Set Operations on RDFLib Graphs.

Removing Triples

Similarly, triples can be removed by a call to remove():

    Graph.remove((s, p, o))
        Remove a triple from the graph. If the triple does not provide a context attribute, removes the triple from all contexts.

When removing, it is possible to leave parts of the triple unspecified (i.e. passing None); this will remove all matching triples:

    g.remove((bob, None, None))  # remove all triples about bob

An example

LiveJournal produces FOAF data for its users, but they seem to use foaf:member_name for a person's full name. To align with data from other sources, it would be nice to have foaf:name act as a synonym for foaf:member_name (a poor man's one-way owl:equivalentProperty):

    from rdflib.namespace import FOAF

    g.parse("http://danbri.livejournal.com/data/foaf")
    for s, _, n in g.triples((None, FOAF['member_name'], None)):
        g.add((s, FOAF['name'], n))

1.4 Navigating Graphs

An RDF Graph is a set of RDF triples, and we try to mirror exactly this in RDFLib; the graph tries to emulate a container type.

1.4.1 Graphs as Iterators

RDFLib graphs override __iter__() in order to support iteration over the contained triples:

    for subject, predicate, obj in someGraph:
        if not (subject, predicate, obj) in someGraph:
            raise Exception("Iterator / Container Protocols are Broken!!")
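In addition to plain iteration, triples() accepts a pattern in which any position may be None as a wildcard, and there are convenience accessors that return only the missing parts. A minimal sketch, rebuilding a tiny graph locally so it stands on its own:

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    bob = URIRef("http://example.org/people/Bob")
    g.add((bob, RDF.type, FOAF.Person))
    g.add((bob, FOAF.name, Literal("Bob")))

    # Any position given as None acts as a wildcard.
    for s, p, o in g.triples((bob, None, None)):
        print(p, o)

    # Convenience accessors return just the missing parts.
    for name in g.objects(bob, FOAF.name):
        print(name)

    # value() returns a single matching object (or None).
    print(g.value(bob, FOAF.name))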