Experimenting with the Asthma Files: Digital Ethnography, Animating Collaboration
16 pages · PDF · 1020 KB
Recommended publications
Metadata for Semantic and Social Applications
Metadata is a key aspect of our evolving infrastructure for information management, social computing, and scientific collaboration. DC-2008 will focus on metadata challenges, solutions, and innovation in initiatives and activities underlying semantic and social applications. Metadata is part of the fabric of social computing, which includes the use of wikis, blogs, and tagging for collaboration and participation. Metadata also underlies the development of semantic applications, and the Semantic Web — the representation and integration of multimedia knowledge structures on the basis of semantic models. These two trends flow together in applications such as Wikipedia, where authors collectively create structured information that can be extracted and used to enhance access to and use of information sources. Recent discussion has focused on how existing bibliographic standards can be expressed as Semantic Web vocabularies to facilitate the integration of library and cultural heritage data with other types of data. Harnessing the efforts of content providers and end-users to link, tag, edit, and describe their information in interoperable ways ("participatory metadata") is a key step towards providing knowledge environments that are scalable, self-correcting, and evolvable. DC-2008 will explore conceptual and practical issues in the development and deployment of semantic and social applications to meet the needs of specific communities of practice. Edited by Jane Greenberg and Wolfgang Klas.
Provenance and Annotations for Linked Data
Proc. Int'l Conf. on Dublin Core and Metadata Applications 2013. Kai Eckert, University of Mannheim, Germany. [email protected]
Abstract: Provenance tracking for Linked Data requires the identification of Linked Data resources. Annotating Linked Data on the level of single statements requires the identification of these statements. The concept of a Provenance Context is introduced as the basis for a consistent data model for Linked Data that incorporates current best practices and creates identity for every published Linked Dataset. A comparison of this model with the Dublin Core Abstract Model is provided to gain further understanding of how Linked Data affects the traditional view on metadata and to what extent our approach could help to mediate. Finally, a linking mechanism based on RDF reification is developed to annotate single statements within a Provenance Context.
Keywords: Provenance; Annotations; RDF; Linked Data; DCAM; DM2E
1. Introduction. This paper addresses two challenges faced by many Linked Data applications: how to provide, access, and use provenance information about the data; and how to enable data annotations, i.e., further statements about the data, subsets of the data, or even single statements. Both challenges are related, as both require the existence of identifiers for the data. We use the Linked Data infrastructure that is currently developed in the DM2E project as an example with typical use cases and resulting requirements.
1.1. Foundations. Linked Data, the publication of data on the Web, enables easy access to data and supports the reuse of data. The Hypertext Transfer Protocol (HTTP) is used to access a Uniform Resource Identifier (URI) and to retrieve data about the resource.
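The reification-based linking mechanism the abstract mentions can be illustrated with a small sketch. The following Python/rdflib snippet is illustrative only; the namespaces and the annotated statement are invented for the example, and the paper's Provenance Context model is richer than plain RDF reification:

```python
# Minimal sketch: annotating a single RDF statement via reification.
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, DCTERMS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# The base statement we want to annotate.
s, p, o = EX.book1, DCTERMS.title, Literal("Faust")
g.add((s, p, o))

# Reify the statement so it can itself be the subject of annotations.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, s))
g.add((stmt, RDF.predicate, p))
g.add((stmt, RDF.object, o))

# Attach provenance to the single statement, not to the whole graph.
g.add((stmt, DCTERMS.creator, EX.importer42))
g.add((stmt, DCTERMS.date, Literal("2013-05-01")))

print(g.serialize(format="turtle"))
```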
Semantics Developer's Guide
MarkLogic Server: Semantic Graph Developer's Guide. MarkLogic 10, May 2019. Last revised: 10.0-8, October 2021. Copyright © 2021 MarkLogic Corporation. All rights reserved.
Table of Contents (excerpt):
1.0 Introduction to Semantic Graphs in MarkLogic
  1.1 Terminology
  1.2 Linked Open Data
  1.3 RDF Implementation in MarkLogic
    1.3.1 Using RDF in MarkLogic
      1.3.1.1 Storing RDF Triples in MarkLogic
      1.3.1.2 Querying Triples
    1.3.2 RDF Data Model
    1.3.3 Blank Node Identifiers
    1.3.4 RDF Datatypes
    1.3.5 IRIs and Prefixes
      1.3.5.1 IRIs
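To make the chapter topics concrete, here is a minimal sketch of the two operations the contents list above covers: storing RDF triples and querying them with SPARQL. It uses Python with rdflib rather than MarkLogic's own APIs, so it illustrates the concepts only, not the product interface; the data is invented for the example:

```python
# Store a few triples, then query them with SPARQL.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# "Storing RDF Triples": add triples to the store.
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.name, Literal("Bob")))

# "Querying Triples": run a SPARQL query over the stored data.
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name
WHERE { ?person foaf:knows ?friend . ?friend foaf:name ?name . }
"""
for row in g.query(q):
    print(row.name)  # -> Bob
```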
Description Logics Emerge from Ivory Towers
Deborah L. McGuinness, Stanford University, Stanford, CA 94305. [email protected]
Abstract: Description logic (DL) has existed as a field for a few decades, yet only somewhat recently has it appeared to transform from an area of academic interest to an area of broad interest. This paper provides a brief historical perspective of description logic developments that have impacted their usability beyond universities and research labs, and provides one perspective on the topic.
Description logics (previously called terminological logics and KL-ONE-like systems) started with a motivation of providing a formal foundation for semantic networks. The first implemented DL system, KL-ONE, grew out of Brachman's thesis [Brachman, 1977]. This work was influenced by the work on frame systems but was focused on providing a foundation for building term meanings in a semantically meaningful and unambiguous manner. It rejected the notion of maintaining an ever-growing (seemingly ad hoc) vocabulary of link and node names seen in semantic networks and instead embraced the notion of a fixed set of domain-independent "epistemological primitives" that could be used to construct complex, structured object descriptions. It included constructs such as "defines-an-attribute-of" as a built-in construct and expected terms like "has-employee" to be higher-level terms built up from the epistemological primitives. Higher-level terms such as "has-employee" and "has-part-time-employee" could be related automatically based on term definitions instead of requiring a user to place links between them. In its original incarnation, this led to maintaining the motivation of semantic networks of providing broad expressive capabilities (since people wanted to be able to represent natural language applications) coupled with the motivation of providing a foundation of building blocks that could be used in a principled and well-defined manner.
Semantic Web @ W3C: Activities, Recommendations and State of Adoption
Slides by Ivan Herman, W3C. Athens, GA, USA, 2006-11-09. (Also available in PDF formats; the XHTML version has active links that you can follow.)
RDF(S), tools: we have had a solid specification since 2004: well-defined (formal) semantics, clear RDF/XML syntax. Lots of tools are available, listed on W3C's wiki: RDF programming environments for 14+ languages, including C, C++, Python, Java, JavaScript, Ruby, PHP, … (no Cobol or Ada yet); 13+ triple stores, i.e., database systems to store datasets; 16+ general development tools (specialized editors, application builders, …); etc.
Note the large number of large corporations among the tool developers: Adobe, IBM, Software AG, Oracle, HP, Northrop Grumman, … but the small companies and independent developers also play a major role! Some of the tools are open source, some are not; some are very mature, some are not: it is the usual picture of software tools, nothing special any more! Anybody can start developing RDF-based applications today.
There are lots of tutorials, overviews, and books around: the wiki page on books lists 20+ (English) textbooks and 19+ proceedings for 2005 & 2006 alone; again, some of them good, some of them bad, just as with any other area. Active developers' communities.
Large datasets are accumulating: IngentaConnect bibliographic metadata storage, over 200 million triples; UniProt Protein Database, 262 million triples; RDF version of Wikipedia, more than 47 million triples. RDFS/OWL representation …
RDFa in XHTML: Syntax and Processing
A collection of attributes and processing rules for extending XHTML to support RDF. W3C Recommendation, 14 October 2008.
This version: http://www.w3.org/TR/2008/REC-rdfa-syntax-20081014
Latest version: http://www.w3.org/TR/rdfa-syntax
Previous version: http://www.w3.org/TR/2008/PR-rdfa-syntax-20080904
Editors: Ben Adida (Creative Commons), Mark Birbeck (webBackplane), Shane McCarron (Applied Testing and Technology, Inc.), Steven Pemberton (CWI)
Please refer to the errata for this document, which may include some normative corrections. The English version of this specification is the only normative version. Copyright © 2007-2008 W3C® (MIT, ERCIM, Keio), All Rights Reserved.
Abstract: The current Web is primarily made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location and topic can be published as easily as the original photo itself, enabling structured search and sharing.
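The mapping the abstract describes, from attributes embedded in markup to machine-readable statements, can be sketched as follows. The XHTML fragment in the comment is hypothetical, and the Python/rdflib code simply constructs the triple that an RDFa processor would extract from it under the recommendation's processing rules:

```python
# Sketch of the RDFa idea: attributes in XHTML carry the same
# information as an explicit RDF triple.
from rdflib import Graph, Namespace, URIRef, Literal

DC = Namespace("http://purl.org/dc/elements/1.1/")

# Hypothetical XHTML+RDFa fragment:
#   <div about="http://example.org/photo1">
#     <span property="dc:creator">Mark Birbeck</span>
#   </div>

g = Graph()
g.bind("dc", DC)

# The processing rules resolve @about to the subject, @property to the
# predicate, and the element's text content to the literal object:
g.add((URIRef("http://example.org/photo1"),
       DC.creator,
       Literal("Mark Birbeck")))

print(g.serialize(format="turtle"))
```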
CODATA Workshop on Big Data Programme Book
Workshop on Big Data for International Scientific Programmes. Sponsors: International Council for Science (ICSU) and the Committee on Data for Science and Technology (CODATA).
Contents: I. Sponsoring Organizations; II. Programme; III. Remarks and Abstracts; IV. Short Biography of Speakers; V. Conference Venue Layout; VI. General Information (About Beijing; About the Workshop Venue).
I. Sponsoring Organizations
The International Council for Science (ICSU) is a non-governmental organization with a global membership of national scientific bodies (121 members, representing 141 countries) and international scientific unions (31 members). ICSU mobilizes the knowledge and resources of the international scientific community to strengthen international science for the benefit of society. ICSU: www.icsu.org
CODATA, the ICSU Committee on Data for Science and Technology, was established in 1966 to meet a need for an international coordinating body to improve the management and preservation of scientific data. CODATA has been at the forefront of data science and data policy issues since that date. CODATA supports ICSU's mission of 'strengthening international science for the benefit of society' by 'promoting improved scientific and technical data management and use'. CODATA achieves this mission through three strands of activity, among them addressing challenges and 'hot topics' at the frontiers of data science (through CODATA Task Groups and Working Groups and other initiatives), and developing data strategies for international science programmes and supporting ICSU activities such as Future Earth and Integrated Research on Disaster Risk (IRDR) to address data management needs. Through events like the Workshop on Big Data for International Scientific Programmes and SciDataCon 2014, CODATA collaborates with …
Description Logics
Franz Baader (1), Ian Horrocks (2), and Ulrike Sattler (2)
(1) Institut für Theoretische Informatik, TU Dresden, Germany. [email protected]
(2) Department of Computer Science, University of Manchester, UK. {horrocks,sattler}@cs.man.ac.uk
Summary. In this chapter, we explain what description logics are and why they make good ontology languages. In particular, we introduce the description logic SHIQ, which has formed the basis of several well-known ontology languages, including OWL. We argue that, without the last decade of basic research in description logics, this family of knowledge representation languages could not have played such an important rôle in this context. Description logic reasoning can be used both during the design phase, in order to improve the quality of ontologies, and in the deployment phase, in order to exploit the rich structure of ontologies and ontology-based information. We discuss the extensions to SHIQ that are required for languages such as OWL and, finally, we sketch how novel reasoning services can support building DL knowledge bases.
1 Introduction. The aim of this section is to give a brief introduction to description logics, and to argue why they are well-suited as ontology languages. In the remainder of the chapter we will put some flesh on this skeleton by providing more technical details with respect to the theory of description logics, and their relationship to state-of-the-art ontology languages. More detail on these and other matters related to description logics can be found in [6].
Ontologies. There have been many attempts to define what constitutes an ontology, perhaps the best known (at least amongst computer scientists) being due to Gruber: "an ontology is an explicit specification of a conceptualisation" [47], later elaborated to "a formal specification of a shared conceptualisation" [21]. In this context, a conceptualisation means an abstract model of some aspect of the world, taking the form of a definition of the properties of important …
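As a concrete taste of what the chapter introduces, here are a few axioms in standard description logic notation. These are textbook-style examples with invented concept and role names, not taken from the chapter itself; they show a defined concept, a qualified number restriction (the Q in SHIQ), and a role inclusion (the H):

```latex
% Illustrative DL axioms; Parent, BusyParent, hasChild, hasDescendant
% are invented names for the example.
\begin{align*}
\mathit{Parent}     &\equiv \mathit{Person} \sqcap \exists \mathit{hasChild}.\mathit{Person} \\
\mathit{BusyParent} &\equiv \mathit{Person} \sqcap\; \geq 3\, \mathit{hasChild}.\mathit{Person} \\
\mathit{hasChild}   &\sqsubseteq \mathit{hasDescendant}
\end{align*}
```

A DL reasoner can then infer, for instance, that every BusyParent is a Parent, without anyone asserting that subsumption explicitly; this is the kind of design-phase quality check the summary refers to.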
Enhancing JSON to RDF Data Conversion with Entity Type Recognition
Fellipe Freire, Crishane Freire and Damires Souza. Academic Unit of Informatics, Federal Institute of Education, Science and Technology of Paraiba, João Pessoa, Brazil
Keywords: RDF Data Conversion, Entity Type Recognition, Semi-Structured Data, JSON Documents, Semantics Usage.
Abstract: Nowadays, many Web data sources and APIs make their data available on the Web in semi-structured formats such as JSON. However, JSON data cannot be directly used in the Web of data, where principles such as URIs and semantically named links are essential. Thus it is necessary to convert JSON data into RDF data. To this end, we have to consider semantics in order to provide data reference according to domain vocabularies. To help matters, we present an approach which identifies JSON metadata, aligns them with domain vocabulary terms and converts data into RDF. In addition, along with the data conversion process, we provide the identification of the semantically most appropriate entity types for the JSON objects. We present the definitions underlying our approach and the results obtained with its evaluation.
1. Introduction. The Web has evolved into an interactive information network, allowing users and applications to share data on a massive scale. To help matters, the Linked Data principles define a set of practices for publishing structured data on the Web aiming to … One of the principles of this work is using semantics provided by the knowledge domain of the data to enhance their conversion to RDF data. To this end, we use recommended open vocabularies, which are composed of a set of terms, i.e., classes and properties useful to describe specific types of things (LOV Documentation, 2016).
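The conversion pipeline the abstract outlines (identify JSON metadata, align keys with vocabulary terms, emit RDF, assign an entity type) can be sketched in a few lines. This is a minimal illustration assuming FOAF as the target vocabulary; the hand-written key-to-property mapping and the asserted foaf:Person type are stand-ins for the alignment and entity type recognition steps the paper automates:

```python
# Minimal sketch of JSON-to-RDF conversion with a vocabulary alignment.
import json
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/resource/")  # hypothetical base URI

doc = json.loads(
    '{"id": "42", "name": "Ada Lovelace", "homepage": "http://example.org/ada"}'
)

# Alignment step: JSON keys matched to open-vocabulary terms (the paper
# automates this; the mapping here is hand-written for the example).
mapping = {"name": FOAF.name, "homepage": FOAF.homepage}

g = Graph()
subject = EX[doc["id"]]  # mint a URI for the JSON object

# Entity type recognition step: the approach would pick the semantically
# closest class; we simply assert foaf:Person for this example.
g.add((subject, RDF.type, FOAF.Person))

for key, prop in mapping.items():
    if key in doc:
        g.add((subject, prop, Literal(doc[key])))

print(g.serialize(format="turtle"))
```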
Linked Data Schemata: Fixing Unsound Foundations
Kevin Feeney, Gavin Mendel Gleason, Rob Brennan. Knowledge and Data Engineering Group & ADAPT Centre, School of Computer Science & Statistics, Trinity College Dublin, Ireland
Abstract. This paper describes an analysis, and the tools and methods used to produce it, of the practical and logical implications of unifying common linked data vocabularies into a single logical model. In order to support any type of reasoning or even just simple type-checking, the vocabularies that are referenced by linked data statements need to be unified into a complete model wherever they reference or reuse terms that have been defined in other linked data vocabularies. Strong interdependencies between vocabularies are common and a large number of logical and practical problems make this unification inconsistent and messy. However, the situation is far from hopeless. We identify a minimal set of necessary fixes that can be carried out to make a large number of widely-deployed vocabularies mutually compatible, and a set of wider-ranging recommendations for linked data ontology design best practice to help alleviate the problem in future. Finally we make some suggestions for improving OWL's support for distributed authoring and ontology reuse in the wild.
Keywords: Linked Data, Reasoning, Data Quality
1. Introduction. One of the central tenets of the Linked Data movement is the reuse of terms from existing well-known vocabularies [Bizer09] when developing new schemata or datasets. The semantic web infrastructure, and the RDF, RDFS and OWL languages, support this with their inherently distributed and modular nature. In practice, through vocabulary reuse, linked data schemata adopt knowledge models that are based on multiple, independently devised ontologies that often exhibit varying definitional semantics [Hogan12].
Building Linked Data for Both Humans and Machines
Wolfgang Halb (Institute of Information Systems & Information Management, Graz, Austria), Yves Raimond (Centre for Digital Music, London, UK), Michael Hausenblas (Institute of Information Systems & Information Management, Graz, Austria)
Abstract: In this paper we describe our experience with building the riese dataset, an interlinked, RDF-based version of the Eurostat data, containing statistical data about the European Union. The riese dataset (http://riese.joanneum.at) aims at serving roughly 3 billion RDF triples, along with millions of high-quality interlinks. Our contribution is twofold: firstly, we suggest using RDFa as the main deployment mechanism, hence serving both humans and machines to effectively and efficiently explore and use the dataset; secondly, we introduce a new way of enriching the dataset with high-quality links: the User Contributed Interlinking, a Wiki-style way of adding semantic links to data pages.
Existing linked datasets such as [3] are slanted towards machines as the consumer. Although there are exceptions to this machine-first approach (cf. [13]), we strongly believe that satisfying both humans and machines from a single source is a necessary path to follow. We subscribe to the view that every LOD dataset can be understood as a Semantic Web application. Every Semantic Web application in turn is a Web application in the sense that it should support a certain task for a human user. Without offering a state-of-the-art Web user interface, potential end-users are scared away. Hence a Semantic Web application needs to have a nice outfit, as well.
Linked Data Schemata: Fixing Unsound Foundations
Editor(s): Amrapali Zaveri, Stanford University, USA; Dimitris Kontokostas, Universität Leipzig, Germany; Sebastian Hellmann, Universität Leipzig, Germany; Jürgen Umbrich, Wirtschaftsuniversität Wien, Austria
Solicited review(s): Mathieu d'Aquin, The Open University, UK; Peter F. Patel-Schneider, Nuance Communications, USA; John McCrae, Insight Centre for Data Analytics, Ireland; one anonymous reviewer
Kevin Chekov Feeney, Gavin Mendel Gleason, and Rob Brennan. Knowledge and Data Engineering Group & ADAPT Centre, School of Computer Science & Statistics, Trinity College Dublin, Ireland. Corresponding author's e-mail: [email protected]
Abstract. This paper describes our tools and method for an evaluation of the practical and logical implications of combining common linked data vocabularies into a single local logical model for the purpose of reasoning or performing quality evaluations. These vocabularies need to be unified to form a combined model because they reference or reuse terms from other linked data vocabularies and thus the definitions of those terms must be imported. We found that strong interdependencies between vocabularies are common and that a significant number of logical and practical problems make this model unification inconsistent. In addition to identifying problems, this paper suggests a set of recommendations for linked data ontology design best practice. Finally we make some suggestions for improving OWL's support for distributed authoring and ontology reuse.
Keywords: Linked Data, Reasoning, Data Quality
1. Introduction. In order to validate any dataset which uses the adms:Asset term, we must combine the adms ontology and the dcat ontology in order to ensure that dcat:Dataset is a valid class.
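The unification step described in the introduction can be sketched with rdflib: each referenced vocabulary is parsed into one local graph, so that term definitions imported from other vocabularies become visible to a checker. This is a sketch only, assuming the W3C namespace documents for adms and dcat are dereferenceable (network access required); it does not reproduce the paper's tooling:

```python
# Merge the vocabularies a dataset references into one local model.
from rdflib import Graph
from rdflib.namespace import OWL, RDF

combined = Graph()

# Each parse() merges another vocabulary into the same local graph.
# adms reuses terms such as dcat:Dataset, so dcat must be imported too.
for vocab in ("http://www.w3.org/ns/adms", "http://www.w3.org/ns/dcat"):
    combined.parse(vocab)

# Only after unification can a checker see the full set of class
# definitions across both vocabularies.
for cls in combined.subjects(RDF.type, OWL.Class):
    print(cls)
```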