Representing Knowledge in the Semantic Web. Oreste Signore, W3C Office in Italy at CNR - Via G
Recommended publications
Semantic Web Core Technologies
Muhammad Alsharif, mhalsharif at gmail.com (a paper written under the guidance of Prof. Raj Jain).
Abstract: This is a survey of the semantic web's primary infrastructure. The semantic web aims to link data everywhere and make it understandable for machines as well as humans. Implementing the full vision of the semantic web still has a long way to go, but a number of its necessary building blocks are being standardized. In this paper, the three core technologies for linking web data, defining vocabularies, and querying information are introduced and discussed. Keywords: web 3.0, semantic web, semantic web stack, resource description framework, RDF schema, web ontology language, SPARQL.
Table of Contents: 1. Introduction; 2. Semantic Web Stack; 3. Linked Data (3.1. Resource Description Framework (RDF); 3.2. RDF vs XML); 4. Vocabularies (4.1. RDFS; 4.2. RDFS vs OWL); 5. Query (5.1. SPARQL; 5.2. SPARQL vs SQL); 6. Summary; 7. Acronyms; 8. References.
1. Introduction: The main goal of the semantic web is to improve upon the existing "web of documents" to be a "web of data". The first version of the web (web 1.0) revolutionized the use of the Internet in both academia and industry by introducing the idea of hyperlinking, which has interconnected billions of web pages around the globe and made documents accessible from multiple points. Web 2.0 expanded on this concept by adding user interactivity/participation, such as the dynamic creation of data on websites by content management systems (CMS), which has led to the emergence of social media such as blogs, Flickr, YouTube, Twitter, etc.
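The three core technologies the survey names (RDF for linking data, RDFS for vocabularies, SPARQL for queries) fit together in a few lines of code. Below is a minimal sketch assuming Python with the rdflib library; the ex: namespace and the Alice/Bob resources are illustrative, not taken from the paper.

```python
# A minimal sketch (assuming the rdflib library) of the three core
# technologies the survey introduces: RDF triples for linking data,
# an RDFS class for the vocabulary, and a SPARQL query for retrieval.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # illustrative namespace

g = Graph()
g.bind("ex", EX)

# Vocabulary (RDFS): declare a class.
g.add((EX.Person, RDF.type, RDFS.Class))

# Data (RDF): describe resources with subject-predicate-object triples.
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.alice, RDFS.label, Literal("Alice")))

# Query (SPARQL): find everyone Alice knows.
for row in g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?who WHERE { ex:alice ex:knows ?who }"""):
    print(row.who)  # -> http://example.org/bob
```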
Mapping Spatiotemporal Data to RDF: A SPARQL Endpoint for Brussels
International Journal of Geo-Information, Article: Mapping Spatiotemporal Data to RDF: A SPARQL Endpoint for Brussels. Alejandro Vaisman (Instituto Tecnológico de Buenos Aires, Buenos Aires 1424, Argentina) and Kevin Chentout (Sopra Banking Software, Avenue de Tevuren 226, B-1150 Brussels, Belgium). Correspondence: [email protected]; Tel.: +54-11-3457-4864. Received: 20 June 2019; Accepted: 7 August 2019; Published: 10 August 2019.
Abstract: This paper describes how a platform for publishing and querying linked open data for the Brussels Capital region in Belgium is built. Data are provided as relational tables or XML documents and are mapped into the RDF data model using R2RML, a standard language that allows defining customized mappings from relational databases to RDF datasets. In this work, data are spatiotemporal in nature; therefore, R2RML must be adapted to allow producing spatiotemporal Linked Open Data. Data generated in this way are used to populate a SPARQL endpoint, where queries are submitted and the result can be displayed on a map. This endpoint is implemented using Strabon, a spatiotemporal RDF triple store built by extending the RDF store Sesame. The first part of the paper describes how R2RML is adapted to allow producing spatial RDF data and to support XML data sources. These techniques are then used to map data about cultural events and public transport in Brussels into RDF. Spatial data are stored in the form of stRDF triples, the format required by Strabon. In addition, the endpoint is enriched with external data obtained from the Linked Open Data Cloud, from sites like DBpedia, Geonames, and LinkedGeoData, to provide context for analysis.
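The core of what an R2RML mapping does (turn each relational row into an RDF subject whose columns become triples) can be sketched directly. The following is a hedged illustration in Python with rdflib; the table, column names, and IRI template are invented for the example and are not the paper's actual Brussels mappings, and the typed WKT literal only approximates how stRDF/GeoSPARQL represent geometries.

```python
# A hedged sketch of the row-to-triples idea behind R2RML: each row of
# a relational table becomes an RDF subject, and selected columns
# become predicate-object triples. Table and IRI names are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/event/")
GEO = Namespace("http://www.opengis.net/ont/geosparql#")

rows = [  # stand-in for a relational "cultural_events" table
    {"id": 1, "name": "Jazz at Flagey", "wkt": "POINT(4.3728 50.8275)"},
]

g = Graph()
for row in rows:
    subject = EX[str(row["id"])]                  # rr:template-style IRI
    g.add((subject, RDF.type, EX.CulturalEvent))  # rr:class
    g.add((subject, EX.name, Literal(row["name"])))
    # Geometry kept as a typed WKT literal, roughly as GeoSPARQL does.
    g.add((subject, GEO.asWKT, Literal(row["wkt"], datatype=GEO.wktLiteral)))

print(g.serialize(format="turtle"))
```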
How to Keep a Knowledge Base Synchronized with Its Encyclopedia Source
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17). How to Keep a Knowledge Base Synchronized with Its Encyclopedia Source. Jiaqing Liang, Sheng Zhang, Yanghua Xiao (School of Computer Science, Shanghai Key Laboratory of Data Science, Fudan University, Shanghai, China; Shuyan Technology, Shanghai, China; Shanghai Internet Big Data Engineering Technology Research Center, China; Xiaoi Research, Shanghai, China). [email protected], {shengzhang16,[email protected].
Abstract: Knowledge bases are playing an increasingly important role in many real-world applications. However, most of these knowledge bases tend to be outdated, which limits the utility of these knowledge bases. In this paper, we investigate how to keep the freshness of the knowledge base by synchronizing it with its data source (usually encyclopedia websites). A direct solution is revisiting the whole encyclopedia periodically and rerun the entire pipeline of the construction of knowledge base like most existing methods.
1 Introduction: [...] However, most of these knowledge bases tend to be outdated, which limits their utility. For example, in many knowledge bases, Donald Trump is only a business man even after the inauguration. Obviously, it is important to let machines know that Donald Trump is the president of the United States so that they can understand that the topic of an article mentioning Donald Trump is probably related to politics. Moreover, new entities are continuously emerging and most of them are popular, such as iphone 8. However, it is hard for a knowledge base to cover these entities in time even if the encyclopedia websites have already covered them.
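The contrast the authors draw (full periodic re-crawls versus targeted synchronization) suggests a scheduling view of the problem. The sketch below, in Python, ranks pages by a simple staleness score under a fixed crawl budget; the score and record fields are illustrative stand-ins, not the predictive model the paper actually proposes.

```python
# A hedged sketch of budgeted synchronization: rather than re-crawling
# the whole encyclopedia, spend a fixed crawl budget on the pages most
# likely to be stale. The recency-based score below is illustrative,
# not the paper's learned update predictor.
import time

def stale_score(page, now):
    # Pages that change often and were crawled long ago score highest.
    return page["change_rate"] * (now - page["last_crawled"])

def pick_pages_to_recrawl(pages, budget, now=None):
    if now is None:
        now = time.time()
    ranked = sorted(pages, key=lambda p: stale_score(p, now), reverse=True)
    return ranked[:budget]

pages = [
    {"title": "Donald Trump", "change_rate": 0.9, "last_crawled": 0.0},
    {"title": "Pythagorean theorem", "change_rate": 0.01, "last_crawled": 0.0},
]
print([p["title"] for p in pick_pages_to_recrawl(pages, budget=1)])
# -> ['Donald Trump']
```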
Learning Ontologies from RDF Annotations
Learning ontologies from RDF annotations. Alexandre Delteil, Catherine Faron-Zucker, Rose Dieng. ACACIA project, INRIA, 2004, route des Lucioles, B.P. 93, 06902 Sophia Antipolis, France. {Alexandre.Delteil, Catherine.Faron, Rose.Dieng}@sophia.inria.fr.
Abstract: In this paper, we present a method for learning ontologies from RDF annotations of Web resources by systematically generating the most specific generalization of all the possible sets of resources. The preliminary step of our method consists in extracting (partial) resource descriptions from the whole RDF graph gathering all the annotations. In order to deal with algorithmic complexity, we incrementally build the ontology by gradually increasing the size of the resource descriptions we consider.
1 Introduction: The Semantic Web, expected to be the next step that will lead the Web to its full potential, will be based on semantic metadata describing all kinds of Web resources. [...] objects, as in [Mineau, 1990; Carpineto and Romano, 1993; Bournaud et al., 2000]. Since all RDF annotations are gathered inside a common RDF graph, the problem which arises is the extraction of a description for a given resource from the whole RDF graph. After a brief description of the RDF data model (Section 2) and of RDF Schema (Section 3), Section 4 presents several criteria for extracting partial resource descriptions. In order to deal with the intrinsic complexity of the building of a generalization hierarchy, we propose an incremental approach by gradually increasing the size of the descriptions we consider. The principle of the approach is explained in Section 5 and more deeply detailed in Section 6.
2 The RDF data model: RDF is the emerging Web standard for annotating resources.
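The central operation, the most specific generalization of a set of resources, can be approximated with plain set intersection over partial resource descriptions. The Python sketch below is illustrative only: it flattens descriptions to property/value pairs, whereas the paper works on RDF graphs of gradually increasing size.

```python
# A hedged sketch of the paper's core step: the most specific
# generalization shared by several resources, approximated here as the
# intersection of their (partial) descriptions. The real method
# generalizes RDF graphs; this set-based version is only illustrative.
def description(triples, resource):
    """A (very partial) resource description: outgoing arcs only."""
    return {(p, o) for (s, p, o) in triples if s == resource}

def most_specific_generalization(triples, resources):
    descs = [description(triples, r) for r in resources]
    return set.intersection(*descs) if descs else set()

triples = [
    ("doc1", "type", "Report"), ("doc1", "author", "alice"),
    ("doc2", "type", "Report"), ("doc2", "topic", "RDF"),
]
print(most_specific_generalization(triples, ["doc1", "doc2"]))
# -> {('type', 'Report')}: the shared description becomes a class candidate
```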
HTML5 for .NET Developers by Jim Jackson II and Ian Gilman
SAMPLE CHAPTER. Single page web apps, JavaScript, and semantic markup. Jim Jackson II, Ian Gilman. Foreword by Scott Hanselman. Manning. HTML5 for .NET Developers by Jim Jackson II and Ian Gilman, Chapter 1. Copyright 2013 Manning Publications.
Brief contents: 1. HTML5 and .NET; 2. A markup primer: classic HTML, semantic HTML, and CSS; 3. Audio and video controls; 4. Canvas; 5. The History API: changing the game for MVC sites; 6. Geolocation and web mapping; 7. Web workers and drag and drop; 8. Websockets; 9. Local storage and state management; 10. Offline web applications.
Chapter 1, HTML5 and .NET, covers: understanding the scope of HTML5; touring the new features in HTML5; assessing where HTML5 fits in software projects; learning what an HTML application is; getting started with HTML applications in Visual Studio. You're really going to love HTML5. It's like having a box of brand new toys in front of you when you have nothing else to do but play. Forget pushing the envelope; using HTML5 on the client and .NET on the server gives you the ability to create entirely new envelopes for executing applications inside browsers that just a few years ago would have been difficult to build even as desktop applications. The ability to use the skills you already have to build robust and fault-tolerant .NET solutions for any browser anywhere gives you an advantage in the market that we hope to prove throughout this book. For instance, with HTML5, you can tap the new Geolocation API to locate your users anywhere.
Weaving a Web of Linked Resources
Semantic Web 8 (2017) 767–772. DOI 10.3233/SW-170284. IOS Press. Editorial: Weaving a Web of linked resources. Fabien Gandon (Wimmics, Université Côte d'Azur, Inria, CNRS, I3S, Sophia Antipolis, France; e-mail: [email protected]), Marta Sabou (CDL-Flex, Vienna University of Technology, Austria; e-mail: [email protected]) and Harald Sack (FIZ Karlsruhe, Leibniz Institute for Information Infrastructure, KIT Karlsruhe, Germany; e-mail: Harald.Sack@fiz-Karlsruhe.de).
Abstract: This editorial introduces the special issue based on the best papers from ESWC 2015. And since ESWC'15 marked 15 years of Semantic Web research, we extended this editorial to a position paper that reflects the path that we, as a community, traveled so far with the goal of transforming the Web of Pages to a Web of Resources. We discuss some of the key challenges, research topics and trends addressed by the Semantic Web community in its journey. We conclude that the symbiotic relation of our community with the Web requires a truly multidisciplinary research approach to support the Web's diversity. Keywords: Semantic Web, trends, challenges.
1. From a Web of pages to a Web of resources: As we write this article Tim Berners-Lee just received the Turing Award "for inventing the World Wide Web, the first Web browser, and the fundamental [...]" [...] a first wave of deployment of the Semantic Web with the Linked Data principles and the Linked Open Data 5-Star rules [3] leading to the publication and growth of linked open datasets towards the Web of Data, as depicted in Fig.
KBART: Knowledge Bases and Related Tools
NISO-RP-9-2010. KBART: Knowledge Bases and Related Tools. A Recommended Practice of the National Information Standards Organization (NISO) and UKSG. Prepared by the NISO/UKSG KBART Working Group, January 2010.
About NISO Recommended Practices: A NISO Recommended Practice is a recommended "best practice" or "guideline" for methods, materials, or practices in order to give guidance to the user. Such documents usually represent a leading edge, exceptional model, or proven industry practice. All elements of Recommended Practices are discretionary and may be used as stated or modified by the user to meet specific needs. This recommended practice may be revised or withdrawn at any time. For current information on the status of this publication contact the NISO office or visit the NISO website (www.niso.org). Published by the National Information Standards Organization (NISO), One North Charles Street, Suite 1905, Baltimore, MD 21201, www.niso.org. Copyright © 2010 by the National Information Standards Organization and the UKSG. All rights reserved under International and Pan-American Copyright Conventions. For noncommercial purposes only, this publication may be reproduced or transmitted in any form or by any means without prior permission in writing from the publisher, provided it is reproduced accurately, the source of the material is identified, and the NISO/UKSG copyright status is acknowledged. All inquiries regarding translations into other languages or commercial reproduction or distribution should be addressed to: NISO, One North Charles Street,
Microformats: Empowering Your Markup for Web 2.0
Microformats: Empowering Your Markup for Web 2.0. John Allsopp. Copyright © 2007 by John Allsopp. All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the copyright owner and the publisher. ISBN-13 (pbk): 978-1-59059814-6. ISBN-10 (pbk): 1-59059-814-8. Printed and bound in the United States of America 9 8 7 6 5 4 3 2 1. Trademarked names may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, we use the names only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. Distributed to the book trade worldwide by Springer-Verlag New York, Inc., 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax 201-348-4505, e-mail [email protected], or visit www.springeronline.com. For information on translations, please contact Apress directly at 2560 Ninth Street, Suite 219, Berkeley, CA 94710. Phone 510-549-5930, fax 510-549-5939, e-mail [email protected], or visit www.apress.com. The information in this book is distributed on an "as is" basis, without warranty. Although every precaution has been taken in the preparation of this work, neither the author(s) nor Apress shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work.
Reconciling OWL and Non-Monotonic Rules for the Semantic Web
Reconciling OWL and Non-monotonic Rules for the Semantic Web. Matthias Knorr, Pascal Hitzler, and Frederick Maier.
Abstract: We propose a description logic extending SROIQ (the description logic underlying OWL 2 DL) and at the same time encompassing some of the most prominent monotonic and non-monotonic rule languages, in particular Datalog extended with the answer set semantics. Our proposal could be considered a substantial contribution towards fulfilling the quest for a unifying logic for the Semantic Web. As a case in point, two non-monotonic extensions of description logics considered to be of distinct expressiveness until now are covered in our proposal. In contrast to earlier such proposals, our language has the "look and feel" of a description logic and avoids hybrid or first-order syntaxes.
1 Introduction: [...] In principle, the second rift cannot be completely overcome. Nevertheless, several decidable combinations of OWL and rules have been proposed in the literature, usually resulting in a hybrid formalism mixing the syntax and sometimes even the semantics of rules and description logics (see the survey sections of [18, 17]). The most recent proposal [20] rests on the introduction of a new syntax construct to description logics, called nominal schemas, which results in a description logic which seamlessly, syntactically and semantically, incorporates binary DL-safe Datalog [24] (and as we will see in Section 3.1, even incorporates unrestricted DL-safe Datalog). While we would argue that the introduction of nominal schemas constitutes a major advance towards a reconciliation of OWL and rules, expanding OWL with nominal schemas by itself does nothing to resolve the first of the rifts mentioned above.
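As a hedged illustration of what a nominal schema buys (the example is adapted from the nominal-schemas literature, not necessarily from this paper): a Datalog rule whose variable occurs in several body atoms can be written as a single description-logic axiom once a variable nominal {z} is allowed, with z ranging over known individuals only, which is exactly the DL-safety restriction.

```latex
% Datalog rule: ChildOfMarried(x) <- hasParent(x,y), hasParent(x,z), married(y,z)
% With a nominal schema {z}, the rule becomes one description-logic axiom:
\exists \mathit{hasParent}.\{z\} \;\sqcap\;
\exists \mathit{hasParent}.\exists \mathit{married}.\{z\}
\;\sqsubseteq\; \mathit{ChildOfMarried}
```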
The Ontology Web Language (OWL) for a Multi-Agent Understanding System
The Ontology Web Language (OWL) for a Multi-Agent Understanding System. Mostafa M. Aref, Zhengbo Zhou. Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06601. email: [email protected].
Abstract: Computer understanding is a challenging problem in Artificial Intelligence. A multi-agent system has been developed to tackle this problem. Among its modules is its knowledge base (vocabulary agents). This paper discusses the use of the Ontology Web Language (OWL) to represent the knowledge base. An example of applying OWL in sentence understanding is given, followed by an evaluation of OWL.
1. Introduction: One of various definitions for Artificial Intelligence is "The study of how to make computers do things which, at the moment, people do better" [7]. From the definition of AI mentioned above, "understanding" can be looked at as [...]. Section 2 describes a multi-agent understanding system. Section 3 gives a brief description of a newly standardized technique, the Web Ontology Language (OWL). OWL is built on top of RDF, the Resource Description Framework, and is written in XML. It is part of the Semantic Web vision, and is designed to be interpreted by computers, not to be read by people. OWL became a W3C (World Wide Web Consortium) Recommendation in February 2004 [2]. OWL is a language for defining and instantiating Web ontologies. An OWL ontology may include descriptions of classes, properties, and their instances [3]. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology, but entailed by the semantics.
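The three ingredients the text lists for an OWL ontology (classes, properties, and instances) can be written down as RDF triples in the OWL vocabulary. Below is a minimal sketch assuming Python with rdflib; the ex: namespace and the sentence-understanding names are illustrative, not from the paper's actual ontology.

```python
# A minimal sketch (assuming rdflib) of the three things the text says
# an OWL ontology may contain: a class, a property, and an instance.
# All names under ex: are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/understanding#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Sentence, RDF.type, OWL.Class))             # a class
g.add((EX.hasSubject, RDF.type, OWL.ObjectProperty))  # a property
g.add((EX.hasSubject, RDFS.domain, EX.Sentence))
g.add((EX.s1, RDF.type, EX.Sentence))                 # an instance
g.add((EX.s1, RDFS.comment, Literal("The cat sat on the mat.")))

# OWL data is commonly exchanged as RDF/XML, as the text notes.
print(g.serialize(format="xml"))
```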
Enhanced Semantic Web Layered Architecture Model
NEW ASPECTS of APPLIED INFORMATICS, BIOMEDICAL ELECTRONICS & INFORMATICS and COMMUNICATIONS. Enhanced Semantic Web Layered Architecture Model. Islam H. Harb, Salah Abdel-Mageid, Hassan Farahat (Computer Systems Engineering, Al-Azhar University, Egypt) and M. S. Farag (Mathematics Department, Al-Azhar University, Egypt). [email protected], [email protected], [email protected].
Abstract: This paper introduces the traditional web and its limitations, and how these limitations can be overcome by putting the spotlight on a new, interesting approach to the web: the Semantic Web. The most common technologies that can be used to construct such a smart web are discussed briefly, and then its current layered architecture models as proposed by Tim Berners-Lee and others are evaluated to alleviate discrepancies and weak points. An enhanced architecture obeying layered architecture evaluation criteria and standard principles is proposed. This enhanced model is evaluated and contrasted against other models. Key-Words: Semantic web, ontology, XML, RDF, URI and Functional Layers.
1 Introduction: The Semantic Web may be considered as an evolution of the WWW which aims to make all the information and application data on the internet universally shared and machine processable in a very efficient way. It is an intelligent web which can understand the information semantics and services [...] the system engineering description and the layers integration. This paper introduces an Enhanced Semantic Web Layered Architecture Model. This enhanced model introduces a new architecture which follows the layered architecture as [12], [18]. This enhanced proposed architecture alleviates the previous work weak points.
Feature Engineering for Knowledge Base Construction
Feature Engineering for Knowledge Base Construction. Christopher Ré, Amir Abbas Sadeghian, Zifei Shan, Jaeho Shin, Feiran Wang, Sen Wu (Stanford University), Ce Zhang (Stanford University and University of Wisconsin-Madison). {chrismre, amirabs, zifei, jaeho.shin, feiran, senwu, [email protected].
Abstract: Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features, not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
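The "think about features, not algorithms" design goal means the user's main job is writing small extractors that emit features for candidate facts, leaving the weighting to statistical inference. The Python sketch below illustrates that division of labor; the feature names, spans, and I/O conventions are invented for the example and are not DeepDive's actual API.

```python
# A hedged sketch of KBC feature engineering: a user-written extractor
# turns a candidate relation mention into feature strings, and a
# downstream learner (not shown) weighs them. Names are illustrative,
# not DeepDive's actual interface.
def features(tokens, person_span, org_span):
    start, end = person_span[1], org_span[0]   # tokens between mentions
    between = tokens[start:end]
    feats = [
        "WORDS_BETWEEN=" + "_".join(between),
        "NUM_BETWEEN=%d" % len(between),
    ]
    if "at" in between:
        feats.append("CONTAINS_AT")            # e.g. "... worked at ..."
    return feats

tokens = "Ada Lovelace worked with Charles Babbage at Cambridge".split()
print(features(tokens, person_span=(0, 2), org_span=(7, 8)))
# -> ['WORDS_BETWEEN=worked_with_Charles_Babbage_at', 'NUM_BETWEEN=5',
#     'CONTAINS_AT']
```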