A Linked Geodata Map for Enabling Information Access
16 pages, PDF, 1020 KB
Recommended publications
- How to Keep a Knowledge Base Synchronized with Its Encyclopedia Source
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17). Jiaqing Liang, Sheng Zhang, Yanghua Xiao (School of Computer Science, Shanghai Key Laboratory of Data Science, Fudan University, Shanghai; Shuyan Technology, Shanghai; Shanghai Internet Big Data Engineering Technology Research Center; Xiaoi Research, Shanghai, China). [email protected], {shengzhang16, shawyh}@fudan.edu.cn

Abstract: Knowledge bases are playing an increasingly important role in many real-world applications. However, most of these knowledge bases tend to be outdated, which limits their utility. In this paper, we investigate how to keep a knowledge base fresh by synchronizing it with its data source (usually encyclopedia websites). A direct solution is to revisit the whole encyclopedia periodically and rerun the entire knowledge base construction pipeline, as most existing methods do.

For example, in many knowledge bases, Donald Trump is still only a businessman even after the inauguration. Obviously, it is important to let machines know that Donald Trump is the president of the United States, so that they can understand that an article mentioning him is probably related to politics. Moreover, new entities are continuously emerging, and many of them are popular, such as the iPhone 8. However, it is hard for a knowledge base to cover these entities in time, even when the encyclopedia websites already cover them.
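The synchronization problem above invites a simple illustration. The sketch below is not the paper's actual method: it re-crawls a fixed budget of entities per cycle, prioritizing those whose estimated update rate and staleness suggest the most missed changes; `fetch_page` and `extract_facts` are hypothetical stand-ins for a crawler and an extraction pipeline.

```python
import heapq
import time

def synchronize(kb, fetch_page, extract_facts, budget):
    """Refresh the `budget` entities most likely to be stale instead of
    re-running the whole pipeline over the entire encyclopedia.
    `kb` maps entity name -> {"facts", "last_visit", "update_rate"};
    `fetch_page` and `extract_facts` are hypothetical placeholders."""
    now = time.time()
    # Expected missed updates since the last visit; negated for a min-heap.
    heap = [(-entry["update_rate"] * (now - entry["last_visit"]), name)
            for name, entry in kb.items()]
    heapq.heapify(heap)
    for _ in range(min(budget, len(heap))):
        _, name = heapq.heappop(heap)
        kb[name]["facts"] = extract_facts(fetch_page(name))
        kb[name]["last_visit"] = now
```

Under a fixed crawl budget, hot entities (a sitting president, a new phone) get revisited often while stable ones are left alone, which is the intuition the paper develops rigorously.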
- Learning Ontologies from RDF Annotations
Alexandre Delteil, Catherine Faron-Zucker, Rose Dieng (ACACIA project, INRIA, 2004 route des Lucioles, B.P. 93, 06902 Sophia Antipolis, France). {Alexandre.Delteil, Catherine.Faron, Rose.Dieng}@sophia.inria.fr

Abstract: In this paper, we present a method for learning ontologies from RDF annotations of Web resources by systematically generating the most specific generalization of all the possible sets of resources. The preliminary step of our method consists in extracting (partial) resource descriptions from the whole RDF graph gathering all the annotations. In order to deal with algorithmic complexity, we incrementally build the ontology by gradually increasing the size of the resource descriptions we consider.

1. Introduction: The Semantic Web, expected to be the next step that will lead the Web to its full potential, will be based on semantic metadata describing all kinds of Web resources. [...] objects, as in [Mineau, 1990; Carpineto and Romano, 1993; Bournaud et al., 2000]. Since all RDF annotations are gathered inside a common RDF graph, the problem which arises is the extraction of a description for a given resource from the whole RDF graph. After a brief description of the RDF data model (Section 2) and of RDF Schema (Section 3), Section 4 presents several criteria for extracting partial resource descriptions. In order to deal with the intrinsic complexity of building a generalization hierarchy, we propose an incremental approach, gradually increasing the size of the descriptions we consider. The principle of the approach is explained in Section 5 and detailed more deeply in Section 6.

2. The RDF data model: RDF is the emerging Web standard for annotating resources [...]
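To make "(partial) resource descriptions" concrete, here is a minimal sketch using the rdflib Python library: it collects every triple within a given number of arcs of a resource in the common annotation graph. The distance criterion is only one simple stand-in for the several extraction criteria the paper defines.

```python
from rdflib import Graph, URIRef

def resource_description(graph, resource, depth=1):
    """Gather the triples reachable from `resource` in at most `depth`
    steps through the shared RDF annotation graph."""
    description = Graph()
    frontier = {resource}
    for _ in range(depth):
        next_frontier = set()
        for node in frontier:
            for s, p, o in graph.triples((node, None, None)):
                description.add((s, p, o))
                next_frontier.add(o)
        frontier = next_frontier
    return description
```

Increasing `depth` step by step mirrors the incremental strategy of gradually growing the descriptions that feed the generalization hierarchy.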
- RDFa in XHTML: Syntax and Processing
A collection of attributes and processing rules for extending XHTML to support RDF. W3C Recommendation 14 October 2008.
This version: http://www.w3.org/TR/2008/REC-rdfa-syntax-20081014
Latest version: http://www.w3.org/TR/rdfa-syntax
Previous version: http://www.w3.org/TR/2008/PR-rdfa-syntax-20080904
Editors: Ben Adida (Creative Commons), Mark Birbeck (webBackplane), Shane McCarron (Applied Testing and Technology, Inc.), Steven Pemberton (CWI). Copyright © 2007-2008 W3C® (MIT, ERCIM, Keio). All Rights Reserved.

Abstract: The current Web is primarily made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location and topic can be published as easily as the original photo itself, enabling structured search and sharing.
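As a small illustration of the abstract's calendar example, the sketch below uses the rdflib Python library to construct the kind of triples an RDFa processor could extract from an annotated event on a page; the page URI, event values, and choice of vocabulary are invented for the example.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Triples an RDFa processor might yield from markup such as
# <span property="cal:summary">Team meeting</span> (illustrative only).
CAL = Namespace("http://www.w3.org/2002/12/cal/ical#")
g = Graph()
event = URIRef("http://example.org/page#event1")
g.add((event, CAL.summary, Literal("Team meeting")))
g.add((event, CAL.dtstart, Literal("2008-10-14T10:00:00")))
print(g.serialize(format="turtle"))
```

Once the annotations are in graph form, importing the event into a desktop calendar is a matter of serializing them to whatever format the calendar consumes.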
- A Comparison of Two Navigational Aids for Hypertext (Mark Alan Satterfield, Iowa State University)
Retrospective Theses and Dissertations, Iowa State University Capstones, 1992. https://lib.dr.iastate.edu/rtd/14376
Recommended citation: Satterfield, Mark Alan, "A comparison of two navigational aids for hypertext" (1992). Retrospective Theses and Dissertations. 14376.

A thesis submitted to the Graduate Faculty in partial fulfillment of the requirements for the degree of Master of Arts. Department: English. Major: English (Business and Technical Communication). Iowa State University, Ames, Iowa, 1992. Copyright © Mark Alan Satterfield, 1992. All rights reserved.

Table of contents: Acknowledgements; An introduction to user disorientation and navigation in hypertext; Navigation aids (Backtrack; History; Bookmarks; Guided tours; Indexes; Browsers; Graphic browsers; Table-of-contents browsers); Theory of navigation (Schemas; Cognitive maps; Schemas and maps in text navigation; Context; Schemas, cognitive maps, and context; Metaphors for navigation); Studies of navigation effectiveness (Paper vs. [...])
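The "Backtrack" aid in the table of contents above is essentially a stack of visited nodes. A minimal sketch, assuming hypertext nodes are plain identifiers:

```python
class Backtrack:
    """Stack-based backtracking: retrace the exact path that led to
    the current node, one step at a time."""
    def __init__(self, start):
        self.trail = [start]

    def follow_link(self, node):
        self.trail.append(node)

    def back(self):
        if len(self.trail) > 1:
            self.trail.pop()
        return self.trail[-1]
```

A "History" aid differs in that it keeps the full list of visited nodes for random access rather than popping them off as the reader retreats.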
- The Web 2.0 Way of Learning with Technologies (Herwig Rollett et al.)
Int. J. Learning Technology, Vol. X, No. Y.
Herwig Rollett (Know-Center, Inffeldgasse 21a, A-8010 Graz, Austria; corresponding author; [email protected]), Mathias Lux (Department for Information Technology, University of Klagenfurt, Austria; [email protected]), Markus Strohmaier (Department of Computer Science, University of Toronto, Canada, and Know-Center, Graz; [email protected]), Gisela Dösinger (Know-Center, Graz; [email protected]), Klaus Tochtermann (Know-Center, Institute of Knowledge Management, Graz University of Technology; [email protected]). Copyright © 200x Inderscience Enterprises Ltd.

Abstract: While there is a lot of hype around various concepts associated with the term Web 2.0 in industry, little academic research has so far been conducted on the implications of this new approach for the domain of education. Much of what goes by the name of Web 2.0 can, in fact, be regarded as a new kind of learning technology, and can be utilised as such. This paper explains the background of Web 2.0, investigates the implications for knowledge transfer in general, and then discusses its particular use in eLearning contexts with the help of short scenarios. The main challenge in the future will be to maintain essential Web 2.0 attributes, such as trust, openness, voluntariness and self-organisation, when applying Web 2.0 tools in institutional contexts.
- Hypertext Newswriting Effects on Satisfaction, Comprehension and Attitudes (João Canavilhas, Universidade da Beira Interior)
Key-words: hypertext; web journalism; online journalism

Abstract: More than a decade after the beginning of online journalism, the great expectations of this field of expertise remain unfulfilled. Hypertextuality, multimediality, and interactivity remain original marks as yet unexplored by journalism made on the web (though not for the web). In the specific case of hypertextuality, the subject of this study, its scant use is linked to journalists' time constraints, the absence of an organizational model not limited to copying printed papers, and the fear that readers may reject non-linear reading (Paul, 2005). The latter reason is likely what most hinders the evolution of web journalism, despite the great potential of online news formed by networks of linked texts. To begin with, hypertext frees the reader, giving them the option to create their own path of reading. Additionally, it offers extra contextual information about the subject, allowing readers to deepen their knowledge according to their own interests. Despite these apparent advantages, such textual networks demand an additional effort from readers to interact with the theme: they must follow links and skip from text to text, which may be perceived as obstructive to their reading. The passivity imposed by the most powerful medium, television, and recipients' deeply rooted habit of linear reading raise an important question: will readers value hypertext contexts enough to make that additional effort, or are the newspapers right to fear readers' reactions? In an attempt to answer this question, an experimental study was carried out assessing the effects of hypertext on the satisfaction, perception, comprehension, and attitudes of online news readers.
- Classification on the Web: An Analysis of Dewey Linked Data
Palabra Clave (La Plata), vol. 9, no. 2, 2020, ISSN 1853-9912. Universidad Nacional de La Plata, Argentina. DOI: https://doi.org/10.24215/18539912e092
Classification on the Web: an analysis of Dewey Linked Data / La Clasificación en la web: un análisis de Dewey Linked Data
Kazumi Tomoyose (Universidade Federal de São Carlos, Brasil; [email protected]) and Ana Carolina Simionato Arakaki (Universidade Federal de São Carlos, Brasil; [email protected]). Received: 23 August 2019; accepted: 12 November 2019.

Abstract: With the availability of information on the World Wide Web, its access and retrieval by users is facilitated, and the knowledge and techniques of the Library and Information Science (LIS) field can be applied to this environment to help with the process. The present study is descriptive, qualitative, and exploratory, based on bibliographical sources, and explores how the Classification discipline interacts with Linked Data, focusing on the analysis of Dewey Linked Data.
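To show what Dewey Linked Data looks like in practice, here is a hedged sketch using the rdflib Python library: one DDC class expressed as a SKOS concept, following the URI pattern of the now-retired dewey.info service the paper analyzes, then queried with SPARQL. The labels and relations are illustrative, not authoritative DDC data.

```python
from rdflib import Graph

# A Dewey class as SKOS Linked Data (URI pattern modeled on the retired
# dewey.info service; values illustrative), queried with SPARQL.
data = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
<http://dewey.info/class/020/> skos:notation "020" ;
    skos:prefLabel "Library and information sciences"@en ;
    skos:broader <http://dewey.info/class/0/> .
"""
g = Graph()
g.parse(data=data, format="turtle")
q = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?label WHERE { ?cls skos:prefLabel ?label }
"""
for row in g.query(q):
    print(row.label)
```

Publishing a classification this way lets any Linked Data client follow `skos:broader` links up the hierarchy instead of parsing a printed schedule.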
- KBART: Knowledge Bases and Related Tools
NISO-RP-9-2010. A Recommended Practice of the National Information Standards Organization (NISO) and UKSG, prepared by the NISO/UKSG KBART Working Group, January 2010.

About NISO Recommended Practices: A NISO Recommended Practice is a recommended "best practice" or "guideline" for methods, materials, or practices in order to give guidance to the user. Such documents usually represent a leading edge, exceptional model, or proven industry practice. All elements of Recommended Practices are discretionary and may be used as stated or modified by the user to meet specific needs. This recommended practice may be revised or withdrawn at any time. For current information on the status of this publication contact the NISO office or visit the NISO website (www.niso.org).

Published by NISO, One North Charles Street, Suite 1905, Baltimore, MD 21201, www.niso.org. Copyright © 2010 by the National Information Standards Organization and the UKSG. All rights reserved under International and Pan-American Copyright Conventions. For noncommercial purposes only, this publication may be reproduced or transmitted in any form or by any means without prior permission in writing from the publisher, provided it is reproduced accurately, the source of the material is identified, and the NISO/UKSG copyright status is acknowledged. All inquiries regarding translations into other languages or commercial reproduction or distribution should be addressed to: NISO, One North Charles Street, [...]
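A KBART title list is a tab-delimited text file with prescribed column headers, so that knowledge base suppliers and link resolvers can exchange holdings without per-partner mapping. A minimal sketch in Python writing a one-row list, using a subset of the fields the recommended practice defines (the file name is arbitrary here, not KBART's naming convention, and the row values are invented):

```python
import csv

FIELDS = ["publication_title", "print_identifier", "online_identifier",
          "date_first_issue_online", "title_url", "title_id"]

with open("example_title_list.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS, delimiter="\t")
    writer.writeheader()
    writer.writerow({
        "publication_title": "Journal of Example Studies",
        "print_identifier": "1234-5678",
        "online_identifier": "2345-6789",
        "date_first_issue_online": "1997-01-01",
        "title_url": "http://www.example.com/joes",
        "title_id": "JOES",
    })
```

Because every party agrees on the same headers, a file like this can be ingested automatically, which is the point of the recommended practice.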
- The Ontology Web Language (OWL) for a Multi-Agent Understanding System
Mostafa M. Aref and Zhengbo Zhou, Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06601. Email: [email protected]

Abstract: Computer understanding is a challenging problem in Artificial Intelligence. A multi-agent system has been developed to tackle this problem. Among its modules is its knowledge base (vocabulary agents). This paper discusses the use of the Ontology Web Language (OWL) to represent the knowledge base. An example of applying OWL in sentence understanding is given, followed by an evaluation of OWL.

1. Introduction: One of various definitions of Artificial Intelligence is "the study of how to make computers do things which, at the moment, people do better" [7]. From the definition of AI mentioned above, "understanding" can be looked at as [...] OWL is a Web Ontology Language. It is built on top of RDF (the Resource Description Framework) and written in XML. It is part of the Semantic Web vision, and is designed to be interpreted by computers, not to be read by people. OWL became a W3C (World Wide Web Consortium) Recommendation in February 2004 [2]. OWL is a language for defining and instantiating Web ontologies. An OWL ontology may include descriptions of classes, properties, and their instances [3]. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e., facts not literally present in the ontology but entailed by the semantics. Section 2 describes the multi-agent understanding system. Section 3 gives a brief description of a newly standardized technique, the Web Ontology Language (OWL).
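To ground the claim that an OWL ontology "may include descriptions of classes, properties, and their instances", here is a minimal sketch with the rdflib Python library; the vocabulary and the individual are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/vocab#")
g = Graph()
g.add((EX.Animal, RDF.type, OWL.Class))        # a class
g.add((EX.Dog, RDF.type, OWL.Class))           # a subclass
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))
g.add((EX.rex, RDF.type, EX.Dog))              # an instance
g.add((EX.rex, RDFS.label, Literal("Rex")))
print(g.serialize(format="turtle"))
```

Given these axioms, an OWL reasoner can derive the entailed fact that `EX.rex` is also an `EX.Animal`, exactly the kind of logical consequence the excerpt mentions.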
- Feature Engineering for Knowledge Base Construction
Christopher Ré†, Amir Abbas Sadeghian†, Zifei Shan†, Jaeho Shin†, Feiran Wang†, Sen Wu†, Ce Zhang†‡ (†Stanford University, ‡University of Wisconsin-Madison). {chrismre, amirabs, zifei, jaeho.shin, feiran, senwu, czhang}@cs.stanford.edu

Abstract: Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems: information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more of human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features, not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
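Feature engineering in this sense means writing small functions that turn candidate mentions into signals for the statistical inference layer. A hedged sketch of one such extractor for a (person, employer) candidate pair; the feature names and cues are illustrative, not DeepDive's actual extractors:

```python
def mention_features(tokens, person_span, org_span):
    """Emit string features for a candidate relation mention.
    Spans are (start, end) token indices, end exclusive."""
    start = min(person_span[1], org_span[1])
    end = max(person_span[0], org_span[0])
    between = tokens[start:end]  # tokens separating the two mentions
    feats = ["WORDS_BETWEEN=" + "_".join(between),
             "NUM_TOKENS_BETWEEN=%d" % len(between)]
    if "works" in between or "employed" in between:
        feats.append("EMPLOYMENT_CUE")
    return feats

print(mention_features(["Ann", "works", "for", "Acme"], (0, 1), (3, 4)))
# ['WORDS_BETWEEN=works_for', 'NUM_TOKENS_BETWEEN=2', 'EMPLOYMENT_CUE']
```

The declarative framing means the user iterates on extractors like this one while the system handles probabilistic inference over the resulting factor graph.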
- Efficient Inference and Learning in a Large Knowledge Base
Machine Learning, DOI 10.1007/s10994-015-5488-x. Subtitle: Reasoning with extracted information using a locally groundable first-order probabilistic logic. William Yang Wang, Kathryn Mazaitis, Ni Lao, William W. Cohen. Received: 10 January 2014; accepted: 4 March 2015. © The Author(s) 2015.

Abstract: One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves "grounding" the query, i.e., mapping it to a propositional representation, and the size of a grounding grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate "local groundings" can be constructed in time independent of database size. Technically, ProPPR is an extension of stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme based on the PageRank-Nibble algorithm. Building on this, we develop a fast and easily parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that [...]
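The scalability idea, local grounding via approximate personalized PageRank, can be sketched with the residual-push scheme that PageRank-Nibble builds on. This is a simplified, non-lazy variant over an adjacency-list graph, not ProPPR's implementation:

```python
from collections import defaultdict

def approx_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Push residual mass locally from `seed`; only nodes near the seed
    are ever touched, so the cost is independent of total graph size.
    `graph` must map every node to a list of its neighbors."""
    p = defaultdict(float)   # settled PPR estimate
    r = defaultdict(float)   # residual mass awaiting a push
    r[seed] = 1.0
    work = [seed]
    while work:
        u = work.pop()
        deg = len(graph[u])
        if deg == 0 or r[u] < eps * deg:
            continue                         # below the push threshold
        p[u] += alpha * r[u]                 # settle an alpha fraction
        share = (1.0 - alpha) * r[u] / deg   # spread the rest to neighbors
        r[u] = 0.0
        for v in graph[u]:
            r[v] += share
            if r[v] >= eps * len(graph[v]):
                work.append(v)
    return dict(p)

print(approx_ppr({"a": ["b", "c"], "b": ["a"], "c": ["a"]}, "a"))
```

In ProPPR the analogous walk runs over the proof graph of a query, so the grounding it returns stays small no matter how large the underlying KB is.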
- XHTML RDFa Modules
Modules to support RDF annotation of elements. W3C Editor's Draft 11 August 2007.
This version: http://www.w3.org/MarkUp/2007/ED-xhtml-rdfa-20070811
Latest version: http://www.w3.org/TR/xhtml-rdfa
Previous Editor's Draft: http://www.w3.org/MarkUp/2007/ED-xhtml-rdfa-20070705
Editors: Mark Birbeck (x-port.net Ltd.), Shane McCarron (Applied Testing and Technology, Inc.). Copyright © 2007 W3C® (MIT, ERCIM, Keio). All Rights Reserved.

Abstract: The XHTML RDFa (RDF using attributes) modules define a collection of elements and attributes that enhance a document author's ability to annotate the relationships of content within and among documents. These modules can be integrated into any markup language based upon XHTML Modularization [XHTMLMOD].

Status of This Document: This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/. This document is an internal editor's draft for development purposes. However, its content is based upon mature materials from [XHTML2] and is therefore considered nearly complete.