Temporality in Semantic Web
PDF, 16 pages, 1020 KB
Recommended publications
- EMC Greenplum Data Computing Appliance: High Performance for Data Warehousing and Business Intelligence: An Architectural Overview
  An EMC Information Infrastructure Solutions white paper (October 2010). It gives readers an overall understanding of the EMC Greenplum Data Computing Appliance (DCA) architecture and its performance levels, and describes how the DCA provides a scalable end-to-end data warehouse solution packaged within a manageable, self-contained data warehouse appliance that can be easily integrated into an existing data center.
- Semantics Developer's Guide
  The MarkLogic Server Semantic Graph Developer's Guide for MarkLogic 10 (May 2019; last revised 10.0-8, October 2021). It introduces semantic graphs in MarkLogic and covers terminology, Linked Open Data, and the RDF implementation in MarkLogic, including storing and querying RDF triples, the RDF data model, blank node identifiers, RDF datatypes, and IRIs and prefixes.
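  The storing-and-querying workflow the guide covers can be illustrated outside MarkLogic as well. Below is a minimal sketch using the Python rdflib library rather than MarkLogic's own APIs; the `ex:` IRIs are invented for the example.

  ```python
  # Minimal sketch of storing and querying RDF triples with rdflib
  # (a generic illustration, not MarkLogic's API; the ex: IRIs are invented).
  from rdflib import Graph, Literal, Namespace, RDF

  EX = Namespace("http://example.org/")

  g = Graph()
  g.bind("ex", EX)

  # Store two triples: a resource's type and a property of it.
  g.add((EX.guide, RDF.type, EX.Document))
  g.add((EX.guide, EX.title, Literal("Semantic Graph Developer's Guide")))

  # Query them back with SPARQL.
  results = g.query("""
      PREFIX ex: <http://example.org/>
      SELECT ?s ?title WHERE { ?s a ex:Document ; ex:title ?title . }
  """)
  for s, title in results:
      print(s, title)
  ```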
- Binary RDF for Scalable Publishing, Exchanging and Consumption in the Web of Data
  Javier D. Fernández (University of Valladolid, Spain / University of Chile), supervised by Miguel A. Martínez Prieto and Claudio Gutierrez; WWW 2012 PhD Symposium, April 16-20, 2012, Lyon, France. Abstract: The Web of Data is increasingly producing large RDF datasets from diverse fields of knowledge, pushing the Web toward a data-to-data cloud. Traditional RDF representations, however, were inspired by a document-centric view, which results in verbose and redundant data that is costly to exchange and post-process. In the current state of the art, efficient interchange of RDF data is limited, at most, to compressing the verbose plain data with universal compression algorithms; the resulting file has no logical structure, there is no agreed way to publish such data efficiently, and the data are hardly usable at the time of consumption, since the consumer has to decompress the file and then apply an appropriate external tool. This ongoing doctoral thesis addresses efficient formats for publication, exchange and consumption of RDF on a large scale: first, a binary serialization format for RDF, called HDT, is proposed; the work then focuses on compressed rich-functional structures.
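  The verbosity problem HDT targets is easy to observe by serializing one graph in several standard formats and comparing sizes. The sketch below uses Python's rdflib; it does not produce HDT itself (that needs a dedicated library), and the sample triples are invented.

  ```python
  # Sketch: the same small graph serialized in three standard RDF formats,
  # to show how verbose plain-text representations are. This does not
  # produce HDT (that requires a dedicated library); the triples are invented.
  from rdflib import Graph, Literal, Namespace

  EX = Namespace("http://example.org/")

  g = Graph()
  g.bind("ex", EX)
  for i in range(100):
      g.add((EX[f"doc{i}"], EX.title, Literal(f"Document {i}")))

  for fmt in ("xml", "nt", "turtle"):
      data = g.serialize(format=fmt)
      print(f"{fmt:8s} {len(data.encode('utf-8')):6d} bytes")
  ```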
- Bi-directional Transformation between Normalized Systems Elements and Domain Ontologies in OWL
  Marek Suchánek and Robert Pergl (Czech Technical University in Prague), Herwig Mannaert (Normalized Systems Institute, University of Antwerp) and Peter Uhnák (NSX bvba). Keywords: ontology, Normalized Systems, transformation, model-driven development, ontology engineering, software modelling. Abstract: Knowledge representation in OWL ontologies gained a lot of popularity with the development of Big Data, Artificial Intelligence, the Semantic Web, and Linked Open Data. OWL ontologies are very versatile, there are many tools for analysis, design, documentation, and mapping, and they can capture concepts and categories together with their properties and relations. Normalized Systems (NS) provide a way of generating code from a model of so-called NS Elements, resulting in an information system with proven evolvability. The model used in NS contains domain-specific knowledge that can be represented in an OWL ontology. This work clarifies the potential advantages of having an OWL representation of the NS model, discusses the design of a bi-directional transformation between NS models and domain ontologies in OWL, and describes its implementation. It shows how the resulting ontology enables further work on the analytical level and leverages the system design. Moreover, because the NS metamodel is metacircular, the transformation can generate an ontology of the NS metamodel itself. The authors expect these results to help with the design of larger real-world applications as well as of the metamodel, and plan to extend the transformation tool with the additional features they propose.
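  As an illustration of the general idea, not of the paper's actual transformation, the following sketch derives OWL classes and datatype properties from a toy model held in a Python dict; the element and attribute names are invented.

  ```python
  # Illustrative sketch only: emitting OWL triples from a toy model held in
  # a dict. Element and attribute names are invented; the paper's actual
  # NS-to-OWL transformation is far richer than this.
  from rdflib import Graph, Namespace, RDF, RDFS
  from rdflib.namespace import OWL

  EX = Namespace("http://example.org/ns-model#")

  model = {"Invoice": ["number", "issuedOn"], "Customer": ["name"]}

  g = Graph()
  g.bind("ex", EX)
  g.bind("owl", OWL)
  for element, attributes in model.items():
      g.add((EX[element], RDF.type, OWL.Class))
      for attr in attributes:
          g.add((EX[attr], RDF.type, OWL.DatatypeProperty))
          g.add((EX[attr], RDFS.domain, EX[element]))

  print(g.serialize(format="turtle"))
  ```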
- Exercise Sheet 1
  Semantic Web course, SS 2017: RDF, RDFS and Linked Data. Solutions were due by Friday, 12.5.2017, 23:00, uploaded to ILIAS; later submissions were not considered, and every solution had to state the names, email addresses and registration numbers of its (co-)editors.
  1 Knowledge Representation (4pts). 1.1 Knowledge Representation: Humans usually express their knowledge in natural language; why aren't we using, e.g., English for knowledge representation and the Semantic Web? 1.2 Terminological vs Concrete Knowledge: What is the role of terminological knowledge (e.g. "Every company is an organization.") and what is the role of facts (e.g. "Microsoft is an organization. Microsoft is headquartered in Redmond.") in a Semantic Web system? Hint: imagine an RDF ontology that contains either only facts (describing factual knowledge: states of affairs) or only constraints (describing terminological knowledge: relations between concepts); what could a Semantic Web system do with it?
  2 RDF (10pts). RDF graphs consist of triples having a subject, a predicate and an object. Different syntactic notations can be used to serialize RDF graphs; besides the XML and Turtle syntaxes, this task uses the Notation3 (N3) format described at http://www.w3.org/2000/10/swap/Primer, starting from an N3 file that begins:

  ```
  @prefix model: <http://example.com/model1/> .
  @prefix cdk: <http://example.com/chemistrydevelopmentkit/> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  ```
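  For readers working through the task, a sketch of inspecting such a file programmatically: the Python rdflib library can parse N3 and iterate the resulting triples. Since the exercise's file is truncated above, the single data triple below is an invented sample.

  ```python
  # Sketch: parsing a small N3 snippet with rdflib and listing its
  # subject-predicate-object triples. The exercise file is truncated after
  # its @prefix lines, so the one data triple here is an invented sample.
  from rdflib import Graph

  n3_data = """
  @prefix model: <http://example.com/model1/> .
  @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

  model:methane rdfs:label "Methane" .
  """

  g = Graph()
  g.parse(data=n3_data, format="n3")

  for subject, predicate, obj in g:
      print(subject, predicate, obj)
  ```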
- Microsoft's SQL Server Parallel Data Warehouse Provides High Performance and Great Value
  Published by Value Prism Consulting, sponsored by Microsoft Corporation, March 2013. Abstract: Data warehouse appliances can be difficult to compare and contrast, given that the different preconfigured solutions come with a variety of available storage and other specifications. Value Prism Consulting, a management consulting firm, was engaged by Microsoft Corporation to review several data warehouse solutions and to compare and contrast offerings from a variety of vendors. The firm compared each appliance on publicly available price and specification data and concluded that, on a price-to-performance scale, Microsoft's SQL Server 2012 Parallel Data Warehouse is the most cost-effective appliance.
- Data Models for Home Services
  Vadym Kramar, Markku Korhonen and Yury Sergeev (Oulu University of Applied Sciences, School of Engineering, Raahe, Finland); Proceedings of the 13th Conference of the FRUCT Association. Abstract: The widespread penetration of communication technologies allowing web access has enriched the conception of smart homes with new paradigms of home services. Modern home services range far beyond notions such as home automation or use of the Internet: the services are ubiquitous, integrated into smart environments and provisioned through a variety of end-user devices. Computational intelligence requires knowledge technologies, and within this domain compliance with modern web architecture is essential; this is where Semantic Web technologies excel. The paper presents an overview of important terms, vocabularies, and data models that may be utilised in data and knowledge engineering with respect to home services. Index terms: context, data engineering, data models, knowledge engineering, Semantic Web, smart homes, ubiquitous computing. The introduction notes that traditional database design is still, and will remain, widely used in web applications, from simple web services to enterprise portals, but that, according to Forrester Research, document and knowledge bases such as NoSQL are not a hype anymore [1].
- Common Sense Reasoning with the Semantic Web
  Christopher C. Johnson and Push Singh, MIT Summer Research Program, Massachusetts Institute of Technology, Cambridge, MA 02139; http://groups.csail.mit.edu/dig/2005/08/Johnson-CommonSense.pdf. Abstract: Current HTML content on the World Wide Web has no real meaning to the computers that display it; the content is just fodder for the human eye. This is unfortunate, as Web documents in fact describe real objects and concepts and give particular relationships between them. The goal of the World Wide Web Consortium's (W3C) Semantic Web initiative is to formalize web content into Resource Description Framework (RDF) ontologies so that computers may reason and make decisions about content across the Web. W3C work has so far been concerned with creating languages in which to express formal Web ontologies and tools, but has overlooked the value and importance of implementing common sense reasoning within the Semantic Web. As web blogging and news postings become more prominent, there will be a vast source of natural language text not represented as RDF metadata, and common sense reasoning will be needed to take full advantage of it. The paper first describes converting the common sense knowledge base ConceptNet to RDF format and running N3 rules through the forward-chaining reasoner CWM to produce new concepts in ConceptNet, and then describes an example of using ConceptNet to recommend gift ideas by analyzing the contents of a weblog.
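  The paper's pipeline (ConceptNet assertions as triples, N3 rules run through CWM) can be approximated in miniature. The Python sketch below hand-codes ConceptNet-style triples and a single forward-chaining rule; the relation names follow ConceptNet conventions, but the facts and the rule are invented examples, not the paper's actual rules.

  ```python
  # Minimal sketch, assuming ConceptNet-style assertions as plain triples.
  # A toy forward-chaining rule in the spirit of N3 rules run through CWM:
  # if X IsA Y and Y CapableOf Z, then infer X CapableOf Z.
  facts = {
      ("dog", "IsA", "pet"),
      ("pet", "CapableOf", "comfort people"),
  }

  def forward_chain(facts):
      """Apply the single IsA/CapableOf rule until no new facts appear."""
      inferred = set(facts)
      changed = True
      while changed:
          changed = False
          new = {
              (x, "CapableOf", z)
              for (x, r1, y) in inferred if r1 == "IsA"
              for (y2, r2, z) in inferred if r2 == "CapableOf" and y2 == y
          }
          if not new <= inferred:
              inferred |= new
              changed = True
      return inferred

  for triple in sorted(forward_chain(facts) - facts):
      print("inferred:", triple)
  ```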
- Procurement and Spend Fact and Dimension Modeling
  A job aid explaining dimensional data modeling and the use of dimensions and facts to build analyses within the Procurement and Spend subject areas (Purchase Orders, Purchase Requisitions, and Purchase Receipts). A dimensional model comprises a fact table and many dimension tables and is used for calculating summarized data. Since Business Intelligence reports measure facts (aggregates) across various dimensions, dimensional modeling is the preferred modeling technique in a BI environment; the STARS OBI data model is based on it. The underlying database tables are separated into fact tables and dimension tables, with the dimension tables joined to the fact tables by specific keys; this data model is usually called a star schema. The star schema separates business process data into facts, which hold the measurable, quantitative data about the business, and dimensions, which hold the descriptive attributes related to those facts.
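  The fact/dimension join pattern described above can be sketched in a few lines. The following uses pandas DataFrames as stand-ins for a fact table and a supplier dimension; all table and column names are invented rather than taken from the actual subject areas.

  ```python
  # Sketch of a star-schema aggregation, using pandas stand-ins for a fact
  # table and one dimension table. Names are invented for illustration.
  import pandas as pd

  # Fact table: one row per purchase order line, with a foreign key into
  # the supplier dimension and a measurable amount.
  fact_purchases = pd.DataFrame({
      "supplier_key": [1, 1, 2, 3],
      "amount": [100.0, 250.0, 80.0, 40.0],
  })

  # Dimension table: descriptive attributes keyed by supplier_key.
  dim_supplier = pd.DataFrame({
      "supplier_key": [1, 2, 3],
      "supplier_name": ["Acme", "Globex", "Initech"],
      "region": ["EU", "US", "US"],
  })

  # Join fact to dimension on the key, then aggregate the measure by a
  # dimension attribute - the core star-schema query pattern.
  report = (
      fact_purchases.merge(dim_supplier, on="supplier_key")
      .groupby("region")["amount"]
      .sum()
  )
  print(report)
  ```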
- Integrating Fuzzy Logic in Ontologies
  Silvia Calegari and Davide Ciucci, Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca. Keywords: concept modifiers, fuzzy logics, fuzzy ontologies, membership modifiers, KAON, ontology editor. Abstract: Ontologies have proved very useful for sharing concepts across applications in an unambiguous way, yet in ontology-based applications information is often vague and imprecise. This is a well-known problem for semantics-based applications such as e-commerce, knowledge management and web portals. In computer-aided reasoning, the predominant paradigm for managing vague knowledge is fuzzy set theory; the paper presents an enrichment of classical computational ontologies with fuzzy logic to create fuzzy ontologies, a step towards facing the nuances of natural languages with ontologies. The proposal is developed in the KAON ontology editor, which allows ontology concepts to be handled in a high-level environment. The introduction recalls that an ontology is a formal conceptualization of a particular domain of interest shared among heterogeneous applications, consisting of entities, attributes, relationships and axioms that provide a common understanding of the real world (Lammari and Métais, 2004), and that fuzzy set theory (Klir and Yuan, 1995), introduced by L. A. Zadeh (Zadeh, 1965), describes vague concepts through a generalized notion of set, according to which an object may belong to a set to a certain degree, typically a real number from the interval [0,1].
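  The core fuzzy-set idea is compact enough to sketch directly. Below, a hand-written membership function assigns degrees in [0,1], and squaring a degree implements Zadeh's classic concentration modifier, commonly used for the linguistic hedge "very"; the "tall" function and its breakpoints are invented for illustration.

  ```python
  # Minimal sketch of fuzzy membership and a concept modifier. The "tall"
  # membership function and its breakpoints are invented; squaring a
  # membership degree is Zadeh's concentration modifier for "very".
  def tall(height_cm: float) -> float:
      """Membership degree of 'tall', ramping from 0 at 160 cm to 1 at 190 cm."""
      return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

  def very(degree: float) -> float:
      """Concentration modifier: 'very X' has membership X(u) squared."""
      return degree ** 2

  for h in (150, 170, 180, 195):
      print(f"{h} cm: tall={tall(h):.2f}, very tall={very(tall(h)):.2f}")
  ```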
- Master Data Management (MDM) Improves Information Quality to Deliver Value
  Joe Bugajski, senior analyst at Burton Group (previously chief data officer and chief enterprise architect at Visa, CEO of the business intelligence vendor Triada, and manager of engineering information systems at Ford Motor Company); MIT Information Quality Industry Symposium, July 15-17, 2009. Abstract: Master Data Management reasserts IT's role in responsibly managing business-critical information, master data, to reasonable quality and reliability standards. Master data includes information about products, customers, suppliers, locations, codes, and other critical business data. For most commercial businesses, governmental, and non-governmental organizations, master data is in a deplorable state, even though MDM is not new: it was, is, and will be IT's job to remediate and control. Poor-quality master data is an unfunded enterprise liability: bad data reduces the effectiveness of product promotions, causes unnecessary confusion throughout the organization, misleads corporate officers, regulators and shareholders, and dramatically increases the IT cost burden with ceaseless, fruitless spending on incomplete error remediation projects. The presentation describes, with examples, why senior IT management should launch an MDM process, what governance they should institute for master data, and how they can build cost-effective MDM solutions for the enterprises they serve.
- Resource Description Framework Technologies in Chemistry
  Egon L. Willighagen and Martin P. Brändle, editorial, Journal of Cheminformatics 2011, 3:15 (open access, http://www.jcheminf.com/content/3/1/15). The Resource Description Framework (RDF) is providing the life sciences with new standards for data and knowledge management; as Splendiani et al. recently showed, its uptake in the life sciences is significantly higher than that of the eXtensible Markup Language (XML) and even relational databases [1]. Chemistry is adopting these methods too: Murray-Rust and co-workers used RDF as early as 2004 to distribute news items with embedded chemical structures via RDF Site Summary 1.0 [2]; Frey implemented a system that would now be called an electronic lab notebook (ELN) [3]; and use of the SPARQL query language goes back to 2007, in a system for annotating crystal structures [4]. The editors thank Pfizer, Inc. for partially funding the article processing charges of the thematic series, noting that Pfizer had no input into the content and that all articles were independently prepared and subjected to the journal's standard peer review. The editorial then briefly outlines the RDF technologies and their use in chemistry so far; its first section, Concepts, begins with the core RDF specification, introduced by the World Wide Web Consortium (W3C) in 1999 [6], which defines the foundation of the RDF technologies.
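  A minimal sketch of what RDF for chemistry can look like, again using Python's rdflib: the `chem:` IRIs and property names below are invented for illustration and are not a published chemistry vocabulary.

  ```python
  # Sketch: describing a compound as RDF triples and querying with SPARQL
  # via rdflib. The chem: IRIs and property names are invented and are not
  # a published chemistry vocabulary.
  from rdflib import Graph, Literal, Namespace, RDFS

  CHEM = Namespace("http://example.org/chem/")

  g = Graph()
  g.bind("chem", CHEM)
  g.add((CHEM.methane, RDFS.label, Literal("methane")))
  g.add((CHEM.methane, CHEM.formula, Literal("CH4")))

  query = """
      PREFIX chem: <http://example.org/chem/>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?label ?formula WHERE {
          ?c rdfs:label ?label ; chem:formula ?formula .
      }
  """
  for row in g.query(query):
      print(row.label, row.formula)
  ```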