Rules and Ontologies for the Semantic Web*
16 pages · PDF · 1020 KB
Recommended publications
-
A Hierarchical SVG Image Abstraction Layer for Medical Imaging
A Hierarchical SVG Image Abstraction Layer for Medical Imaging. Edward Kim1, Xiaolei Huang1, Gang Tan1, L. Rodney Long2, Sameer Antani2. 1Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA; 2Communications Engineering Branch, National Library of Medicine, Bethesda, MD. ABSTRACT As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", the disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with through ECMAScript (the standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems.
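To make the idea concrete, here is a minimal sketch (not the authors' code) of such an abstraction layer: image-region features are stored as attributes on SVG elements and queried back with standard XML tooling. The element names, feature attributes, and values are invented for illustration.

```python
# Minimal sketch: encoding image-region features as an SVG "abstraction
# layer" and querying it with standard XML tooling. Attribute names and
# values below are illustrative assumptions, not the paper's schema.
import xml.etree.ElementTree as ET

svg = """<svg xmlns="http://www.w3.org/2000/svg" width="512" height="512">
  <g id="abstraction-layer">
    <path d="M10,10 L100,10 L100,80 Z" class="region"
          data-label="lesion" data-mean-intensity="0.73" data-parent="lung-left"/>
    <path d="M120,40 L200,40 L200,90 Z" class="region"
          data-label="nodule" data-mean-intensity="0.41" data-parent="lung-left"/>
  </g>
</svg>"""

root = ET.fromstring(svg)
ns = {"svg": "http://www.w3.org/2000/svg"}
# Retrieve all regions classified as "lesion"; the same query could be
# expressed in XQuery against an XML database, as the abstract notes.
for region in root.findall(".//svg:path[@data-label='lesion']", ns):
    print(region.get("data-parent"), region.get("data-mean-intensity"))
```

-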
Towards Ontology Based BPMN Implementation. Sophea Chhun, Néjib Moalla, Yacine Ouzrout
Towards ontology based BPMN Implementation. Sophea Chhun, Néjib Moalla, Yacine Ouzrout. To cite this version: Sophea Chhun, Néjib Moalla, Yacine Ouzrout. Towards ontology based BPMN Implementation. SKIMA, 6th Conference on Software Knowledge Information Management and Applications, Jan 2012, Chengdu, China. 8 p. hal-01551452. HAL Id: hal-01551452, https://hal.archives-ouvertes.fr/hal-01551452, submitted on 6 Nov 2018. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Towards ontology based BPMN implementation. CHHUN Sophea, MOALLA Néjib and OUZROUT Yacine, University of Lumiere Lyon2, laboratory DISP, France. Natural language is understandable by humans but not by machines, and non-technical persons can specify their business requirements only in natural language. However, current Business Process Model and Notation (BPMN) tools do not allow business analysts to implement their business processes without technical skills: a BPMN tool lets users design and implement business processes by connecting different business tasks and rules together, but it does not provide automatic implementation of business tasks from users' specifications in natural language (NL). Therefore, this research aims to propose a framework that automatically implements business processes expressed as NL requirements. -
Semantic Integration and Knowledge Discovery for Environmental Research
Journal of Database Management, 18(1), 43-67, January-March 2007. Semantic Integration and Knowledge Discovery for Environmental Research. Zhiyuan Chen, University of Maryland, Baltimore County (UMBC), USA; Aryya Gangopadhyay, University of Maryland, Baltimore County (UMBC), USA; George Karabatis, University of Maryland, Baltimore County (UMBC), USA; Michael McGuire, University of Maryland, Baltimore County (UMBC), USA; Claire Welty, University of Maryland, Baltimore County (UMBC), USA. ABSTRACT Environmental research and knowledge discovery both require extensive use of data stored in various sources and created in different ways for diverse purposes. We describe a new metadata approach to elicit semantic information from environmental data and implement semantics-based techniques to assist users in integrating, navigating, and mining multiple environmental data sources. Our system contains specifications of various environmental data sources and the relationships that are formed among them. User requests are augmented with semantically related data sources and automatically presented as a visual semantic network. In addition, we present a methodology for data navigation and pattern discovery using multi-resolution browsing and data mining. The data semantics are captured and utilized in terms of their patterns and trends at multiple levels of resolution. We present the efficacy of our methodology through experimental results. Keywords: environmental research, knowledge discovery and navigation, semantic integration, semantic networks
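The query-augmentation step lends itself to a small illustration. Below is a hedged sketch (not the authors' system): a toy relationship graph over invented environmental data sources is expanded around a user's requested source, giving the sort of neighborhood that would be rendered as the visual semantic network described above.

```python
# Hedged sketch of semantic augmentation: user requests for one data source
# are expanded with semantically related sources via a relationship graph.
# Source names and relation labels are invented examples.
import networkx as nx

semantic_net = nx.Graph()
semantic_net.add_edge("stream-gauges", "watershed-boundaries", relation="located-in")
semantic_net.add_edge("watershed-boundaries", "land-cover", relation="overlaps")
semantic_net.add_edge("stream-gauges", "precipitation", relation="correlates-with")

def augment_request(source: str, max_hops: int = 1) -> list[str]:
    """Return the data sources within max_hops of the requested one."""
    lengths = nx.single_source_shortest_path_length(semantic_net, source, cutoff=max_hops)
    return [s for s in lengths if s != source]

# One hop away from the stream-gauge data: boundaries and precipitation.
print(augment_request("stream-gauges"))
```

-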
A State-Of-The-Art Survey on Semantic Web Mining
Intelligent Information Management, 2013, 5, 10-17. http://dx.doi.org/10.4236/iim.2013.51002 Published Online January 2013 (http://www.scirp.org/journal/iim). A State-of-the-Art Survey on Semantic Web Mining. Qudamah K. Quboa, Mohamad Saraee, School of Computing, Science, and Engineering, University of Salford, Salford, UK. Email: [email protected], [email protected] Received November 9, 2012; revised December 9, 2012; accepted December 16, 2012. ABSTRACT The integration of the two fast-developing scientific research areas Semantic Web and Web Mining is known as Semantic Web Mining. The huge increase in the amount of Semantic Web data has made it a perfect target for many researchers to apply Data Mining techniques to. This paper gives a detailed state-of-the-art survey of ongoing research in this new area. It shows the positive effects of Semantic Web Mining, the obstacles faced by researchers, and proposes a number of approaches to deal with the very complex and heterogeneous information and knowledge produced by the technologies of the Semantic Web. Keywords: Web Mining; Semantic Web; Data Mining; Semantic Web Mining. 1. Introduction Semantic Web Mining is an integration of two important scientific areas: Semantic Web and Data Mining [1]. Semantic Web is used to give a meaning to data, creating complex and heterogeneous data structures, while Data Mining [...] precisely answer and satisfy the web users' requests [1,5,6]. Semantic Web is a part of the second generation web (Web 2.0), and its original idea derived from the vision of the W3C's director and WWW founder, Sir Tim Berners-Lee. -
Using Rule-Based Reasoning for RDF Validation
Using Rule-Based Reasoning for RDF Validation. Dörthe Arndt, Ben De Meester, Anastasia Dimou, Ruben Verborgh, and Erik Mannens. Ghent University - imec - IDLab, Sint-Pietersnieuwstraat 41, B-9000 Ghent, Belgium. [email protected] Abstract. The success of the Semantic Web highly depends on its ingredients. If we want to fully realize the vision of a machine-readable Web, it is crucial that Linked Data are actually useful for machines consuming them. On this background it is not surprising that (Linked) Data validation is an ongoing research topic in the community. However, most approaches so far either do not consider reasoning, and thereby miss the chance of detecting implicit constraint violations, or they base themselves on a combination of different formalisms, e.g. Description Logics combined with SPARQL. In this paper, we propose using Rule-Based Web Logics for RDF validation, focusing on the concepts needed to support the most common validation constraints, such as Scoped Negation As Failure (SNAF), and the predicates defined in the Rule Interchange Format (RIF). We prove the feasibility of the approach by providing an implementation in Notation3 Logic. As such, we show that rule logic can cover both validation and reasoning if it is expressive enough. Keywords: N3, RDF Validation, Rule-Based Reasoning. 1 Introduction The amount of publicly available Linked Open Data (LOD) sets is constantly growing; however, the diversity of the data employed in applications is mostly very limited: only a handful of RDF data is used frequently [27]. One of the reasons for this is that the datasets' quality and consistency varies significantly, ranging from expensively curated to relatively low quality data [33], and thus need to be validated carefully before use.
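To give a flavor of the approach, the hedged sketch below (not the authors' implementation) shows an N3-style rule stating that every person must have a name, using scoped negation as failure, together with an equivalent closed-world check written directly against rdflib. The vocabulary is invented, and the exact scoping builtin varies between N3 reasoners such as EYE and cwm.

```python
# Illustrative sketch only: an N3-style validation rule (shown as a string,
# not executed here) and an equivalent closed-world check in rdflib.
# The ex: vocabulary is invented for this example.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")

N3_RULE = """
@prefix ex: <http://example.org/>.
@prefix log: <http://www.w3.org/2000/10/swap/log#>.
# If the data graph does not include a name for a person (SNAF),
# derive a constraint violation. Scoping builtins vary by reasoner.
{ ?p a ex:Person. <data> log:notIncludes { ?p ex:name [] }. }
  => { ?p a ex:ConstraintViolation. }.
"""

data = Graph()
data.add((EX.alice, RDF.type, EX.Person))
data.add((EX.alice, EX.name, Literal("Alice")))
data.add((EX.bob, RDF.type, EX.Person))  # bob has no ex:name

# Closed-world check mirroring the rule above.
for person in data.subjects(RDF.type, EX.Person):
    if data.value(person, EX.name) is None:
        print(f"Constraint violation: {person} has no ex:name")
```

-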
Semantic Integration Across Heterogeneous Databases Finding Data Correspondences Using Agglomerative Hierarchical Clustering and Artificial Neural Networks
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS. STOCKHOLM, SWEDEN 2018. Semantic Integration across Heterogeneous Databases: Finding Data Correspondences using Agglomerative Hierarchical Clustering and Artificial Neural Networks. MARK HOBRO. KTH ROYAL INSTITUTE OF TECHNOLOGY, SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE. Master in Computer Science. Date: April 11, 2018. Supervisor: John Folkesson. Examiner: Hedvig Kjellström. Swedish title: Semantisk integrering mellan heterogena databaser: Hitta datakopplingar med hjälp av hierarkisk klustring och artificiella neuronnät. School of Computer Science and Communication. Abstract The process of data integration is an important part of the database field when it comes to database migrations and the merging of data. Research in the area has grown with the addition of machine learning approaches in the last 20 years. Due to the complexity of the research field, no go-to solutions have appeared. Instead, a wide variety of ways of enhancing database migrations have emerged. This thesis examines how well a learning-based solution performs for the semantic integration problem in database migrations. Two algorithms are implemented: one based on information retrieval theory, with the goal of yielding a matching result that can be used as a benchmark for measuring the performance of the machine learning algorithm, and a machine learning approach based on grouping data with agglomerative hierarchical clustering and then training a neural network to recognize patterns in the data. This allows making predictions about potential data correspondences across two databases.
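A minimal sketch of the clustering stage (invented column labels, not the thesis code): labels from two schemas are embedded as character n-gram vectors and clustered agglomeratively, so that columns from different databases landing in the same cluster become candidate correspondences. The thesis additionally trains a neural network on such groupings; that part is omitted here.

```python
# Hedged sketch: candidate schema correspondences via agglomerative
# hierarchical clustering over character-trigram features of column labels.
# Column names are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

columns_a = ["customer_name", "birth_date", "postal_code"]
columns_b = ["cust_name", "date_of_birth", "zip"]
labels = columns_a + columns_b

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vec.fit_transform(labels).toarray()

# scikit-learn >= 1.2; older versions call the `metric` parameter `affinity`.
clustering = AgglomerativeClustering(n_clusters=3, linkage="average", metric="cosine")
assignments = clustering.fit_predict(X)

# Columns from different schemas in the same cluster are match candidates.
for cluster in sorted(set(assignments)):
    group = [l for l, c in zip(labels, assignments) if c == cluster]
    print(cluster, group)
```

-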
Semantic Description of Web Services
Semantic Description of Web Services. Thabet Slimani, CS Department, Taif University, P.O.Box 888, 21974, KSA. Abstract The tasks of semantic web services (discovery, selection, composition, and execution) are supposed to enable seamless interoperation between systems, whereby human intervention is kept at a minimum. In the field of Web service description research, the exploitation of descriptions of services through semantics is a better support for the life-cycle of Web services. The large number of developed ontologies, languages of representations, and integrated frameworks supporting the discovery, composition and invocation of services is a good indicator that research in the field of Semantic Web Services (SWS) has been considerably active. We provide in this paper a detailed classification of the approaches and solutions, indicating their core characteristics and objectives, and provide indicators for the interested reader to follow up further insights and details about these solutions and related software. Keywords: SWS, SWS description, top-down approaches, bottom-up approaches, RESTful services. [...] syntaxes) and in terms of the paradigms proposed for employing these in practice. This paper is dedicated to providing an overview of these approaches, expressing their classification in terms of commonalities and differences, and it provides an understanding of the technical foundation on which they are built. These techniques are classified into a range of research areas including top-down, bottom-up, and RESTful approaches. This paper also provides some grounding that could help the reader perform a more detailed analysis of the different approaches according to the required objectives. We provide only a brief comparison between some approaches, because a detailed one would require addressing them from the perspective of specific tasks. -
Answer Set Programming: a Primer?
Answer Set Programming: A Primer. Thomas Eiter1, Giovambattista Ianni2, and Thomas Krennwallner1. 1 Institut für Informationssysteme, Technische Universität Wien, Favoritenstraße 9-11, A-1040 Vienna, Austria. {eiter,[email protected]} 2 Dipartimento di Matematica, Università della Calabria, I-87036 Rende (CS), Italy. [email protected] Abstract. Answer Set Programming (ASP) is a declarative problem solving paradigm, rooted in Logic Programming and Nonmonotonic Reasoning, which has been gaining increasing attention during the last years. This article is a gentle introduction to the subject; it starts with motivation and follows the historical development of the challenge of defining a semantics for logic programs with negation. It looks into positive programs, then stratified programs, then arbitrary programs, and proceeds to extensions with two kinds of negation (named weak and strong negation) and disjunction in rule heads. The second part then considers the ASP paradigm itself and describes the basic idea. It shows some programming techniques and briefly overviews Answer Set solvers. The third part is devoted to ASP in the context of the Semantic Web, presenting some formalisms and mentioning some applications in this area. The article concludes with issues of current and future ASP research. 1 Introduction Over the last years, Answer Set Programming (ASP) [1-5] has emerged as a declarative problem solving paradigm that has its roots in Logic Programming and Nonmonotonic Reasoning. This particular way of programming, in a language which is sometimes called AnsProlog (or simply A-Prolog) [6, 7], is well-suited for modeling and (automatically) solving problems which involve common sense reasoning: it has been fruitfully applied to a range of applications (for more details, see Section 6).
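As a taste of the paradigm, the sketch below runs a classic guess-and-check AnsProlog encoding of graph 3-coloring through the clingo Python API (assuming the clingo package is installed; the graph is a toy example). Each answer set corresponds to one valid coloring.

```python
# A small taste of ASP, assuming `pip install clingo`: graph 3-coloring in
# the guess-and-check style. Each answer set is one valid coloring.
import clingo

PROGRAM = """
node(1..3). edge(1,2). edge(2,3).
color(red; green; blue).
% Guess: assign exactly one color to each node.
1 { assign(N,C) : color(C) } 1 :- node(N).
% Check: adjacent nodes must not share a color.
:- edge(X,Y), assign(X,C), assign(Y,C).
#show assign/2.
"""

ctl = clingo.Control(["0"])          # "0" asks for all answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print(model))
```

-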
Internship Report — Mpri M2 Advances in Holistic Ontology Alignment
Internship Report | MPRI M2: Advances in Holistic Ontology Alignment. Antoine Amarilli, supervised by Pierre Senellart, Télécom ParisTech. General Context The development of the semantic Web has given birth to a large number of data sources (ontologies) with independent data and schemas. These ontologies are usually generated from existing relational databases or extracted from semi-structured data. Ontology alignment is a technique to automatically integrate them by discovering overlap in their instances and similarities in their schemas. Such integration makes it possible to access the combined information of all these data sources simultaneously rather than separately. Ontology alignment must deal with numerous challenges: the schemas are heterogeneous, the volume of data is huge, and the information can be partially incomplete or inconsistent. Problem Studied My internship focuses on the Paris (Probabilistic Alignment of Relations, Instances, and Schema) ontology alignment system [SAS11] developed by Fabian Suchanek, Pierre Senellart, and Serge Abiteboul at Inria Saclay's Webdam team. Paris is a generic system which aligns data and schema simultaneously (hence the adjective "holistic") and uses each of these alignments to cross-fertilize the other. The alignments produced are annotated with a confidence score and have a probabilistic interpretation. Paris was able to align large real-world datasets from the semantic Web. The aim of my internship is to propose improvements to Paris on some problems (both theoretical and practical) that were left open by the original system: achieving a better understanding of the behavior of Paris's equations, aligning heterogeneous schemas, being tolerant to differences in the strings of both ontologies, and improving the overall performance of the system.
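The flavor of Paris's probabilistic alignment can be sketched in a few lines. The toy code below is a deliberate simplification, not the published equations: two instances gain equality evidence when they share a fact through a relation, weighted by that relation's functionality; in the real system these scores feed back through object-valued facts and are iterated to a fixed point. All data and functionality values here are invented.

```python
# Deliberately simplified sketch of Paris-style instance alignment (toy
# data, not the full published equations). Instances sharing a fact through
# a highly functional relation accumulate equality evidence.
facts1 = {("e1", "bornIn", "Ghent"), ("e1", "name", "Ada")}
facts2 = {("f1", "bornIn", "Ghent"), ("f1", "name", "Ada"), ("f2", "name", "Bob")}
functionality = {"bornIn": 0.9, "name": 0.8}  # assumed; normally estimated from data

prob = {}  # P(x equivalent to y), accumulated lazily from shared facts
for (x, r, v) in facts1:
    for (y, r2, v2) in facts2:
        if r == r2 and v == v2:
            p = prob.get((x, y), 0.0)
            # Noisy-OR style evidence combination over shared facts.
            prob[(x, y)] = 1 - (1 - p) * (1 - functionality[r])

# ('e1', 'f1') scores high; in Paris this would be iterated, with equality
# of object-valued facts feeding back into the next round.
print(prob)
```

-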
INPUT CONTRIBUTION Group Name:* MAS WG Title:* Semantic Web Best Practices
Semantic Web best practices. INPUT CONTRIBUTION. Group Name:* MAS WG. Title:* Semantic Web best practices: Semantic Web guidelines for domain knowledge interoperability to build the Semantic Web of Things. Source:* Eurecom; Amelie Gyrard; Christian Bonnet. Contact: Amelie Gyrard, [email protected]; Christian Bonnet, [email protected]. Date:* 2014-04-07. Abstract:* This contribution proposes to describe Semantic Web best practices, Semantic Web tools, and existing domain ontologies for use cases (smart home and health). Agenda Item:* TBD. Work item(s): MAS. Document(s) Impacted:* Study of Existing Abstraction & Semantic Capability Enablement Technologies for consideration by oneM2M. Intended purpose of document:* Decision / Discussion / Information / Other <specify>. Decision requested or recommendation:* This is an informative paper proposed by the French Eurecom institute as a guideline to MAS contributors on Semantic Web best practices, as was suggested during the MAS#9.3 call. Amélie Gyrard is a new member in oneM2M (via ETSI PT1). oneM2M IPR STATEMENT: Participation in, or attendance at, any activity of oneM2M, constitutes acceptance of and agreement to be bound by all provisions of the IPR policy of the admitting Partner Type 1 and permission that all communications and statements, oral or written, or other information disclosed or presented, and any translation or derivative thereof, may without compensation, and to the extent such participant or attendee may legally and freely grant such copyright rights, be distributed, published, and posted on oneM2M's web site, in whole or in part, on a non-exclusive basis by oneM2M or oneM2M Partners Type 1 or their licensees or assignees, or as oneM2M SC directs. -
Wiktionary Matcher
Wiktionary Matcher. Jan Portisch1,2 [0000-0001-5420-0663], Michael Hladik2 [0000-0002-2204-3138], and Heiko Paulheim1 [0000-0003-4386-8195]. 1 Data and Web Science Group, University of Mannheim, Germany. {jan, [email protected]} 2 SAP SE Product Engineering Financial Services, Walldorf, Germany. {jan.portisch, [email protected]} Abstract. In this paper, we introduce Wiktionary Matcher, an ontology matching tool that exploits Wiktionary as external background knowledge source. Wiktionary is a large lexical knowledge resource that is collaboratively built online. Multiple current language versions of Wiktionary are merged and used for monolingual ontology matching by exploiting synonymy relations, and for multilingual matching by exploiting the translations given in the resource. We show that Wiktionary can be used as external background knowledge source for the task of ontology matching with reasonable matching and runtime performance. Keywords: Ontology Matching · Ontology Alignment · External Resources · Background Knowledge · Wiktionary. 1 Presentation of the System 1.1 State, Purpose, General Statement The Wiktionary Matcher is an element-level, label-based matcher which uses an online lexical resource, namely Wiktionary. The latter is "[a] collaborative project run by the Wikimedia Foundation to produce a free and complete dictionary in every language". The dictionary is organized similarly to Wikipedia: everybody can contribute to the project and the content is reviewed in a community process. Compared to WordNet [4], Wiktionary is significantly larger and also available in other languages than English. This matcher uses DBnary [15], an RDF version of Wiktionary that is publicly available. The DBnary data set makes use of an extended LEMON model [11] to describe the data.
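At its core, element-level label-based matching with background knowledge reduces to normalized label comparison plus synonym lookup. The sketch below uses an invented in-memory synonym table as a stand-in for DBnary queries; it illustrates the general technique, not the tool's code.

```python
# Hedged sketch of element-level, label-based matching with a synonym
# resource. The SYNONYMS table is a stand-in for Wiktionary/DBnary lookups;
# labels are invented examples.
def normalize(label: str) -> str:
    return label.lower().replace("_", " ").strip()

# Stand-in for querying DBnary: term -> synonym set.
SYNONYMS = {
    "car": {"automobile", "motorcar"},
    "price": {"cost"},
}

def labels_match(a: str, b: str) -> bool:
    a, b = normalize(a), normalize(b)
    if a == b:
        return True
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

for x, y in [("Car", "Automobile"), ("price", "cost"), ("car", "boat")]:
    print(x, y, labels_match(x, y))  # True, True, False
```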