Ontology Alignment in the Biomedical Domain Using Entity Definitions and Context


Lucy Lu Wang†, Chandra Bhagavatula, Mark Neumann, Kyle Lo, Chris Wilhelm, and Waleed Ammar
Allen Institute for Artificial Intelligence
†Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington, USA
[email protected]

Proceedings of the BioNLP 2018 workshop, pages 47–55, Melbourne, Australia, July 19, 2018. © 2018 Association for Computational Linguistics.

Abstract

Ontology alignment is the task of identifying semantically equivalent entities from two given ontologies. Different ontologies have different representations of the same entity, resulting in a need to de-duplicate entities when merging ontologies. We propose a method for enriching entities in an ontology with external definition and context information, and use this additional information for ontology alignment. We develop a neural architecture capable of encoding the additional information when available, and show that the addition of external data results in an F1-score of 0.69 on the Ontology Alignment Evaluation Initiative (OAEI) largebio SNOMED-NCI subtask, comparable with the entity-level matchers in a SOTA system.

1 Introduction

Ontologies are used to ground lexical items in various NLP tasks including entity linking, question answering, semantic parsing and information retrieval.[1] In biomedicine, an abundance of ontologies (e.g., MeSH, Gene Ontology) has been developed for different purposes. Each ontology describes a large number of concepts in healthcare, public health or biology, enabling the use of ontology-based NLP methods in biomedical applications. However, since these ontologies are typically curated independently by different groups, many important concepts are represented inconsistently across ontologies (e.g., "Myoclonic Epilepsies, Progressive" in MeSH is a broader concept that includes "Dentatorubral-pallidoluysian atrophy" from OMIM).

This poses a challenge for bioNLP applications where multiple ontologies are needed for grounding, but each concept must be represented by only one entity. For instance, in www.semanticscholar.org, scientific publications related to carpal tunnel syndrome are linked to one of multiple entities derived from UMLS terminologies representing the same concept,[2] making it hard to find all relevant papers on this topic. To address this challenge, we need to automatically map semantically equivalent entities from one ontology to another. This task is referred to as ontology alignment or ontology matching.

Several methods have been applied to ontology alignment, including rule-based and statistical matchers. Existing matchers rely on entity features such as names, synonyms, as well as relationships to other entities (Shvaiko and Euzenat, 2013; Otero-Cerdeira et al., 2015). However, it is unclear how to leverage the natural language text associated with entities to improve predictions. We address this limitation by incorporating two types of natural language information (definitions and textual contexts) in a supervised learning framework for ontology alignment. Since the definition and textual contexts of an entity often provide complementary information about the entity's meaning, we hypothesize that incorporating them will improve model predictions. We also discuss how to automatically derive labeled data for training the model by leveraging existing resources. In particular, we make the following contributions:

• We propose a novel neural architecture for ontology alignment and show how to effectively integrate natural language inputs such as definitions and contexts in this architecture (see §2 for details).[3]
• We use the UMLS Metathesaurus to extract large amounts of labeled data for supervised training of ontology alignment models (see §3.1). We release our data set to help future research in ontology alignment.
• We use external resources such as Wikipedia and scientific articles to find entity definitions and contexts (see §3.2 for details).

[1] Ontological resources include ontologies, knowledgebases, terminologies, and controlled vocabularies. In the rest of this paper, we refer to all of these with the term 'ontology' for consistency.
[2] See https://www.semanticscholar.org/topic/Carpal-tunnel-syndrome/248228 and https://www.semanticscholar.org/topic/Carpal-Tunnel-Syndrome/3076
[3] Implementation and data available at https://www.github.com/allenai/ontoemma/

2 OntoEmma

In this section, we describe OntoEmma, our proposed method for ontology matching, which consists of three stages: candidate selection, feature generation and prediction (see Fig. 1 for an overview).

Figure 1: OntoEmma consists of three modules: a) candidate selection (see §2.2 for details), b) feature generation (see §2.2 for details), and c) prediction (see §2.3 for details). OntoEmma accepts two ontologies (a source and a target) as inputs, and outputs a list of alignments between their entities. When using a neural network, the feature generation and prediction model are combined together in the network.

2.1 Problem definition and notation

We start by defining the ontology matching problem: given a source ontology O_s and a target ontology O_t, each consisting of a set of entities, find all semantically equivalent entity pairs, i.e., {(e_s, e_t) ∈ O_s × O_t : e_s ≡ e_t}, where ≡ indicates semantic equivalence. For consistency, we preprocess entities from different ontologies to have the same set of attributes: a canonical name (e_name), a list of aliases (e_aliases), a textual definition (e_def), and a list of usage contexts (e_contexts).[4]

[4] Some of these attributes may be missing or have low coverage. See §3.2 for coverage details.
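Concretely, each preprocessed entity can be thought of as a small record carrying these four attributes, and the matching problem as a filter over the cross product of two entity sets. The sketch below is only illustrative (the class, field, and function names are ours, not the released OntoEmma data format):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Entity:
    """A preprocessed ontology entity with the four attributes described above."""
    name: str                                          # canonical name, e_name
    aliases: List[str] = field(default_factory=list)   # e_aliases
    definition: str = ""                               # e_def (may be missing)
    contexts: List[str] = field(default_factory=list)  # e_contexts (may be missing)

def align(source: List[Entity],
          target: List[Entity],
          match: Callable[[Entity, Entity], bool]) -> List[Tuple[Entity, Entity]]:
    """Return all pairs (e_s, e_t) in O_s x O_t that `match` judges equivalent."""
    return [(e_s, e_t) for e_s in source for e_t in target if match(e_s, e_t)]
```

Enumerating every pair in O_s × O_t, as this naive `align` loop does, is exactly what the candidate selection stage described next is designed to avoid.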
2.2 Candidate selection and feature generation

Many ontologies are large, which makes it computationally expensive to consider all possible pairs of source and target entities for alignment. For example, the number of all possible entity pairs in our training ontologies is on the order of 10^11. In order to reduce the number of candidates, we use an inexpensive low-precision, high-recall candidate selection method using the inverse document frequency (idf) of word tokens appearing in entity names and definitions. For each source entity, we first retrieve all target entities that share a token with the source entity. Given the set of shared word tokens w_{s+t} between a source and target entity, we sum the idf of each token over the set, yielding idf_total = Σ_{i ∈ w_{s+t}} idf(i). Tokens with higher idf values appear less frequently overall in the ontology and presumably contribute more to the meaning of a specific entity. We compute the idf sum for each target entity and output the K = 50 target entities with the highest value for each source entity, yielding |O_s| × K candidate pairs.
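The candidate selection stage can be sketched in a few lines of Python. The version below reuses the illustrative Entity record from the earlier sketch; the whitespace tokenizer, the choice to compute idf over the target ontology, and all names are simplifying assumptions rather than the released OntoEmma implementation (K = 50 matches the value given in the text):

```python
import math
from collections import defaultdict

def tokenize(entity):
    """Lowercased word tokens from the entity name and definition (simplified)."""
    return set((entity.name + " " + entity.definition).lower().split())

def select_candidates(source_entities, target_entities, k=50):
    """For each source entity index, return the indices of the K target entities
    with the largest summed idf over shared tokens (low-precision, high-recall)."""
    target_tokens = [tokenize(t) for t in target_entities]

    # idf computed over the target ontology: rarer tokens are more informative.
    n = len(target_entities)
    df = defaultdict(int)
    for toks in target_tokens:
        for tok in toks:
            df[tok] += 1
    idf = {tok: math.log(n / d) for tok, d in df.items()}

    # Inverted index: token -> indices of target entities containing that token.
    index = defaultdict(set)
    for j, toks in enumerate(target_tokens):
        for tok in toks:
            index[tok].add(j)

    candidates = {}
    for i, s in enumerate(source_entities):
        scores = defaultdict(float)
        for tok in tokenize(s):
            for j in index.get(tok, ()):
                scores[j] += idf[tok]   # idf_total = sum of idf over shared tokens
        candidates[i] = sorted(scores, key=scores.get, reverse=True)[:k]
    return candidates
```

The inverted index keeps the cost close to the number of shared-token co-occurrences rather than |O_s| × |O_t|, which is what makes the more expensive feature generation and prediction stages tractable.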
For each candidate pair (e_s, e_t), we precompute a set of 32 features commonly used in the ontology matching literature, including the token Jaccard distance, stemmed token Jaccard distance, character n-gram Jaccard distance, root word equivalence, and other boolean and probability values over the entity name, aliases, and definition.[5]

[5] Even though neural models may obviate the need for feature engineering, feeding highly discriminative features into the neural model improves the inductive bias of the model and reduces the amount of labeled data needed for training.

2.3 Prediction

Given a candidate pair (e_s, e_t) and the precomputed features f(e_s, e_t), we train a model to predict the probability that the two entities are semantically equivalent. Figure 2 illustrates the architecture of our neural model for estimating this probability, which resembles a siamese network (Bromley et al., 1993). At a high level, we first encode each of the source and target entities, then concatenate their representations and feed it into a multi-layer perceptron ending with a sigmoid function for estimating the probability of a match. Next, we describe this architecture in more detail.

Figure 2: Siamese network architecture for computing entity embeddings for each source and target entity in a candidate entity pair.

Entity embedding. As shown in Fig. 2 (left), we encode the attributes of each entity as follows:

• A canonical name e_name is a sequence of tokens, each encoded using pretrained word2vec embeddings concatenated with a character-level convolutional neural network (CNN). The token vectors feed into a bi-directional long short-term memory network (LSTM), and the hidden layers at both ends of the bi-directional LSTM are concatenated and used as the name vector v_name.
• Each alias in e_aliases is independently embedded using the same encoder used for canonical names (with shared parameters), yielding a set of alias vectors v_{alias-i} for i = 1, ..., |e_aliases|.
• An entity definition e_def is a sequence of tokens, each encoded using pretrained embeddings then fed into a bi-directional LSTM. The definition vector v_def is the concatenation of the final hidden states in the forward and backward LSTMs.
• Each context in e_contexts is independently embedded using the same encoder used for definitions (with shared parameters), then averaged, yielding the context vector v_contexts.

The name, alias, definition, and context vectors are appended together to create the entity embedding; e.g., the source entity embedding is v^s = [v^s_name; v^s_{alias-i*}; v^s_def; v^s_contexts]. In order to find representative aliases for a given pair of entities, we pick the source and target aliases with the smallest Euclidean distance, i.e., i*, j* = arg min_{i,j} ||v^s_{alias-i} − v^t_{alias-j}||_2.

Siamese network. After the source and target entity embeddings are computed, they are fed into two subnetworks with shared parameters, followed by a parameterized function for estimating similarity. Each subnetwork is a two-layer feedforward network with ReLU non-linearities and dropout (Srivastava et al., 2014).
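As a rough illustration of the prediction model, here is a condensed PyTorch sketch. It keeps the overall shape described above (per-attribute bi-LSTM encoders, concatenated entity embeddings, shared two-layer ReLU subnetworks with dropout, and a sigmoid output) but, for brevity, omits the character-level CNN, the alias encoder and alias-selection step, and the precomputed features; all layer sizes and names are illustrative assumptions, not the authors' released configuration:

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Embed a token-id sequence and encode it with a bi-directional LSTM,
    returning the concatenated final forward/backward hidden states."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(tokens))  # h_n: (2, batch, hidden_dim)
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # (batch, 2 * hidden_dim)

class SiameseMatcher(nn.Module):
    """Score a candidate entity pair: encode name/definition/context for each
    entity, pass each entity embedding through a shared two-layer ReLU network,
    then estimate the match probability with a sigmoid."""
    def __init__(self, vocab_size, hidden_dim=100):
        super().__init__()
        self.name_enc = TextEncoder(vocab_size, hidden_dim=hidden_dim)
        self.def_enc = TextEncoder(vocab_size, hidden_dim=hidden_dim)
        ent_dim = 3 * 2 * hidden_dim                 # name + definition + context vectors
        self.subnetwork = nn.Sequential(             # shared between source and target
            nn.Linear(ent_dim, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.3),
        )
        self.score = nn.Sequential(nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 1))

    def embed_entity(self, name, definition, context):
        v_name = self.name_enc(name)
        v_def = self.def_enc(definition)
        v_ctx = self.def_enc(context)   # single context here; the paper averages several
        return torch.cat([v_name, v_def, v_ctx], dim=-1)

    def forward(self, src, tgt):
        """src and tgt are (name, definition, context) tuples of token-id tensors."""
        h_s = self.subnetwork(self.embed_entity(*src))
        h_t = self.subnetwork(self.embed_entity(*tgt))
        logit = self.score(torch.cat([h_s, h_t], dim=-1))
        return torch.sigmoid(logit).squeeze(-1)      # probability the pair matches
```

In the full model, the precomputed features f(e_s, e_t) from §2.2 are also fed into the network alongside the learned entity representations (cf. footnote 5); they are left out of this sketch for brevity.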