Modeling the NLP Research Domain Using Ontologies: An Ontology Representation of NLP Concepts from a Research Perspective
Pages: 16
File type: PDF, size: 1020 KB
Recommended publications
A Data-Driven Framework for Assisting Geo-Ontology Engineering Using a Discrepancy Index
University of California Santa Barbara

A Thesis submitted in partial satisfaction of the requirements for the degree Master of Arts in Geography, by Bo Yan. Committee in charge: Professor Krzysztof Janowicz (Chair), Professor Werner Kuhn, Professor Emerita Helen Couclelis. June 2016. The Thesis of Bo Yan is approved: Professor Werner Kuhn, Professor Emerita Helen Couclelis, Professor Krzysztof Janowicz (Committee Chair). May 2016.

Copyright © 2016 by Bo Yan

Acknowledgements
I would like to thank the members of my committee for their guidance and patience in the face of obstacles over the course of my research. I would like to thank my advisor, Krzysztof Janowicz, for his invaluable input on my work. Without his help and encouragement, I would not have been able to find the light at the end of the tunnel during the last stage of the work, because he provided insight that helped me think outside the box. There is no better advisor. I would like to thank Yingjie Hu, who has offered me numerous pieces of feedback, suggestions, and inspiration on my thesis topic. I would like to thank all my other intelligent colleagues in the STKO lab and the Geography Department (those who have moved on and started anew, those who are still in the quagmire, and those who have just begun) for their support and friendship. Last, but most importantly, I would like to thank my parents for their unconditional love.
Data Warehouse: An Integrated Decision Support Database Whose Content Is Derived from the Various Operational Databases
DATABASE MANAGEMENT

Topic Objective: At the end of this topic the student will be able to:
Understand the contrasting basic concepts
Understand the database server and database specified
Understand the USER clause

Definition/Overview:
Data: Stored representations of objects and events that have meaning and importance in the user's environment.
Information: Data that have been processed in such a way that they can increase the knowledge of the person who uses them.
Metadata: Data that describe the properties or characteristics of end-user data and the context of those data.
Database application: An application program (or set of related programs) that is used to perform a series of database activities (create, read, update, and delete) on behalf of database users.
Data warehouse: An integrated decision support database whose content is derived from the various operational databases.
Constraint: A rule that cannot be violated by database users.
Database: An organized collection of logically related data.
Entity: A person, place, object, event, or concept in the user environment about which the organization wishes to maintain data.
Database management system: A software system that is used to create, maintain, and provide controlled access to user databases.
Data dependence vs. data independence: With data dependence, data descriptions are included with the application programs that use the data, while with data independence the data descriptions are separated from the application programs.
Data warehouse vs. data mining: A data warehouse is an integrated decision support database, while data mining (described in the topic introduction) is the process of extracting useful information from databases.
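As a quick, self-contained illustration of the constraint concept defined above (a rule the DBMS will not let users violate), here is a minimal Python sketch using the standard sqlite3 module; the table, columns, and values are hypothetical and not taken from the excerpt.

```python
import sqlite3

# In-memory database; the schema below is a hypothetical illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,               -- constraint: a name must be present
        salary  REAL CHECK (salary >= 0)     -- constraint: salary cannot be negative
    )
""")

conn.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Alice", 50000.0))

try:
    # Violates the CHECK constraint, so the DBMS rejects the row.
    conn.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Bob", -10.0))
except sqlite3.IntegrityError as err:
    print("Rejected by the DBMS:", err)
```

The point of the sketch is that the rule is enforced by the database itself for every application that touches the data, which is what distinguishes a constraint from a mere convention.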
A Survey of Top-Level Ontologies to Inform the Ontological Choices for a Foundation Data Model
Version 1

Contents
1 Introduction and Purpose 3
2 Approach and contents 4
2.1 Collect candidate top-level ontologies 4
2.2 Develop assessment framework 4
2.3 Assessment of candidate top-level ontologies against the framework 5
2.4 Terminological note 5
3 Assessment framework – development basis 6
3.1 General ontological requirements 6
3.2 Overarching ontological architecture framework 8
4 Ontological commitment overview 11
4.1 General choices 11
4.2 Formal structure – horizontal and vertical 14
4.3 Universal commitments 33
5 Assessment Framework Results 37
5.1 General choices 37
5.2 Formal structure: vertical aspects 38
5.3 Formal structure: horizontal aspects 42
5.4 Universal commitments 44
6 Summary 46
Appendix A: Pathway requirements for a Foundation Data Model 48
Appendix B: ISO IEC 21838-1:2019

F.13 FrameNet 92
F.14 GFO – General Formal Ontology 94
F.15 gist 95
F.16 HQDM – High Quality Data Models 97
F.17 IDEAS – International Defence Enterprise Architecture Specification 99
F.18 IEC 62541 100
F.19 IEC 63088 100
F.20 ISO 12006-3 101
F.21 ISO 15926-2 102
F.22 KKO: KBpedia Knowledge Ontology 103
F.23 KR Ontology – Knowledge Representation Ontology 105
F.24 MarineTLO: A Top-Level Ontology for the Marine Domain 106
F.25 MIMOSA CCOM (Common Conceptual Object Model) 108
F.26 OWL – Web Ontology Language 110
F.27 ProtOn – PROTo ONtology 111
F.28 Schema.org 112
F.29 SENSUS 113
F.30 SKOS 113
F.31 SUMO 115
F.32 TMRM/TMDM – Topic Map Reference/Data Models 116
Semi-Automated Ontology-Based Question Answering System
Journal of Advanced Research in Computing and Applications 17, Issue 1 (2019) 1-5
Journal homepage: www.akademiabaru.com/arca.html
ISSN: 2462-1927 (open access)
Khairul Nurmazianna Ismail, Faculty of Computer Science, Universiti Teknologi MARA Melaka, Malaysia

ABSTRACT
Question answering systems enable users to retrieve exact answers for questions submitted in natural language. Demand for such systems has increased because they deliver a precise answer instead of a list of links. This study proposes an ontology-based question answering system. The research includes an explanation of the question answering system architecture. The system uses a semi-automatic ontology development (ontology learning) approach to develop its ontology.

Keywords: Question answering system; knowledge base; ontology; online learning
Copyright © 2019 PENERBIT AKADEMIA BARU - All rights reserved

1. Introduction
In the era of the WWW, people need to search for and obtain information quickly, online or offline, which makes them rely on question answering systems (QAS). One common and traditional form of QAS is the Frequently Asked Questions site, which contains common and straightforward answers. Currently, QAS have emerged as powerful platforms that use various techniques such as information retrieval, knowledge bases, natural language processing, and hybrid approaches, enabling users to retrieve exact answers for questions posed in natural language using either a pre-structured database or a collection of natural language documents [1,4]. The demand for QAS increases day by day since they deliver short, precise, and question-specific answers [10]. QAS built on the knowledge base paradigm perform better as restricted-domain QA systems owing to their ability to focus [12].
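To make the ontology-based QA idea concrete, the following minimal sketch (not from the cited paper) hard-wires one question pattern onto a SPARQL query over a tiny RDF graph using rdflib; the namespace, individuals, and question template are all assumptions.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/university#")  # hypothetical namespace

# A tiny RDF knowledge base standing in for the system's ontology.
g = Graph()
g.add((EX.DrIsmail, RDF.type, EX.Lecturer))
g.add((EX.DrIsmail, RDFS.label, Literal("Khairul Ismail")))
g.add((EX.DrIsmail, EX.teaches, EX.Databases))
g.add((EX.Databases, RDFS.label, Literal("Databases")))

# One SPARQL query covering the fixed question pattern 'Who teaches <course>?'.
QUERY = """
SELECT ?courseLabel ?teacherName WHERE {
    ?teacher ex:teaches ?course ;
             rdfs:label ?teacherName .
    ?course  rdfs:label ?courseLabel .
}
"""

def answer_who_teaches(course: str) -> list[str]:
    """Return the names of teachers of the named course."""
    rows = g.query(QUERY, initNs={"ex": EX, "rdfs": RDFS})
    return [str(r.teacherName) for r in rows
            if str(r.courseLabel).lower() == course.lower()]

print(answer_who_teaches("Databases"))  # ['Khairul Ismail']
```

A full system would map many question patterns onto queries and build the graph semi-automatically, which is where the ontology-learning step described in the abstract comes in.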
Bi-Directional Transformation Between Normalized Systems Elements and Domain Ontologies in OWL
Marek Suchánek (1), Herwig Mannaert (2), Peter Uhnák (3) and Robert Pergl (1)
(1) Faculty of Information Technology, Czech Technical University in Prague, Thákurova 9, Prague, Czech Republic
(2) Normalized Systems Institute, University of Antwerp, Prinsstraat 13, Antwerp, Belgium
(3) NSX bvba, Wetenschapspark Universiteit Antwerpen, Galileilaan 15, 2845 Niel, Belgium

Keywords: Ontology, Normalized Systems, Transformation, Model-driven Development, Ontology Engineering, Software Modelling.

Abstract: Knowledge representation in OWL ontologies gained a lot of popularity with the development of Big Data, Artificial Intelligence, the Semantic Web, and Linked Open Data. OWL ontologies are very versatile, and there are many tools for analysis, design, documentation, and mapping. They can capture concepts and categories, their properties and relations. Normalized Systems (NS) provide a way of generating code from a model of so-called NS Elements, resulting in an information system with proven evolvability. The model used in NS contains domain-specific knowledge that can be represented in an OWL ontology. This work clarifies the potential advantages of having an OWL representation of the NS model, discusses the design of a bi-directional transformation between NS models and domain ontologies in OWL, and describes its implementation. It shows how the resulting ontology enables further work at the analytical level and leverages the system design. Moreover, because the NS metamodel is metacircular, the transformation can generate an ontology of the NS metamodel itself. It is expected that the results of this work will help with the design of larger real-world applications as well as the metamodel, and that the transformation tool will be further extended with the additional features we have proposed.
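As an illustration of the general direction rather than the authors' actual tool, the sketch below shows how a single, hypothetical NS-style element description could be emitted as an OWL class with datatype properties using rdflib; the namespace and the element structure are assumptions.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL, XSD

BASE = Namespace("http://example.org/ns-model#")  # hypothetical namespace

# A highly simplified, hypothetical stand-in for one NS data element.
element = {"name": "Invoice",
           "fields": [("amount", XSD.decimal), ("issuedOn", XSD.date)]}

g = Graph()
g.bind("owl", OWL)

# Emit the element as an OWL class.
cls = BASE[element["name"]]
g.add((cls, RDF.type, OWL.Class))
g.add((cls, RDFS.label, Literal(element["name"])))

# Emit each field as a datatype property scoped to that class.
for field_name, datatype in element["fields"]:
    prop = BASE[field_name]
    g.add((prop, RDF.type, OWL.DatatypeProperty))
    g.add((prop, RDFS.domain, cls))
    g.add((prop, RDFS.range, datatype))

print(g.serialize(format="turtle"))
```

The reverse direction would walk OWL classes and properties and rebuild element descriptions, which is what makes the transformation bi-directional in spirit.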
A Survey and Classification of Controlled Natural Languages
Tobias Kuhn, ETH Zurich and University of Zurich

What is here called controlled natural language (CNL) has traditionally been given many different names. Especially during the last four decades, a wide variety of such languages have been designed. They are applied to improve communication among humans, to improve translation, or to provide natural and intuitive representations for formal notations. Despite the apparent differences, it seems sensible to put all these languages under the same umbrella. To bring order to the variety of languages, a general classification scheme is presented here. A comprehensive survey of existing English-based CNLs is given, listing and describing 100 languages from 1930 until today. Classification of these languages reveals that they form a single scattered cloud filling the conceptual space between natural languages such as English on the one end and formal languages such as propositional logic on the other. The goal of this article is to provide a common terminology and a common model for CNL, to contribute to the understanding of their general nature, to provide a starting point for researchers interested in the area, and to help developers to make design decisions.

1. Introduction
Controlled, processable, simplified, technical, structured, and basic are just a few examples of attributes given to constructed languages of the type to be discussed here. We will call them controlled natural languages (CNL) or simply controlled languages. Basic English, Caterpillar Fundamental English, SBVR Structured English, and Attempto Controlled English are some examples; many more will be presented herein. This article investigates the nature of such languages, provides a general classification scheme, and explores existing approaches.
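To make the idea of CNLs bridging natural and formal languages concrete, here is a toy sketch, purely illustrative and far simpler than any of the surveyed languages such as Attempto Controlled English, that maps one controlled English sentence pattern onto a first-order formula.

```python
import re

# Toy grammar: a single controlled-English pattern; real CNLs cover far more.
PATTERN = re.compile(r"^Every ([a-z]+) is an? ([a-z]+)\.$")

def to_logic(sentence: str) -> str:
    """Map 'Every dog is an animal.' to a first-order logic rendering."""
    match = PATTERN.match(sentence)
    if not match:
        raise ValueError("sentence is outside the controlled fragment")
    subject, predicate = match.groups()
    return f"forall x. {subject.capitalize()}(x) -> {predicate.capitalize()}(x)"

print(to_logic("Every dog is an animal."))     # forall x. Dog(x) -> Animal(x)
print(to_logic("Every survey is a document.")) # forall x. Survey(x) -> Document(x)
```

Because the input is restricted to a fixed pattern, the mapping to the formal notation is unambiguous, which is precisely the trade-off a CNL makes.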
A Conceptual Framework for Constructing Distributed Object Libraries Using Gellish
Master's Thesis in Computer Science
Michael Rudi Henrichs, [email protected]
Parallel and Distributed Systems group, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology
June 1, 2009

Student: Michael Rudi Henrichs, student number 9327103, Oranjelaan 8, 2264 CW Leidschendam, [email protected]
MSc presentation: June 2, 2009 at 14:00, Lipkenszaal (LB 01.150), Faculty EWI, Mekelweg 4, Delft
Committee chair: Prof. Dr. Ir. H.J. Sips, [email protected]
Member: Dr. Ir. D.H.J. Epema, [email protected]
Member: Ir. N.W. Roest, [email protected]
Supervisor: Dr. K. van der Meer, [email protected]
Idoro B.V., Zonnebloem 52, 2317 LM Leiden, The Netherlands

Parallel and Distributed Systems group, Department of Software Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Mekelweg 4, 2826 CD Delft, The Netherlands, www.ewi.tudelft.nl

Sponsors:

This master's thesis was typeset with MiKTeX 2.7, edited on TeXnicCenter 1 beta 7.50. Illustrations and diagrams were created using Microsoft Visio 2003 and Corel Paint Shop Pro 12.0, all running on an Acer Aspire 6930.

Copyright © 2009 by Michael Henrichs, Idoro B.V.
Cover photo and design by Michael Henrichs © 2009, http://photo.lemantle.com
All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without the prior permission of the author.
Chaudron: Extending DBpedia with Measurement
Julien Subercaze, Univ Lyon, UJM-Saint-Etienne, CNRS, Laboratoire Hubert Curien UMR 5516, F-42023, Saint-Etienne, France. [email protected]

To cite this version: Julien Subercaze. Chaudron: Extending DBpedia with measurement. 14th European Semantic Web Conference, Eva Blomqvist, Diana Maynard, Aldo Gangemi, May 2017, Portoroz, Slovenia. hal-01477214.
HAL Id: hal-01477214, https://hal.archives-ouvertes.fr/hal-01477214. Submitted on 27 Feb 2017. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Abstract. Wikipedia is the largest collaborative encyclopedia and is used as the source for DBpedia, a central dataset of the LOD cloud. Wikipedia contains numerous numerical measures on the entities it describes, as per the general character of the data it encompasses. The DBpedia Information Extraction Framework transforms semi-structured data from Wikipedia into structured RDF. However, this extraction framework offers only limited support for handling measurement in Wikipedia. In this paper, we describe the automated process that enables the creation of the Chaudron dataset. We propose an alternative to the traditional extraction based on mapping creation from the Wikipedia dump, by also using the rendered HTML to avoid the template transclusion issue.
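The paper's extraction pipeline is not reproduced here; the following toy sketch only shows the kind of value-and-unit parsing involved in turning rendered infobox-style text into structured measurements, using a hypothetical, deliberately tiny unit table.

```python
import re
from dataclasses import dataclass

# Hypothetical, tiny unit table; a real system needs a full unit ontology.
UNIT_FACTORS = {"km": 1000.0, "m": 1.0, "cm": 0.01, "kg": 1.0, "g": 0.001}

# Matches a number followed by one of the known unit symbols.
MEASUREMENT = re.compile(r"(\d+(?:\.\d+)?)\s*(km|cm|kg|m|g)\b")

@dataclass
class Measurement:
    value: float   # value normalized to the base unit (metres or kilograms)
    unit: str      # unit symbol as written in the source text

def extract_measurements(text: str) -> list[Measurement]:
    """Pull numeric measurements out of rendered infobox-style text."""
    out = []
    for raw_value, unit in MEASUREMENT.findall(text):
        out.append(Measurement(value=float(raw_value) * UNIT_FACTORS[unit], unit=unit))
    return out

print(extract_measurements("Length 3.5 km, mass 920 kg"))
# [Measurement(value=3500.0, unit='km'), Measurement(value=920.0, unit='kg')]
```

Working from the rendered text, as the abstract suggests, sidesteps the template transclusion problem of parsing wiki markup directly, at the cost of needing robust patterns like the one above.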
Semantically Enhanced and Minimally Supervised Models for Ontology Construction, Text Classification, and Document Recommendation Alkhatib, Wael (2020)
DOI (TUprints): https://doi.org/10.25534/tuprints-00011890
License: CC BY-ND 4.0 International (Creative Commons, Attribution, No Derivatives)
Publication type: Dissertation
Department: 18 - Electrical Engineering and Information Technology (Fachbereich Elektrotechnik und Informationstechnik)
Source of the original: https://tuprints.ulb.tu-darmstadt.de/11890

Dissertation approved by the Department of Electrical Engineering and Information Technology of Technische Universität Darmstadt in fulfilment of the requirements for the academic degree of Doktor-Ingenieur (Dr.-Ing.), submitted by Wael Alkhatib, M.Sc., born on 15 October 1988 in Hama, Syria. Chair: Prof. Dr.-Ing. Jutta Hanson. Referee: Prof. Dr.-Ing. Ralf Steinmetz. Co-referee: Prof. Dr.-Ing. Steffen Staab. Date of submission: 28 January 2020. Date of defence: 10 June 2020. Hochschulkennziffer D17. Darmstadt, 2020.

This document is provided by tuprints, the e-publishing service of the Technical University Darmstadt (http://tuprints.ulb.tu-darmstadt.de, [email protected]). Please cite this document as URN:nbn:de:tuda-tuprints-118909, URL: https://tuprints.ulb.tu-darmstadt.de/id/eprint/11890. This publication is licensed under the Creative Commons license Attribution-No Derivatives 4.0 International, https://creativecommons.org/licenses/by-nd/4.0/deed.en.

Wael Alkhatib, M.Sc.: Semantically Enhanced and Minimally Supervised Models for Ontology Construction, Text Classification, and Document Recommendation, © 28 January 2020. Supervisors: Prof. Dr.-Ing. Ralf Steinmetz, Prof. Dr.-Ing. Steffen Staab. Location: Darmstadt. Time frame: 28 January 2020.

ABSTRACT
The proliferation of deliverable knowledge on the web, along with the rapidly increasing number of accessible research publications, leaves researchers, students, and educators overwhelmed.
Scientific Programming, Special Issue: Techniques and Algorithms for Data-Intensive Engineering Environments
Lead Guest Editor: Giner Alor-Hernández
Guest Editors: Jezreel Mejia-Miranda and José María Álvarez-Rodríguez

Copyright © 2018 Hindawi. All rights reserved. This is a special issue published in "Scientific Programming." All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Editorial Board: M. E. Acacio Sanchez, Spain; Christoph Kessler, Sweden; Danilo Pianini, Italy; Marco Aldinucci, Italy; Harald Köstler, Germany; Fabrizio Riguzzi, Italy; Davide Ancona, Italy; José E. Labra, Spain; Michele Risi, Italy; Ferruccio Damiani, Italy; Thomas Leich, Germany; Damian Rouson, USA; Sergio Di Martino, Italy; Piotr Luszczek, USA; Giuseppe Scanniello, Italy; Basilio B. Fraguela, Spain; Tomàs Margalef, Spain; Ahmet Soylu, Norway; Carmine Gravino, Italy; Cristian Mateos, Argentina; Emiliano Tramontana, Italy; Gianluigi Greco, Italy; Roberto Natella, Italy; Autilia Vitiello, Italy; Bormin Huang, USA; Francisco Ortin, Spain; Jan Weglarz, Poland; Chin-Yu Huang, Taiwan; Can Özturan, Turkey
Ontology Learning and Its Application to Automated Terminology Translation
Roberto Navigli and Paola Velardi, Università di Roma La Sapienza
Aldo Gangemi, Institute of Cognitive Sciences and Technology

The OntoLearn system for automated ontology learning extracts relevant domain terms from a corpus of text, relates them to appropriate concepts in a general-purpose ontology, and detects [...]

Although the IT community widely acknowledges the usefulness of domain ontologies, especially in relation to the Semantic Web,1,2 we must overcome several barriers before they become practical and useful tools. Thus far, only a few specific research environments have ontologies. (The "What Is an Ontology?" sidebar on page 24 provides a definition and some background.) Many in the computational-linguistics research community use WordNet,3 but large-scale IT applications based on it require heavy customization. Thus, a critical issue is ontology construction: identifying, defining, and entering concept definitions. In large, complex application domains, this task can be lengthy, costly, and controversial, because people can have different points of view about the same concept. Two main approaches aid large-scale ontology construction. The first one facilitates manual ontology engineering by providing natural language processing tools, including editors, consistency checkers, mediators to support shared decisions, and [...]

[...] and is part of a more general ontology engineering architecture.4,5 Here, we describe the system and an experiment in which we used a machine-learned tourism ontology to automatically translate multi-word terms from English to Italian. The method can apply to other domains without manual adaptation.

OntoLearn architecture
Figure 1 shows the elements of the architecture. Using the Ariosto language processor,6 OntoLearn extracts terminology from a corpus of domain text, such as specialized Web sites and warehouses or documents exchanged among members of a virtual community.
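As a rough illustration of the terminology-extraction step (not the Ariosto processor or OntoLearn itself), the sketch below pulls repeated two-word candidates out of a small, invented tourism-flavoured corpus using plain frequency counting.

```python
import re
from collections import Counter

# Toy terminology extraction: a frequency-based sketch over adjacent word
# pairs (bigrams), with a minimal, hypothetical stopword list.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "are"}

def candidate_terms(corpus: list[str], min_count: int = 2) -> list[tuple[str, int]]:
    """Return repeated two-word candidates that contain no stopwords."""
    counts = Counter()
    for document in corpus:
        words = re.findall(r"[a-z]+", document.lower())
        for first, second in zip(words, words[1:]):
            if first not in STOPWORDS and second not in STOPWORDS:
                counts[f"{first} {second}"] += 1
    return [(term, n) for term, n in counts.most_common() if n >= min_count]

corpus = [
    "Room service and travel agent offices are available.",
    "The travel agent arranged a guided tour and room service.",
    "A guided tour can be booked through the travel agent.",
]
print(candidate_terms(corpus))
# [('travel agent', 3), ('room service', 2), ('guided tour', 2)]
```

A real pipeline would add linguistic filtering and then relate the surviving terms to concepts in a general-purpose ontology such as WordNet, which is the part this sketch leaves out.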
Data Models for Home Services
Proceedings of the 13th Conference of FRUCT Association

Vadym Kramar, Markku Korhonen, Yury Sergeev
Oulu University of Applied Sciences, School of Engineering, Raahe, Finland
{vadym.kramar, markku.korhonen, yury.sergeev}@oamk.fi

Abstract: The widespread penetration of communication technologies allowing web access has enriched the conception of smart homes with new paradigms of home services. Modern home services range far beyond such notions as home automation or use of the Internet. The services expose their ubiquitous nature by being integrated into smart environments and provisioned through a variety of end-user devices. Computational intelligence requires the use of knowledge technologies, and within a given domain, a requirement such as compliance with modern web architecture is essential. This is where Semantic Web technologies excel. The given work presents an overview of important terms, vocabularies, and data models that may be utilised in data and knowledge engineering with respect to home services.

Index Terms: Context, Data engineering, Data models, Knowledge engineering, Semantic Web, Smart homes, Ubiquitous computing.

I. INTRODUCTION
In recent years, the use of Semantic Web technologies to build a giant information space has shown certain benefits. The rapid development of Web 3.0 and the use of its principles in web applications are the best evidence of such benefits. Traditional database design is still, and will remain, widely used in web applications. One of the most important reasons for that is the vast number of databases developed over the years and used in a variety of applications, varying from simple web services to enterprise portals. According to Forrester Research, though, the growing number of document or knowledge bases, such as NoSQL, is no longer a hype [1].
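As a small illustration of an RDF-based data model for a home service (the vocabulary below is hypothetical, not one of the vocabularies surveyed in the paper), here is a sketch that records one temperature observation together with its context using rdflib.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

HOME = Namespace("http://example.org/smart-home#")  # hypothetical vocabulary

# A minimal RDF context model for one home-service observation.
g = Graph()
g.bind("home", HOME)

sensor = HOME.livingRoomThermometer
reading = HOME.reading42

g.add((sensor, RDF.type, HOME.TemperatureSensor))
g.add((sensor, HOME.locatedIn, HOME.LivingRoom))
g.add((reading, RDF.type, HOME.Observation))
g.add((reading, HOME.observedBy, sensor))
g.add((reading, HOME.celsiusValue, Literal(21.5, datatype=XSD.decimal)))
g.add((reading, HOME.observedAt, Literal("2012-11-05T18:30:00", datatype=XSD.dateTime)))

# Serialize the context model as Turtle, the way a home gateway might expose it.
print(g.serialize(format="turtle"))
```

Exposing such data as RDF lets a home service reuse shared vocabularies and reason over context, which is the motivation behind the survey of terms and data models in the paper.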