Ontology Learning and Its Application to Automated Terminology Translation

Pages: 16 · File type: PDF · Size: 1,020 KB

Roberto Navigli and Paola Velardi, Università di Roma La Sapienza
Aldo Gangemi, Institute of Cognitive Sciences and Technology

The OntoLearn system for automated ontology learning extracts relevant domain terms from a corpus of text, relates them to appropriate concepts in a general-purpose ontology, and detects taxonomic and other semantic relations among the concepts. The authors used it to automatically translate multiword terms from English to Italian.

Although the IT community widely acknowledges the usefulness of domain ontologies, especially in relation to the Semantic Web,1,2 we must overcome several barriers before they become practical and useful tools. Thus far, only a few specific research environments have ontologies. (The "What Is an Ontology?" sidebar provides a definition and some background.) Many in the computational-linguistics research community use WordNet,3 but large-scale IT applications based on it require heavy customization.

Thus, a critical issue is ontology construction—identifying, defining, and entering concept definitions. In large, complex application domains, this task can be lengthy, costly, and controversial, because people can have different points of view about the same concept. Two main approaches aid large-scale ontology construction. The first facilitates manual ontology engineering by providing natural language processing tools, including editors, consistency checkers, mediators to support shared decisions, and ontology import tools. The second relies on machine learning and automated language-processing techniques to extract concepts and ontological relations from structured and unstructured data such as databases and text. Few systems exploit both approaches. The first approach predominates in most development toolsets such as Kaon, Protégé, Chimaera, and WebOnto, but some systems also implement machine learning techniques.1

Our OntoLearn system is an infrastructure for automated ontology learning from domain text. It is, as far as we know, the only system that uses both natural language processing and machine learning techniques, and it is part of a more general ontology engineering architecture.4,5 Here, we describe the system and an experiment in which we used a machine-learned tourism ontology to automatically translate multiword terms from English to Italian. The method can apply to other domains without manual adaptation.

OntoLearn architecture

Figure 1 shows the elements of the architecture. Using the Ariosto language processor,6 OntoLearn extracts terminology from a corpus of domain text, such as specialized Web sites and warehouses or documents exchanged among members of a virtual community. It then filters the terminology using natural language processing and statistical techniques that perform comparative analysis across different domains, or contrastive corpora. Next, we use the WordNet and SemCor3 lexical knowledge bases to semantically interpret the terms. We then relate concepts according to taxonomic (kind-of) and other semantic relations, generating a domain concept forest; we use WordNet and our rule-based inductive-learning method to extract such relations. Finally, OntoLearn integrates the domain concept forest with WordNet to create a pruned and specialized view of the domain ontology. We then use ontology editing, validation, and management tools4,5 that are part of the more general architecture to enrich, correct, and update the generated ontology.

Figure 1. The OntoLearn architecture: a domain corpus feeds extraction of candidate terminology (Ariosto language processor), filtering of domain terminology (against contrastive corpora), semantic term interpretation (WordNet and SemCor), identification of taxonomic and similarity relations, and identification of other semantic relations (machine-learned rule base), followed by ontology integration, which produces the domain concept forest and, finally, the domain ontology.

The main novel aspect of our machine learning method is semantic interpretation. We identify the right senses (concepts) for complex domain term components and the semantic relations between them. For example, the result of this process on the compound transport company selects the sense "company-enterprise" for company (as opposed to the "social gathering" or "crew" senses) and the WordNet sense "commercial enterprise" for transport (as opposed to the "exchange of molecules," "overwhelming emotion," and "transport of magnetic tape" senses), and associates a purpose relation (a company for transporting goods and people) between the two concepts.
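To make the sense-selection problem concrete, here is a minimal sketch that enumerates the search space a semantic interpreter has to prune for a two-word compound such as transport company. It assumes NLTK's WordNet interface rather than the article's own WordNet/SemCor tooling, and the helper names noun_senses and sense_combinations are illustrative; sense counts also vary across WordNet versions.

```python
# A minimal sketch, assuming NLTK with the WordNet corpus available
# (pip install nltk). Sense inventories differ across WordNet versions,
# so the printed counts may not match those cited in the article.
from itertools import product

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def noun_senses(word):
    """Return the noun synsets (candidate concepts) of a word."""
    return wn.synsets(word, pos=wn.NOUN)

def sense_combinations(word1, word2):
    """Enumerate every sense pair for a two-word compound -- the search
    space that semantic interpretation must narrow to a single pair."""
    return list(product(noun_senses(word1), noun_senses(word2)))

for word in ("transport", "company"):
    print(word)
    for synset in noun_senses(word):
        print(f"  {synset.name()}: {synset.definition()}")

combos = sense_combinations("transport", "company")
print(f"{len(combos)} candidate sense combinations for 'transport company'")
```

OntoLearn resolves this ambiguity using the WordNet and SemCor knowledge bases and then attaches a semantic relation (here, purpose) between the chosen senses; the sketch only shows why disambiguation is needed.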
We know of nothing similar in the ontology learning literature. Many described methods first extract domain terms using various statistical methods, then detect the taxonomic and other types of relations between terms. The literature1,7-9 uses the notion of domain term and domain concept interchangeably, but no semantic interpretation of terms actually takes place. For example, the concept digital printing technology is considered a kind of printing technology by virtue of simple string inclusion.7 However, printing has four senses in WordNet and technology has two. The WordNet lexical ontology thus contains eight possible concept combinations for printing technology. Identifying the correct sense combinations is important in determining the taxonomic relations between a new domain concept and an existing ontology. Furthermore, semantic interpretation is relevant in view of document semantic indexing and retrieval. For example, a query for hotel facility could retrieve documents including concepts that are related by a kinship relation, such as swimming pool or conference room. Semantic interpretation is also useful for automatic translation, as we later demonstrate. Knowing the right senses of the complex term components in the source language—given a bilingual dictionary—greatly simplifies the problem of selecting the correct translation for each component in the target language.

OntoLearn has three main phases: terminology extraction, semantic interpretation, and creation of a specialized view of WordNet.

Phase 1: Terminology extraction

Terminology can be considered the surface appearance of relevant domain concepts. Candidate terminological expressions are usually captured with shallow techniques that range from stochastic methods to more sophisticated syntactic approaches. Obviously, richer syntactic information positively influences the quality of the result to be input to statistical filtering. We use Ariosto to parse documents in the application domain to extract a list Tc of syntactically plausible terminological candidates. Examples include compounds (credit card), adjective-nouns (public transport), and base noun phrases (board of directors).

High frequency in a corpus is a property observable for terminological as well as nonterminological expressions, such as last week. We measure the specificity of a terminology candidate with respect to the target domain via comparative analysis across different domains. To this end, we define a specific domain relevance score. A quantitative definition of the DR can be given according to the amount of information captured in the target corpus relative to the entire collection of corpora. More precisely, given a set of n domains {D1, ..., Dn}, the domain relevance of a term t in class Dk is computed as

\[ DR_{t,k} = \frac{P(t \mid D_k)}{\sum_{j=1}^{n} P(t \mid D_j)}, \]

where P(t | Dk) is estimated by

\[ E\bigl(P(t \mid D_k)\bigr) = \frac{f_{t,k}}{\sum_{t' \in D_k} f_{t',k}}, \]

where f_{t,k} is the frequency of term t in the domain Dk.
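The domain relevance score above is straightforward to compute once candidate terms have been counted per domain. The sketch below is an illustrative, frequency-based implementation on toy data; the domain names, term counts, and helper names (term_probabilities, domain_relevance) are assumptions made for the example, not the article's Ariosto-based pipeline or its tourism corpus.

```python
# A minimal sketch of the domain relevance score DR_{t,k}, assuming
# candidate terms have already been extracted and counted per domain
# (the article uses the Ariosto parser for that step).
from collections import Counter

def term_probabilities(term_counts):
    """Estimate P(t | D_k) as f_{t,k} over the total term count in D_k."""
    total = sum(term_counts.values())
    return {term: freq / total for term, freq in term_counts.items()}

def domain_relevance(term, target, domains):
    """DR_{t,k} = P(t | D_k) / sum_j P(t | D_j) over all domains D_1..D_n."""
    probs = {name: term_probabilities(counts).get(term, 0.0)
             for name, counts in domains.items()}
    denominator = sum(probs.values())
    return probs[target] / denominator if denominator else 0.0

# Toy domain corpora, represented as counts of extracted candidate terms.
domains = {
    "tourism": Counter({"swimming pool": 12, "credit card": 7, "last week": 9}),
    "finance": Counter({"credit card": 40, "board of directors": 15, "last week": 11}),
}

for term in ("swimming pool", "credit card", "last week"):
    print(term, round(domain_relevance(term, "tourism", domains), 3))
```

With these toy counts, the tourism-specific swimming pool scores highest, while terms that also occur heavily in the contrastive finance corpus score lower; in OntoLearn this filter is combined with the domain consensus filter described next.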
A second filter operates on the principle that a term cannot be a clue for a domain Dk unless it appears in several documents; there must be some consensus on using that term in the domain Dk. The domain consensus of term t in class Dk captures those terms that appear frequently across a given domain's documents. DC is defined as ...

What Is an Ontology?

A domain ontology seeks to reduce or eliminate conceptual and terminological confusion among the members of a user community who need to share various kinds of electronic documents and information. It does so by identifying and properly defining a set of relevant concepts that characterize a given application domain, say, for travel agents or medical practitioners. An ontology specifies a shared understanding of a domain. It contains a set of generic concepts (such as "object," "process," "accommodation," and "single room"), together with their definitions and interrelationships. The construction of its unifying conceptual framework fosters communication and cooperation among people, better enterprise organization, and system interoperability. It also provides such system-engineering benefits as reusability, reliability, and specification.

Figure A. The three levels of generality in a domain ontology: a foundational ontology at the top, a core ontology below it, and a specific domain ontology at the bottom.

Ontologies can have different degrees of formality, but they must include metadata such as concepts, relations, axioms, instances, or terms that lexicalize concepts. From the terminological viewpoint, an ontology can even be seen ...

... top ontology. The result, the core ontology, usually includes a few hundred application-domain concepts. Although many projects eventually succeed in defining a core domain ontology ...
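To ground the sidebar's description (generic concepts plus their definitions and interrelationships, arranged from general to domain-specific levels), here is a small, purely illustrative sketch of an ontology fragment in Python. The Concept class, its field names, and the abbreviated glosses are assumptions made for the example; the example concepts come from the sidebar and the article's running example, and the kind-of links between them are added for illustration.

```python
# A purely illustrative sketch of a tiny ontology fragment: concepts linked
# by kind-of (taxonomic) and other named semantic relations. The class and
# field names are assumptions for this example, not OntoLearn structures.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    gloss: str = ""
    kind_of: "Concept | None" = None              # taxonomic parent
    related: dict = field(default_factory=dict)   # other named relations

    def ancestors(self):
        """Walk the kind-of chain up toward the most generic concept."""
        node, chain = self.kind_of, []
        while node is not None:
            chain.append(node)
            node = node.kind_of
        return chain

# Generic concepts named in the sidebar, plus the article's running example.
obj = Concept("object")
accommodation = Concept("accommodation", kind_of=obj)
single_room = Concept("single room", "a room assigned to one guest", kind_of=accommodation)

company = Concept("company", 'the "company-enterprise" sense')
transport = Concept("transport", 'the "commercial enterprise" sense')
transport_company = Concept("transport company", kind_of=company)
transport_company.related["purpose"] = transport  # a company for transporting goods and people

print([c.name for c in single_room.ancestors()])      # ['accommodation', 'object']
print(transport_company.related["purpose"].name)      # transport
```

In OntoLearn, the analogous structure is the domain concept forest, whose trees are then integrated with WordNet to form the pruned, specialized domain ontology.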
Recommended publications
  • A Data-Driven Framework for Assisting Geo-Ontology Engineering Using a Discrepancy Index
  • Semi-Automated Ontology Based Question Answering System Open Access
  • A Comparison of Knowledge Extraction Tools for the Semantic Web
  • Arxiv:1809.06223V1 [Cs.CL] 17 Sep 2018
  • A Combined Method for E-Learning Ontology Population Based on NLP and User Activity Analysis
  • Roberto Navigli Curriculum Vitæ
  • Information Extraction Using Natural Language Processing
  • A Terminology Extraction System for Technology Related Terms
  • Semantically Enhanced and Minimally Supervised Models for Ontology Construction, Text Classification, and Document Recommendation Alkhatib, Wael (2020)
  • Download Special Issue
  • Semi-Automatic Terminology Ontology Learning Based on Topic Modeling
  • A Comparative Analysis for English and Ukrainian Texts Processing Based on Semantics and Syntax Approach