Building Specialized Bilingual Lexicons Using Word Sense Disambiguation


Dhouha Bouamor (CEA, LIST, Vision and Content Engineering Laboratory, 91191 Gif-sur-Yvette CEDEX, France) [email protected]
Nasredine Semmar (CEA, LIST, Vision and Content Engineering Laboratory, 91191 Gif-sur-Yvette CEDEX, France) [email protected]
Pierre Zweigenbaum (LIMSI-CNRS, F-91403 Orsay CEDEX, France) [email protected]

International Joint Conference on Natural Language Processing, pages 952-956, Nagoya, Japan, 14-18 October 2013

Abstract

This paper presents an extension of the standard approach used for bilingual lexicon extraction from comparable corpora. We study the ambiguity problem revealed by the seed bilingual dictionary used to translate context vectors, and augment the standard approach with a Word Sense Disambiguation process. Our aim is to identify the translations of words that are most likely to give the best representation of words in the target language. On two specialized French-English and Romanian-English comparable corpora, empirical results show that the proposed method consistently outperforms the standard approach.

1 Introduction

Over the years, bilingual lexicon extraction from comparable corpora has attracted a wealth of research (Fung, 1998; Rapp, 1995; Chiao and Zweigenbaum, 2003). The main work in this research area can be seen as an extension of Harris's distributional hypothesis (Harris, 1954). It is based on the simple observation that a word and its translation are likely to appear in similar contexts across languages (Rapp, 1995). Based on this assumption, the alignment method known as the standard approach builds and compares context vectors for each word of the source and target languages. A particularity of this approach is that, to enable the comparison of context vectors, it requires a seed bilingual dictionary to translate source context vectors.

The use of the bilingual dictionary is problematic when a word has several translations, whether they are synonymous or polysemous. For instance, the French word action can be translated into English as share, stock, lawsuit or deed. In such cases, it is difficult to identify, in flat resources like bilingual dictionaries, wherein entries are usually unweighted and unordered, which translations are most relevant. The standard approach considers all available translations and gives them the same importance in the resulting translated context vectors, independently of the domain of interest and of word ambiguity. Thus, in the financial domain, translating action into deed or lawsuit would probably introduce noise into context vectors.

In this paper, we present a novel approach which addresses the word ambiguity problem neglected by the standard approach. We introduce the use of a WordNet-based semantic similarity measure to disambiguate translated context vectors. The basic intuition behind this method is that instead of taking all translations of each seed word when translating a context vector, we use only the translations that are most likely to give the best representation of the context vector in the target language. We test the method on two comparable corpora specialized in the Breast Cancer domain, for the French-English and Romanian-English language pairs. This choice allows us to study the behavior of the disambiguation both for a pair of richly resourced languages and for a pair that includes Romanian, a language with fewer associated resources than French and English.
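To ground the discussion, here is a minimal, self-contained sketch of the standard approach as described above: build co-occurrence context vectors, translate a source vector through a seed dictionary (giving every listed translation equal weight, which is precisely the noise source the paper targets), and rank target words by cosine similarity. The toy corpora, window size and seed dictionary are illustrative assumptions, not the paper's data.

```python
from collections import Counter
from math import sqrt

def context_vectors(tokens, window=3):
    """Map each word to a bag of words co-occurring within +/- window."""
    vectors = {}
    for i, word in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        vectors.setdefault(word, Counter()).update(ctx)
    return vectors

def translate_vector(vector, seed_dict):
    """Standard approach: every dictionary translation gets the same weight."""
    translated = Counter()
    for word, freq in vector.items():
        for trans in seed_dict.get(word, []):
            translated[trans] += freq
    return translated

def cosine(u, v):
    dot = sum(f * v.get(w, 0) for w, f in u.items())
    norm = sqrt(sum(f * f for f in u.values())) * sqrt(sum(f * f for f in v.values()))
    return dot / norm if norm else 0.0

# Toy corpora and seed dictionary (illustrative only).
fr = "la banque vend chaque action du marché".split()
en = "the bank sells each share on the market".split()
seed = {"la": ["the"], "banque": ["bank"], "vend": ["sells"],
        "chaque": ["each"], "du": ["of"], "marché": ["market"]}

translated = translate_vector(context_vectors(fr)["action"], seed)
en_vectors = context_vectors(en)
ranked = sorted(en_vectors, key=lambda w: cosine(translated, en_vectors[w]), reverse=True)
print(ranked[:3])  # candidate translations of "action", best first
```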
2 Related Work

Recent improvements of the standard approach are based on the assumption that the more representative the context vectors are, the better the bilingual lexicon extraction is. Prochasson et al. (2009) used transliterated words and scientific compound words as "anchor points". Giving these words higher priority when comparing target vectors improved bilingual lexicon extraction. In addition to transliteration, Rubino and Linarès (2011) combined the contextual representation with a thematic one. The basic intuition of their work is that a term and its translation share thematic similarities. Hazem and Morin (2012) recently proposed a method that filters the entries of the bilingual dictionary based on POS-tagging and domain relevance criteria, but no improvement was demonstrated.

Gaussier et al. (2004) attempted to solve the problem of different word ambiguities in the source and target languages. They investigated a number of techniques, including canonical correlation analysis and multilingual probabilistic latent semantic analysis. The best results, with a very small improvement, were reported for a mixed method. One important difference with Gaussier et al. (2004) is that they focus on word ambiguities in both the source and target languages, whereas we consider that it is sufficient to disambiguate only translated source context vectors.

3 Context Vector Disambiguation

The approach we propose augments the standard approach used for bilingual lexicon mining from comparable corpora. As mentioned in Section 1, when lexical extraction applies to a specific domain, not all translations in the bilingual dictionary are relevant for the target context vector representation. For this reason, we introduce a WordNet-based WSD process that aims at improving the adequacy of context vectors and thereby the results of the standard approach.

A large number of WSD techniques have been proposed in the literature. The most popular ones are those that compute semantic similarity with the help of existing thesauri such as WordNet (Fellbaum, 1998). This thesaurus has been applied to many tasks relying on word-based similarity, including document (Hwang et al., 2011) and image (Cho et al., 2007; Choi et al., 2012) retrieval systems. In this work, we use this resource to derive a semantic similarity between lexical units within the same context vector. To the best of our knowledge, this is the first application of WordNet to the task of bilingual lexicon extraction from comparable corpora.

Once context vectors are translated into the target language, the disambiguation process intervenes. This process operates locally on each context vector and aims at finding the most prominent translations of polysemous words. For this purpose, we use monosemic words as a seed set of disambiguated words from which to infer the senses of polysemous words' translations. We hypothesize that a word is monosemic if it is associated with only one entry in the bilingual dictionary. We checked this assumption by probing monosemic entries of the bilingual dictionary against WordNet and found that 95% of the entries are monosemic in both resources.

Formally, we derive a semantic similarity value between all the translations provided for each polysemous word by the bilingual dictionary and all monosemic words appearing within the same context vector. A relatively large number of word-to-word similarity metrics have been proposed in the literature, ranging from path-length measures computed on semantic networks to metrics based on models of distributional similarity learned from large text collections. For simplicity, we use in this work the Wu and Palmer (1994) (WUP) path-length-based semantic similarity measure; Lin (1998) showed that this metric achieves good performance among comparable measures. WUP computes a score (Equation 1) denoting how similar two word senses are, based on the depth of the two synsets s1 and s2 in the WordNet taxonomy and that of their Least Common Subsumer (LCS), i.e., the most specific concept that they share as an ancestor:

    WupSim(s1, s2) = 2 * depth(LCS) / (depth(s1) + depth(s2))    (1)

In practice, since a word can belong to more than one synset in WordNet, we determine the semantic similarity between two words w1 and w2 as the maximum WupSim over all pairs of their synsets:

    SemSim(w1, w2) = max{ WupSim(s1, s2) : (s1, s2) in synsets(w1) x synsets(w2) }    (2)

Then, to identify the most prominent translations of each polysemous unit w_p, an average similarity is computed for each translation w_p^j of w_p:

    Ave_Sim(w_p^j) = (1/N) * sum_{i=1}^{N} SemSim(w_i, w_p^j)    (3)

where N is the total number of monosemic words in the vector and SemSim(w_i, w_p^j) is the similarity value between the i-th monosemic word and w_p^j. According to the average relatedness values Ave_Sim(w_p^j), we obtain for each polysemous word w_p an ordered list of translations w_p^1 ... w_p^n. This allows us to select the translations that are more salient than the others to represent the word to be translated.
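As a concrete illustration, Equations (1)-(3) can be sketched with NLTK's WordNet interface, whose wup_similarity method implements the Wu-Palmer measure. The monosemic context words and the candidate translations of the French word action below are toy assumptions; this is not the authors' code.

```python
# Requires: pip install nltk, then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def sem_sim(w1, w2):
    """Equation (2): max Wu-Palmer similarity over all synset pairs."""
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def ave_sim(translation, monosemic_words):
    """Equation (3): mean similarity to the monosemic words of the vector."""
    if not monosemic_words:
        return 0.0
    return sum(sem_sim(w, translation) for w in monosemic_words) / len(monosemic_words)

# Assumed monosemic words of a translated financial-domain context vector.
monosemic = ["market", "bank", "dividend"]
translations = ["share", "stock", "lawsuit", "deed"]
ranked = sorted(translations, key=lambda t: ave_sim(t, monosemic), reverse=True)
print(ranked)  # most domain-relevant translations of "action" first
```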
4 Experiments and Results

4.1 Resources

4.1.1 Comparable corpora

We conducted our experiments on two comparable corpora specialized in the Breast Cancer domain, one French-English and one Romanian-English; their sizes are given in Table 1.

    Corpus    French      English
    Words     396,524     524,805

    Corpus    Romanian    English
    Words     22,539      322,507

    Table 1: Comparable corpora sizes in terms of words.

4.1.2 Bilingual dictionary

The resulting bilingual dictionary contains about 136,681 entries for Romanian-English, with an average of 1 translation per word.

4.1.3 Evaluation list

In bilingual terminology extraction from comparable corpora, a reference list is required to evaluate the performance of the alignment. Such lists are usually composed of about 100 single terms (Hazem and Morin, 2012; Chiao and Zweigenbaum, 2002). Here, we created a reference list for each language pair. The French-English list contains 96 terms extracted from the French-English MESH and UMLS thesauri. The Romanian-English reference list was created by a native speaker and contains 38 pairs of words. Note that reference term pairs appear at least five times in each part of both comparable corpora.

4.2 Experimental setup

Three other parameters need to be set: (1) the window size, (2) the association measure, and (3) the similarity measure.
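The excerpt stops before the results, but evaluation against such a reference list is typically reported as precision over the top-k ranked candidates. The sketch below illustrates that protocol; the metric choice and the toy data are assumptions of common practice in this literature, not the paper's reported setup.

```python
def precision_at_k(candidates, reference, k=20):
    """candidates: {source_term: ranked list of target candidates};
    reference: {source_term: gold translation}."""
    hits = sum(1 for src, gold in reference.items()
               if gold in candidates.get(src, [])[:k])
    return hits / len(reference)

# Toy reference list and system output (illustrative only).
reference = {"sein": "breast", "tumeur": "tumour"}
candidates = {"sein": ["chest", "breast", "bosom"], "tumeur": ["tumour", "cancer"]}
print(precision_at_k(candidates, reference, k=2))  # -> 1.0
```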
Recommended publications
  • A Comparison of Knowledge Extraction Tools for the Semantic Web
    A Comparison of Knowledge Extraction Tools for the Semantic Web. Aldo Gangemi (1,2). (1) LIPN, Université Paris 13 - CNRS - Sorbonne Cité, France; (2) STLab, ISTC-CNR, Rome, Italy.

    Abstract. In the last years, basic NLP tasks (NER, WSD, relation extraction, etc.) have been configured for Semantic Web tasks including ontology learning, linked data population, entity resolution, NL querying to linked data, etc. Some assessment of the state of the art of existing Knowledge Extraction (KE) tools when applied to the Semantic Web is then desirable. In this paper we describe a landscape analysis of several tools, either conceived specifically for KE on the Semantic Web, or adaptable to it, or even acting as aggregators of extracted data from other tools. Our aim is to assess the currently available capabilities against a rich palette of ontology design constructs, focusing specifically on the actual semantic reusability of KE output.

    1 Introduction. We present a landscape analysis of the current tools for Knowledge Extraction from text (KE), when applied on the Semantic Web (SW). Knowledge Extraction from text has become a key semantic technology, and has become key to the Semantic Web as well (see e.g. [31]). Indeed, interest in ontology learning is not new (see e.g. [23], which dates back to 2001, and [10]), and an advanced tool like Text2Onto [11] was set up already in 2005. However, interest in KE was initially limited in the SW community, which preferred to concentrate on manual design of ontologies as a seal of quality. Things started changing after the linked data bootstrapping provided by DBpedia [22], and the consequent need for substantial population of knowledge bases, schema induction from data, natural language access to structured data, and in general all applications that make joint exploitation of structured and unstructured content.
  • A Combined Method for E-Learning Ontology Population Based on NLP and User Activity Analysis
    A Combined Method for E-Learning Ontology Population based on NLP and User Activity Analysis. Dmitry Mouromtsev, Fedor Kozlov, Liubov Kovriguina and Olga Parkhimovich (ITMO University, St. Petersburg, Russia). [email protected], [email protected], [email protected], [email protected]

    Abstract. The paper describes a combined approach to maintaining an e-learning ontology in a dynamic and changing educational environment. The developed NLP algorithm, based on morpho-syntactic patterns, is applied for terminology extraction from course tasks, which allows extracted terms to be interlinked with the instances of the system's ontology whenever some educational materials are changed. These links are used to gather statistics, evaluate the quality of lecture and task materials, analyse students' answers to the tasks, and detect difficult terminology of the course in general (for the teachers) and its understandability in particular (for every student). Keywords: Semantic Web, Linked Learning, terminology extraction, education, educational ontology population.

    1 Introduction. Nowadays, reusing online educational resources is becoming one of the most promising approaches for e-learning systems development. A good example of using semantics to make education materials reusable and flexible is the SlideWiki system [1]. The key feature of an ontology-based e-learning system is the possibility for tutors and students to treat elements of educational content as named objects and named relations between them. These names are understandable both for humans, as titles, and for the system, as types of data. Thus educational materials in the e-learning system thoroughly reflect the structure of the education process via relations between courses, modules, lectures, tests and terms.
  • Information Extraction Using Natural Language Processing
    INFORMATION EXTRACTION USING NATURAL LANGUAGE PROCESSING. Cvetana Krstev, University of Belgrade, Faculty of Philology. Information Retrieval and/vs. Natural Language Processing: so close yet so far.

    Outline of the talk: ◦ Views on Information Retrieval (IR) and Natural Language Processing (NLP) ◦ IR and NLP in Serbia ◦ Language Resources (LR) at the core of NLP, at the University of Belgrade (4 representative resources) ◦ LR and NLP for Information Retrieval and Information Extraction (IE), at the University of Belgrade (4 representative applications)

    Wikipedia: ◦ Information retrieval (IR) is the activity of obtaining information resources relevant to an information need from a collection of information resources. Searches can be based on full-text or other content-based indexing. ◦ Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human-computer interaction. Many challenges in NLP involve natural language understanding, enabling computers to derive meaning from human or natural language input; others involve natural language generation.

    Experts: ◦ As an academic field of study, Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). (C. D. Manning, P. Raghavan, H. Schütze, "Introduction to Information Retrieval", Cambridge University Press, 2008) ◦ The term 'Natural Language Processing' (NLP) is normally used to describe the function of software or hardware components in a computer system which analyze or synthesize spoken or written language.
  • A Terminology Extraction System for Technology Related Terms
    TEST: A Terminology Extraction System for Technology Related Terms. Murhaf Hossari (ADAPT Centre, Trinity College Dublin), Soumyabrata Dev (ADAPT Centre, Trinity College Dublin), John D. Kelleher (ADAPT Centre, Technological University Dublin), Dublin, Ireland. [email protected], [email protected], [email protected]

    ABSTRACT. Tracking developments in the highly dynamic data-technology landscape is vital to keeping up with novel technologies and tools in the various areas of Artificial Intelligence (AI). However, it is difficult to keep track of all the relevant technology keywords. In this paper, we propose a novel system that addresses this problem. This tool is used to automatically detect the existence of new technologies and tools in text, and to extract the terms used to describe these new technologies. The extracted new terms can be logged as new AI technologies as they are found on the fly on the web, and subsequently classified into the relevant semantic labels and AI domains.

    ... a wide range of applications [6], ranging from e-mail spam filtering and the advertising and marketing industry [4, 7] to social media. Particularly in the realm of online and blog articles, the amount of information grows at an almost exponential rate. It is therefore important for researchers to develop a system that can automatically parse millions of documents and identify the key technological terms. Such a system will greatly reduce the manual work and save several man-hours. In this paper, we develop a machine-learning based solution that is capable of the discovery and extraction of information related to AI technologies.
  • Ontology Learning and Its Application to Automated Terminology Translation
    Natural Language Processing. Ontology Learning and Its Application to Automated Terminology Translation. Roberto Navigli and Paola Velardi, Università di Roma La Sapienza; Aldo Gangemi, Institute of Cognitive Sciences and Technology.

    Although the IT community widely acknowledges the usefulness of domain ontologies, especially in relation to the Semantic Web,1,2 we must overcome several barriers before they become practical and useful tools. Thus far, only a few specific research environments have ontologies. (The "What Is an Ontology?" sidebar on page 24 provides a definition and some background.) Many in the computational-linguistics research community use WordNet,3 but large-scale IT applications based on it require heavy customization. Thus, a critical issue is ontology construction: identifying, defining, and entering concept definitions. In large, complex application domains, this task can be lengthy, costly, and controversial, because people can have different points of view about the same concept. Two main approaches aid large-scale ontology construction. The first one facilitates manual ontology engineering by providing natural language processing tools, including editors, consistency checkers, mediators to support shared decisions, and ...

    [Pull quote:] The OntoLearn system for automated ontology learning extracts relevant domain terms from a corpus of text, relates them to appropriate concepts in a general-purpose ontology, and detects ...

    OntoLearn is part of a more general ontology engineering architecture.4,5 Here, we describe the system and an experiment in which we used a machine-learned tourism ontology to automatically translate multi-word terms from English to Italian. The method can apply to other domains without manual adaptation.

    OntoLearn architecture. Figure 1 shows the elements of the architecture. Using the Ariosto language processor,6 OntoLearn extracts terminology from a corpus of domain text, such as specialized Web sites and warehouses or documents exchanged among members of a virtual community.
  • Translate's Localization Guide
    Translate's Localization Guide, Release 0.9.0. Translate, Jun 26, 2020. Contents: 1. Localisation Guide; 2. Glossary; 3. Language Information.

    Chapter 1: Localisation Guide. The general aim of this document is not to replace other well written works but to draw them together. So for instance the section on projects contains information that should help you get started and point you to the documents that are often hard to find. The section on translation should provide a general enough overview of common mistakes and pitfalls. We have found the localisation community very fragmented and hope that through this document we can bring people together and unify information that is out there but in many, many different places. The one section that we feel is unique is the guide to developers: they make assumptions about localisation without fully understanding the implications. We complain, but honestly there is not one place that can give a developer an overview of what is needed from them; we hope that the developer section goes a long way to solving that issue.

    1.1 Purpose. The purpose of this document is to provide one reference for localisers. You will find lots of information on localising and packaging on the web, but not a single resource that can guide you. Most of the information is also domain specific, i.e. it addresses KDE, Mozilla, etc. We hope that this is more general. This document also goes beyond the technical aspects of localisation, which seems to be the domain of other localisation documents.
  • Generating Domain Terminologies Using Root- and Rule-Based Terms
    Generating Domain Terminologies using Root- and Rule-Based Terms. Jacob Collard (1), T. N. Bhat (2), Eswaran Subrahmanian (3,4), Ram D. Sriram (3), John T. Elliot (2), Ursula R. Kattner (2), Carelyn E. Campbell (2), Ira Monarch (4,5). (1) Independent Consultant, Ithaca, New York; (2) Materials Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD; (3) Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, MD; (4) Carnegie Mellon University, Pittsburgh, PA; (5) Independent Consultant, Pittsburgh, PA.

    Abstract. Motivated by the need for flexible, intuitive, reusable, and normalized terminology for guiding search and building ontologies, we present a general approach for generating sets of such terminologies from natural language documents. The terms that this approach generates are root- and rule-based terms, generated by a series of rules designed to be flexible, to evolve, and, perhaps most important, to protect against ambiguity and standardize semantically similar but syntactically distinct phrases to a normal form. This approach combines several linguistic and computational methods that can be automated with the help of training sets to quickly and consistently extract normalized terms. We discuss how this can be extended as natural language technologies improve and how the strategy applies to common use-cases such as search, document entry and archiving, and identifying, tracking, and predicting scientific and technological trends. Keywords: dependency parsing; natural language processing; ontology generation; search; terminology generation; unsupervised learning.

    1. Introduction. 1.1 Terminologies and Semantic Technologies. Services and applications on the world-wide web, as well as standards defined by the World Wide Web Consortium (W3C), the primary standards organization for the web, have been integrating semantic technologies into the Internet since 2001 (Koivunen and Miller 2001).
  • Terminology Extraction, Translation Tools and Comparable Corpora
    Terminology Extraction, Translation Tools and Comparable Corpora: TTC concept, midterm progress and achieved results. Tatiana Gornostay (a), Anita Gojun (b), Marion Weller (b), Ulrich Heid (b), Emmanuel Morin (c), Beatrice Daille (c), Helena Blancafort (d), Serge Sharoff (e), Claude Méchoulam (f). (a) Tilde; (b) Institute ...

    To cite this version: Tatiana Gornostay, Anita Gojun, Marion Weller, Ulrich Heid, Emmanuel Morin, et al. Terminology Extraction, Translation Tools and Comparable Corpora: TTC concept, midterm progress and achieved results. LREC 2012 Workshop on Creating Cross-language Resources for Disconnected Languages and Styles (CREDISLAS), May 2012, Istanbul, Turkey. 4 p. hal-00819909.

    HAL Id: hal-00819909, https://hal.archives-ouvertes.fr/hal-00819909, submitted on 9 May 2013. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
  • Tbxtools: a Free, Fast and Flexible Tool for Automatic Terminology Extraction
    TBXTools: A Free, Fast and Flexible Tool for Automatic Terminology Extraction. Antoni Oliver, Mercè Vàzquez (Universitat Oberta de Catalunya). [email protected], [email protected]

    Abstract. The manual identification of terminology from specialized corpora is a complex task that needs to be addressed by flexible tools, in order to facilitate the construction of multilingual terminologies, which are the main resources for computer-assisted translation tools, machine translation or ontologies. The automatic terminology extraction tools developed so far either use proprietary code or open source code that is limited to certain software functionalities. To automatically extract terms from specialized corpora for different purposes such as constructing dictionaries, thesauruses or translation memories, we need open source tools into which new functionalities can easily be integrated to improve term selection. This paper presents TBXTools, a free automatic terminology extraction tool that implements linguistic and statistical methods for multiword term extraction.

    ... computer-assisted translation, thesaurus construction, classification, indexing, information retrieval, and also text mining and text summarisation (Heid and McNaught, 1991; Frantzi and Ananiadou, 1996; Vu et al., 2008). The automatic terminology extraction tools developed in recent years allow easier manual term extraction from a specialized corpus, which is a long, tedious and repetitive task that risks being unsystematic and subjective, very costly in economic terms and limited by the currently available information. However, existing tools should be improved in order to obtain more consistent terminology and greater productivity (Gornostay, 2010). In the last few years, several term extraction tools have been developed, but most of them are language-dependent: French and English - Fastr (Jacquemin, 1999) and Acabit (Daille, 2003); Portuguese - Extracterm (Costa et al., 2004) and ExATOlp (Lopes et al., 2009); Spanish-Basque - Elexbi (Hernaiz et al., 2006); Spanish-German ...
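    As a hedged illustration of the statistical family of methods TBXTools implements, the sketch below extracts frequency-filtered n-gram candidates whose edges are not stopwords. It is not TBXTools' own code; the stopword list, n-gram length and threshold are toy assumptions.

```python
from collections import Counter

STOP = {"the", "of", "and", "a", "in", "to", "is", "for"}

def multiword_candidates(tokens, n=2, min_freq=2):
    """Frequency-ranked n-grams that neither start nor end with a stopword."""
    grams = Counter(
        tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)
        if tokens[i].lower() not in STOP and tokens[i + n - 1].lower() not in STOP
    )
    return [(" ".join(g), f) for g, f in grams.most_common() if f >= min_freq]

text = ("terminology extraction from specialized corpora supports "
        "terminology extraction for translation memories").split()
print(multiword_candidates(text))  # e.g. [('terminology extraction', 2)]
```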
  • The Interplay Between Lexical Resources and Natural Language Processing
    Tutorial on: The interplay between lexical resources and Natural Language Processing. Luis Espinosa-Anke (linguist), Jose Camacho-Collados (mathematician), Mohammad Taher Pilehvar (computer scientist). NAACL 2018. Google Group: goo.gl/JEazYH

    Outline: 1. Introduction; 2. Overview of Lexical Resources; 3. NLP for Lexical Resources; 4. Lexical Resources for NLP; 5. Conclusion and Future Directions.

    Introduction. ◦ "A lexical resource (LR) is a database consisting of one or several dictionaries." (en.wikipedia.org/wiki/Lexical_resource) ◦ "What is lexical resource? In a word it is vocabulary and it matters for IELTS writing because ..." (dcielts.com/ielts-writing/lexical-resource) ◦ "The term Language Resource refers to a set of speech or language data and descriptions in machine readable form, used for ..." (elra.info/en/about/what-language-resource)
  • Building Ontologies from Folksonomies and Linked Data: Data Structures and Algorithms Technical Report - 22 May 2012
    Building ontologies from folksonomies and linked data: Data structures and algorithms. Technical Report, 22 May 2012. Andrés García-Silva (1), Jael García-Castro (2), Alexander García (3), Oscar Corcho (1), Asunción Gómez-Pérez (1). (1) Ontology Engineering Group, Facultad de Informática, Universidad Politécnica de Madrid, Spain, {hgarcia,ocorcho,asun}@fi.upm.es; (2) E-Business & Web Science Research Group, Universitaet der Bundeswehr, Muenchen, Germany, [email protected]; (3) Biomedical Informatics, Medical Center, University of Arkansas, USA, [email protected].

    Abstract. We present the data structures and algorithms used in the approach for building domain ontologies from folksonomies and linked data. In this approach we extract domain terms from folksonomies and enrich them with semantic information from the Linked Open Data cloud. As a result, we obtain a domain ontology that combines the emergent knowledge of social tagging systems with formal knowledge from ontologies.

    1 Introduction. In this report we present the formalization of the data structures and algorithms used in the approach for building ontologies from folksonomies and linked data. In this approach we use folksonomies to gather a domain terminology. First, in a term extraction activity, we represent folksonomies as a graph which is traversed using a spreading activation algorithm (see section 2.1). Next, in a semantic elicitation activity, we identify classes and relations among terms in the terminology, relying on linked data sets (see section 2.2). During this activity, terms are associated with semantic resources in DBpedia by means of a semantic grounding algorithm. Once terms are grounded to semantic entities, we attempt to identify which of those resources correspond to classes in the ontologies that we are using.
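    As a hedged illustration of the term extraction step, here is a generic spreading activation sketch over a tag co-occurrence graph. The report's actual data structures, decay and termination criteria live in its section 2.1 and may differ; the graph, seeds and parameters below are toy assumptions.

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.05, iterations=3):
    """graph: {node: {neighbor: edge_weight}}; seeds: {node: initial activation}."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(iterations):
        next_frontier = {}
        for node, act in frontier.items():
            out = act * decay
            if out < threshold:          # prune weak activation
                continue
            total = sum(graph.get(node, {}).values()) or 1.0
            for neighbor, weight in graph.get(node, {}).items():
                gain = out * weight / total   # spread proportionally to edge weight
                activation[neighbor] = activation.get(neighbor, 0.0) + gain
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + gain
        frontier = next_frontier
    return activation

tags = {"semantic web": {"rdf": 3, "ontology": 5},
        "ontology": {"owl": 2, "semantic web": 5}}
print(spread_activation(tags, {"semantic web": 1.0}))
```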
  • Bilingual Terminology Extraction in Sketch Engine
    Bilingual Terminology Extraction in Sketch Engine. Vít Baisa (1,2), Barbora Ulipová, Michal Cukr (1,2). (1) Natural Language Processing Centre, Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic, [email protected]; (2) Lexical Computing, Brighton, United Kingdom and Brno, Czech Republic, {vit.baisa,michal.cukr}@sketchengine.co.uk.

    Abstract. We present a method of bilingual terminology extraction from parallel corpora, plus a few heuristics and experiments for improving the performance of the basic variant of the method. An evaluation is given using a small gold standard manually prepared for the English-Czech language pair from the DGT translation memory [1]. The bilingual terminology extraction (ABTE) is available for several languages in Sketch Engine, the corpus management tool [2]. Keywords: terminology extraction, bilingual terminology extraction, Sketch Engine, logDice, parallel corpus.

    1 Introduction. Parallel corpora are valuable resources for machine and computer-assisted translation. Here we explore a possibility of extracting bilingual terminology from parallel corpora, combining monolingual terminology extraction [3] and co-occurrence statistics [4]. We describe the method and how it is incorporated in the corpus manager tool Sketch Engine. We experimented with parameter tuning and evaluated a few settings using a small gold standard for the English-Czech language pair. The following section is a brief survey of topics, methods and tools in ABTE. In Section 3 we describe the basic algorithm for the extraction and in Section 4 how it is integrated in Sketch Engine. In Sections 5 and 6 we evaluate the algorithm and its variants.

    2 Related work. Monolingual terminology extraction is a well-studied field, and the topic of ABTE has been explored since the 90s [5], but a recent summarizing publication [6] ... (ATE stands for "automatic terminology extraction"; the authors adopt the abbreviation and add "B" for "bilingual".)
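    The keywords name logDice as the co-occurrence statistic. As a hedged illustration, the standard logDice association score (Rychlý, 2008) is straightforward to compute; how Sketch Engine combines it with monolingual term extraction is not shown in this excerpt, and the counts below are toy values.

```python
from math import log2

def log_dice(f_xy, f_x, f_y):
    """logDice = 14 + log2(2*f_xy / (f_x + f_y)); maximum value is 14.
    f_xy: co-occurrence frequency of a source/target candidate pair
    (e.g. within aligned segments); f_x, f_y: their marginal frequencies."""
    return 14 + log2(2 * f_xy / (f_x + f_y)) if f_xy else float("-inf")

# Toy counts for an English-Czech candidate term pair across aligned segments.
print(round(log_dice(f_xy=40, f_x=60, f_y=55), 2))  # ~13.48, a strong association
```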