PROBABILISTIC SEMANTIC FRAMES

Jiří Materna

Ph.D. Thesis
Natural Language Processing Centre
Faculty of Informatics
Masaryk University
April 2014

Jiří Materna: Probabilistic Semantic Frames, © April 2014
Supervisor: doc. PhDr. Karel Pala, CSc.


ABSTRACT

During recent decades, semantic frames, sometimes called semantic or thematic grids, have become increasingly popular within the community of computational linguists. Semantic frames are conceptual structures capturing the semantic roles valid for a set of lexical units; alternatively, they can be seen as a formal representation of the prototypical situations evoked by lexical units. Linguists use them for their ability to describe the interface between syntax and semantics. In practical natural language processing applications, they can be used, for instance, for word sense disambiguation, for knowledge representation, or to resolve ambiguities in the syntactic analysis of natural languages.

Nowadays, lexicons of semantic frames are mainly created manually or semi-automatically by highly trained linguists. Manually created lexicons include, for example, the well-known lexicon of semantic frames FrameNet, the lexicon of verb valencies known as VerbNet, and the Czech valency lexicons Vallex and VerbaLex. These and other frame-based lexical resources have many promising applications, but they suffer from several disadvantages. Most importantly, their creation requires the manual work of trained linguists, which is very time-consuming and expensive. The coverage of such resources is therefore usually small or limited to specific domains.

To avoid these problems, this thesis proposes a method for creating semantic frames automatically. The basic idea is to generate a set of semantic frames and roles by maximizing the posterior probability of a probabilistic model on a syntactically parsed training corpus. A semantic role in the model is represented as a probability distribution over all its realizations in the corpus, and a semantic frame as a tuple of semantic roles, each of them connected with some grammatical relation. For every lexical unit from the corpus, a probability distribution over all semantic frames is generated, where the probability of a frame corresponds to the relative frequency of its usage in the corpus for the given lexical unit. The method of constructing semantic roles is similar to inferring latent topics in Latent Dirichlet Allocation; thus, the probabilistic semantic frames proposed in this work are called LDA-Frames.
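To make this representation concrete, the following minimal Python sketch shows one way the objects described above could be held in memory: a semantic role as a probability distribution over its lexical realizations, a semantic frame as a tuple of grammatical relations paired with roles, and a lexical unit as a distribution over frames, together with a toy generative step. This is an illustrative sketch only, not the thesis implementation; all frames, roles, and probability values are invented examples, and in the actual model the distributions are not fixed by hand but inferred from a syntactically parsed corpus, similarly to topic inference in Latent Dirichlet Allocation.

```python
import random

# Toy illustration of the LDA-Frames representation (invented values).

# A semantic role: a probability distribution over its lexical
# realizations observed in the corpus.
roles = {
    "PERSON": {"man": 0.4, "woman": 0.35, "child": 0.25},
    "FOOD":   {"apple": 0.5, "bread": 0.3, "soup": 0.2},
}

# A semantic frame: a tuple of semantic roles, each connected with a
# grammatical relation (here: subject and object of a verb).
frames = {
    0: (("subject", "PERSON"), ("object", "FOOD")),
    1: (("subject", "PERSON"), ("object", "PERSON")),
}

# Every lexical unit has a probability distribution over frames,
# corresponding to the relative frequency of each frame's usage.
lexical_units = {
    "eat":  {0: 0.9, 1: 0.1},
    "meet": {0: 0.05, 1: 0.95},
}

def sample(distribution):
    """Draw one item from an {item: probability} dictionary."""
    items, weights = zip(*distribution.items())
    return random.choices(items, weights=weights, k=1)[0]

def generate_realization(lexical_unit):
    """Generate one tuple of (grammatical relation, realization) pairs
    for a lexical unit, mimicking the generative story of the model."""
    frame_id = sample(lexical_units[lexical_unit])
    return [(relation, sample(roles[role]))
            for relation, role in frames[frame_id]]

if __name__ == "__main__":
    print("eat  ->", generate_realization("eat"))
    print("meet ->", generate_realization("meet"))
```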
The most straightforward application is their use as a corpus-driven lexicon of semantic patterns that can be exploited by linguists for language analysis, or by lexicographers for creating lexicons. The appropriateness of using LDA-Frames for these purposes is shown in a comparison with a manually created verb pattern lexicon from the Corpus Pattern Analysis project. The second application, which demonstrates the power of LDA-Frames in this thesis, is the measurement of semantic similarity between lexical units. It will be shown that a thesaurus built using the information from the probabilistic semantic frames outperforms a competing approach based on the Sketch Engine.

This thesis is structured into two parts. The first part introduces the basic concepts of lexical semantics and explores the state of the art in the field of semantic frames in computational linguistics. The second part is dedicated to describing all the necessary theoretical background, as well as the idea of LDA-Frames with its detailed description and evaluation. The basic model is provided along with several extensions, namely a non-parametric version and a parallelized computational model. The last sections of the thesis illustrate the usability of LDA-Frames in practical natural language processing applications.


PUBLICATIONS

Some ideas and figures from this thesis have previously appeared in the following publications:

• Jiří Materna. Parameter Estimation for LDA-Frames. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2013), pages 482–486, Atlanta, Georgia, June 2013. Association for Computational Linguistics.
• Jiří Materna. LDA-Frames: An Unsupervised Approach to Generating Semantic Frames. In Alexander Gelbukh, editor, Proceedings of the 13th International Conference CICLing 2012, Part I, volume 7181 of Lecture Notes in Computer Science, pages 376–387. Springer Berlin / Heidelberg, 2012.
• Jiří Materna. Building a Thesaurus Using LDA-Frames. In Pavel Rychlý and Aleš Horák, editors, Recent Advances in Slavonic Natural Language Processing, pages 97–103. Tribun EU, 2012.
• Jiří Materna and Karel Pala. Using Ontologies for Semi-automatic Linking VerbaLex with FrameNet. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta, 2010. (Contribution: 80 %)
• Jiří Materna. Linking VerbaLex with FrameNet: Case Study for the Indicate Verb Class. In Petr Sojka and Aleš Horák, editors, Recent Advances in Slavonic Natural Language Processing, pages 89–94. Tribun EU, 2010.
• Jiří Materna. Linking Czech Verb Valency Lexicon VerbaLex with FrameNet. In Proceedings of the 4th Language and Technology Conference, pages 215–219, Poznań, 2009.
• Jiří Materna. Czech Verbs in FrameNet Semantics. In Czech in Formal Grammar, pages 131–139, Lincom, München, 2009.

Other publications by the author include:

• Jiří Materna. Determination of the Topic Consistency of Documents. In Radim Jiroušek and Jiří Jelínek, editors, Znalosti 2011, pages 148–158. Fakulta elektrotechniky a informatiky, VŠB – Technická univerzita Ostrava, 2011.
• Jiří Materna and Juraj Hreško. A Bayesian Approach to Query Language Identification. In Pavel Rychlý and Aleš Horák, editors, Recent Advances in Slavonic Natural Language Processing, pages 111–116. Tribun EU, 2011. (Contribution: 80 %)
• Jiří Materna. Domain Collocation Identification. In Recent Advances in Slavonic Natural Language Processing, Faculty of Informatics, Masaryk University, Brno, 2009.
• Lubomír Popelínský and Jiří Materna. On Key Words and Key Patterns in Shakespeare’s Plays. In Znalosti 2009, Nakladatelstvo STU, Bratislava, 2009. (Contribution: 20 %)
• Jiří Materna. Automatic Web Page Classification. In Recent Advances in Slavonic Natural Language Processing, Faculty of Informatics, Masaryk University, Brno, 2008.
• Lubomír Popelínský and Jiří Materna. Keyness and Relational Learning. In International Conference Keyness in Text, University of Siena, Siena, 2007. (Contribution: 30 %)
• Jiří Materna. Keyness in Shakespeare’s Plays. In Recent Advances in Slavonic Natural Language Processing, Faculty of Informatics, Masaryk University, Brno, 2007.
ACKNOWLEDGMENTS

First of all, I would like to thank my supervisor Karel Pala for his support and encouragement on one hand, and his understanding for my duties at the Seznam.cz company on the other. Many thanks go to my employer Seznam.cz and all my colleagues for the opportunity to be part of a mutually enriching environment, where research and applications together result in amazing technological products. I would also like to thank my colleagues from the NLP lab for valuable discussions of the issues of probabilistic semantic frames over the last years. I am very grateful to Tomáš Brázdil and Vladimír Kadlec, who provided me with numerous remarks on the draft of this thesis, to Steven Schwartz for his comments regarding my English, and to André Miede for his nice LaTeX template. Finally, I would like to thank my parents and my partner Eliška Mikmeková for their love and support.


CONTENTS

i Semantic Frames
1 Introduction
  1.1 Lexical Semantics
    1.1.1 Homonymy and Polysemy
  1.2 Semantic Relations
    1.2.1 Synonymy
    1.2.2 Hyponymy/Hypernymy
    1.2.3 Meronymy/Holonymy
    1.2.4 Antonymy
  1.3 Ontologies and Frames
    1.3.1 Ontologies
    1.3.2 WordNet
    1.3.3 Frames
  1.4 Probabilistic Frames
    1.4.1 Practical Semantic Frames
    1.4.2 Related Work
  1.5 Structure of the Thesis
2 Semantic Frames for Natural Language Processing
  2.1 Semantic Roles
  2.2 Connection with Syntax
  2.3 Verb Valencies and Semantic Frames
    2.3.1 Obligatory vs. Facultative Valencies
    2.3.2 Verb Classes
    2.3.3 Verb-specific vs. General Semantic Roles
    2.3.4 Relations Between Verb Valencies and Semantic Frames
  2.4 Motivation for Identifying Semantic Frames
3 Existing Frame-Based Lexical Resources
  3.1 PropBank
  3.2 VerbNet
  3.3 The Berkeley FrameNet
    3.3.1 Semantic Frames
    3.3.2 Frame Elements
    3.3.3 FrameNet Relations
    3.3.4 Semantic Types
  3.4 FrameNets in Other Languages
    3.4.1 SALSA
    3.4.2 Spanish FrameNet
  3.5 BRIEF
  3.6 Vallex
  3.7 VerbaLex
ii Probabilistic Semantic Frames
4 Probabilistic Graphical Models
  4.1 Bayesian Networks
    4.1.1 Model Representation
  4.2 Generative vs. Discriminative Models
  4.3 Statistical Inference in Bayesian Networks
    4.3.1 Variational Inference
    4.3.2 Sampling
5 Selected Probability Distributions
  5.1 The Binomial and the Multinomial Distributions
  5.2 The Beta and the Dirichlet Distributions
  5.3 The Gamma Distribution
6 Topic Models
  6.1 Latent Semantic Analysis
  6.2 Probabilistic Latent Semantic Analysis
  6.3 Latent Dirichlet Allocation
7 LDA-Frames
  7.1 Model Description
  7.2 The Gibbs Sampler for LDA-Frames
    7.2.1 Sampling Frames
    7.2.2 Sampling Roles
8 Non-Parametric LDA-Frames
  8.1 Dirichlet Process
  8.2 Chinese Restaurant Process and Stick-Breaking Process
  8.3 Estimating the Number of Roles
  8.4 Estimating the Number of Frames
    8.4.1 Hierarchical Dirichlet Process
    8.4.2 Fully Non-Parametric Model
    8.4.3 Sampling an Unbounded Number of Frames
  8.5 Gibbs Sampler for Non-Parametric LDA-Frames
9 Hyperparameter Estimation
  9.1 Hyperparameter Estimation Method for Parametric LDA-Frames
  9.2 Hyperparameter Estimation Method for Non-Parametric LDA-Frames
10 Distributed Inference Algorithms
  10.1 Parallel LDA-Frames