Using a Weighted Semantic Network for Lexical Semantic Relatedness

Reda Siblini                          Leila Kosseim
Concordia University                  Concordia University
1400 de Maisonneuve Blvd. West        1400 de Maisonneuve Blvd. West
Montreal, Quebec, Canada, H3G 1M8     Montreal, Quebec, Canada, H3G 1M8
[email protected]            [email protected]

Proceedings of Recent Advances in Natural Language Processing, pages 610-618, Hissar, Bulgaria, 7-13 September 2013.

Abstract

The measurement of semantic relatedness between two words is an important metric for many natural language processing applications. In this paper, we present a novel approach for measuring semantic relatedness that is based on a weighted semantic network. This approach explores the use of a lexicon, semantic relation types as weights, and word definitions as a basis to calculate semantic relatedness. Our results show that our approach outperforms many lexicon-based methods to semantic relatedness, especially on the TOEFL synonym test, achieving an accuracy of 91.25%.

1 Introduction

Lexical semantic relatedness is a measurement of how two words are related in meaning. Many natural language processing applications such as textual entailment, question answering, or information retrieval require a robust measurement of lexical semantic relatedness. Current approaches to this problem can be categorized into three main categories: those that rely on a lexicon and its structure, those that use the distributional hypothesis on a large corpus, and hybrid approaches.

In this paper, we propose a new lexicon-based approach to measure semantic relatedness that is based on a weighted semantic network that includes all 26 semantic relations found in WordNet, in addition to information found in the glosses.

2 Related Work

Approaches to computing semantic relatedness can be classified into three broad categories: lexicon-based, corpus-based, and hybrid approaches.

Lexicon-based methods use the features of a lexicon to measure semantic relatedness. The most frequently used lexicon is Princeton's WordNet (Fellbaum, 1998), which groups words into synonym sets (called synsets) and includes various semantic relations between those synsets, in addition to their definitions (or glosses). WordNet contains 26 semantic relations, including hypernymy, hyponymy, meronymy, and entailment. To measure relatedness, most lexicon-based approaches rely on the structure of the lexicon, such as the semantic link path, depth (Leacock and Chodorow, 1998; Wu and Palmer, 1994), direction (Hirst and St-Onge, 1998), or type (Tsatsaronis et al., 2010). Most of these approaches exploit the hypernym/hyponym relations, but a few have also included the use of other semantic relations. Leacock and Chodorow (1998), for example, computed semantic relatedness as the length of the shortest path between synsets over the depth of the taxonomy. Wu and Palmer (1994) also used the hyponym tree to calculate relatedness, using the depth of the words in the taxonomy and the depth of the least common superconcept of the two words. Hirst and St-Onge (1998), on the other hand, used the lexical chains between words based on their synsets and the semantic edges that connect them. In addition to using the hypernym relations, they classified the relations into classes: "extra strong" for identical words, "strong" for synonyms, "medium strong" for when there is a path between the two, and "not related" for no paths at all. The semantic measurement is then based on the path length and the path direction changes. Tsatsaronis et al. (2010) used a combination of semantic path length, node depth in the hierarchy, and the types of the semantic edges that compose the path.

Corpus-based approaches, on the other hand, rely mainly on distributional properties of words learned from a large corpus to compute semantic relatedness, such as the work of Finkelstein et al. (2001), which used Latent Semantic Analysis, and the work of Strube and Ponzetto (2006) and Gabrilovich and Markovitch (2007), which both used the distributional hypothesis on Wikipedia.

Finally, hybrid approaches use a combination of corpus-based and lexicon-based methods. For example, the approach proposed by Hughes and Ramage (2007) used a random walk method over a lexicon-based semantic graph supplemented with corpus-based probabilities. Another example is the work of Agirre et al. (2009), which used a supervised machine learning approach to combine three methods: a WordNet-based similarity, a bag-of-words based similarity, and a context window based similarity.

The approach presented in this paper belongs to the lexicon-based category. However, as opposed to the typical lexicon-based approaches described above, our approach uses all 26 semantic relations found in WordNet in addition to information found in glosses. The novelty of this approach is that these relations are used to create an explicit semantic network, where the edges of the network representing the semantic relations are weighted according to the type of the semantic relation. The semantic relatedness is computed as the lowest-cost path between a pair of words in the network.

3 Our Approach

Our method to measure semantic relatedness is based on the idea that the types of relations that relate two concepts are a suitable indicator of the semantic relatedness between the two. The types of relations considered include not only the hyponym/hypernym relations but also all other available semantic relations found in WordNet, in addition to word definitions.

3.1 WordNet's Semantic Network

To implement our idea, we created a weighted and directed semantic network based on the content of WordNet. To build the semantic network, we used WordNet 3.1's words and synsets as the nodes of the network. Each word is connected by an edge to its synsets, and each synset is in turn connected to other synsets based on the semantic relations included in WordNet. In addition, each synset is connected to the content words contained in its gloss. For example, Figure 1 shows part of the semantic network created around the word car. In this graph, single-line ovals represent words, while double-line ovals represent synsets.

[Figure 1: Example of the semantic network around the word car.]

By mining WordNet entirely, we created a network of 265,269 nodes connected through a total of 1,919,329 edges. The nodes include all words and synsets, and the edges correspond to all 26 semantic relations in WordNet, in addition to the relation between a synset and every content word of its definition.

3.2 Semantic Classes of Relations

To compute the semantic relatedness between nodes in the semantic network, it is necessary to take into consideration the semantic relation involved between two nodes. Indeed, WordNet's 26 semantic relations do not contribute equally to the semantic relatedness between words. The hypernym relation (relation #2), for example, is a good indicator of semantic relatedness, while the relation member of this domain - topic (relation #15) is less significant. This can be seen in Figure 1, for example, where the word car is more closely related to Motor vehicle than to Renting.

In order to determine the contribution of each relation, we compared a manually created set of 210 semantic relations for their degree of relatedness. For example, for the concept car we compared the sense of automobile with the hypernym motor vehicle, the gloss word wheel, the part meronym air bag, the member of this topic renting, and another sense of car such as a cable car. This comparison has led us to classify the relations into seven categories, and to rank these categories from the most related to the least related as follows: Similar (highest contribution), Hypernym, Sense, Gloss, Part, Instance, and Other (lowest contribution). By classifying WordNet's relations into these classes, we are able to weight the contribution of a relation based on the class it belongs to, as opposed to assigning a contributory weight to each relation. For example, all relations of type Similar will contribute equally to the semantic relatedness of words, and will contribute more than any relation of the class Hypernym.

Category   Weight       Semantic Relations in WordNet
Similar    α            antonym, cause, entailment, participle of verb, pertainym, similar to, verb group
Hypernym   2 × α        derivationally related, instance hypernym, hypernym
Sense      4 × α + β    lemma-synset
Gloss      6 × α        lemma-gloss content words
Part       8 × α        holonym (part, member, substance), inverse gloss, meronym (part, member, substance)
Instance   10 × α       instance hyponym, hyponym
Other      12 × α       also see, attribute, domain of synset (topic, region, usage), member of this domain (topic, region, usage)

Table 1: Relation Categories and Corresponding Weights.

The Similar class of relations includes the relations that are the most useful to compute semantic relatedness as per our manual corpus analysis, and that are the rarest available relations in the semantic network; it was hence assigned the lowest weight of all categories of relations: α.

The second category of semantic relations is Hypernym, which includes WordNet's relations hypernym, instance hypernym, and derivationally related. Being less important than the Similar relations to compute relatedness, as shown in Table 1, the Hypernym category was assigned a weight of (2 × α).

The Sense category represents the relationship between a word and its synset. Because a word can belong to several synsets, in order to favor the most frequent senses as opposed to the infrequent ones, the weight of this category is modulated by a factor β. Specifically, we use (4 × α + β), where β is computed as the ratio of the frequency of the sense number in WordNet over the maximum number of senses for that word.

The fourth category of semantic relations is the
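The weighting scheme of Table 1 can be sketched as a small lookup function. The following is an illustrative reconstruction, not the authors' implementation: the category names mirror the seven classes of Table 1, alpha is a free parameter, and the beta term for the Sense category follows one possible reading of the paper's description (the sense number over the maximum number of senses of the word); the function and parameter names are hypothetical.

```python
# Sketch of the edge-weighting scheme of Table 1 (illustrative, with
# hypothetical names). Each relation category maps to a multiple of alpha.
CATEGORY_FACTOR = {
    "Similar": 1,    # alpha
    "Hypernym": 2,   # 2 x alpha
    "Sense": 4,      # 4 x alpha + beta (beta added below)
    "Gloss": 6,      # 6 x alpha
    "Part": 8,       # 8 x alpha
    "Instance": 10,  # 10 x alpha
    "Other": 12,     # 12 x alpha
}

def edge_weight(category, sense_number=None, max_senses=None, alpha=1.0):
    """Cost of an edge in the semantic network, by relation category.

    For the Sense category, beta is read here as the ratio of the sense
    number to the maximum number of senses of the word -- one possible
    interpretation of the paper's description of beta.
    """
    weight = CATEGORY_FACTOR[category] * alpha
    if category == "Sense":
        weight += sense_number / max_senses  # the beta term
    return weight
```

Because relatedness is a lowest-cost path, smaller weights (e.g. Similar, Hypernym) make a relation cheaper to traverse and thus count as stronger evidence of relatedness.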
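Given such weights, the relatedness measure described above (the lowest-cost path between two word nodes) can be sketched over a toy fragment of the network around car. The graph below is hand-built for illustration with invented edge costs loosely following Table 1; it is not the real 265,269-node network, and Dijkstra's algorithm is used here as one standard way to find a lowest-cost path, not necessarily the authors' exact method.

```python
import heapq

# Hand-built toy fragment of the weighted network around "car": word
# nodes and synset nodes (suffixed like "car.n.01") with invented costs.
GRAPH = {
    "car":                {"car.n.01": 4.1, "cable_car.n.01": 4.9},
    "car.n.01":           {"car": 4.1, "motor_vehicle.n.01": 2.0,
                           "renting.n.01": 12.0},
    "motor_vehicle.n.01": {"car.n.01": 2.0, "motor_vehicle": 4.1},
    "motor_vehicle":      {"motor_vehicle.n.01": 4.1},
    "renting.n.01":       {"car.n.01": 12.0, "renting": 4.1},
    "renting":            {"renting.n.01": 4.1},
    "cable_car.n.01":     {"car": 4.9},
}

def lowest_cost(source, target, graph=GRAPH):
    """Dijkstra's algorithm: cost of the cheapest path between two nodes.

    A lower cost means the two words are more closely related."""
    queue = [(0.0, source)]
    best = {source: 0.0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == target:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return float("inf")  # no path: the words are unrelated
```

On this toy graph, lowest_cost("car", "motor_vehicle") is smaller than lowest_cost("car", "renting"), matching the intuition from Figure 1 that car is more closely related to motor vehicle than to renting.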