
Jesper Segeblad [email protected] Semantic analysis in language technology HT 13

WordNet – Structure and use in natural language processing

Abstract

There are several electronic dictionaries, thesauri, lexical databases, and similar resources today. WordNet is one of the largest and most widely used of these. It has been used for many natural language processing tasks, including word sense disambiguation and question answering. However, in these times, when machine learning is the most prevalent method for solving problems in computational linguistics, and when more specialized resources for lexical knowledge are being built, can WordNet still play a role in the development of systems for the analysis of natural language?

1. WordNet as a lexical database

1.1 Background

Before the 1990s, most dictionaries of English existed only in paper form. The dictionaries that were available in electronic form were limited to a few groups of researchers. This hindered work in certain areas of computational linguistics, for example word sense disambiguation (WSD).

In 1993, WordNet was introduced. It is a lexical database, organized as a semantic network. Development began in 1985 at Princeton University by a group of psychologists and linguists, and the university is still the maintainer of the database. Even though it was not created with the intention of serving as a knowledge source for tasks in computational linguistics, it has been used as such. It has been widely used as a knowledge source for different tasks, has been ported to several different languages, and has spawned many different subsets. One task for which it has been widely used is the previously mentioned WSD.

1.2 Structure

WordNet consists of four separate databases: one for nouns, one for verbs, one for adjectives and one for adverbs. It does not include closed-class words. The current version available for download is WordNet 3.0, which was released in December 2006. It contains 117,097 nouns, 22,141 adjectives, 11,488 verbs and 4,601 adverbs.[2] There is a later release, 3.1, which is available for online usage only.

The basic structural unit is the synset. Synsets are sets of synonyms, or more correctly near-synonyms, since there are few if any true synonyms. A synset contains a set of lemmas, and these sets are tagged with the sense they represent. These senses can be said to be concepts: all of the lemmas (or words) in a synset can be said to express the same concept. Word forms that have different meanings appear in different synsets. For example, the noun bank has 10 different senses in WordNet, and thus it appears in 10 different synsets. It also appears as a verb in 8 different synsets. Each of these synsets is also connected in some way to other synsets, expressing some kind of relation. Which these relations are depends on the part of speech of the word itself, although the hypernym/hyponym relationship (the is-a relation) is the most common, and appears for both nouns and verbs (hyponyms of verbs are known as troponyms, and the difference between them that leads to the different names will be expanded on below).

One thing that WordNet does not take into account is pronunciation, which can be observed by looking at the noun bass. The pronunciation differs depending on whether one is talking about bass in the sense of the low tone or the instrument, or talking about the fish bass.

1.2.1 Nouns

Nouns have the richest set of relations of all parts of speech represented in WordNet, with 12 different relations. As previously stated, the hyponym/hypernym relation is the most frequently used one. For example, if we look at the noun bass again (which has 8 different senses), now in the context of sea bass: it is a saltwater fish, which is a kind of seafood, which is a kind of solid food, and so on. These relations are also transitive, which means that sea bass is a type of food, as much as it is a type of saltwater fish.

Sense 4: sea bass, bass
=> saltwater fish
=> seafood
=> food, solid food
=> solid
=> matter
=> physical entity
=> entity

Table 1: Hypernyms of bass in the sense of sea bass.
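The transitive chain in table 1 can be sketched programmatically. The following is a minimal illustration using a hand-coded hypernym graph; the synset names are simplified stand-ins, not WordNet's actual identifiers:

```python
# Toy hypernym graph: each synset points to its direct hypernym.
# The names are illustrative, not real WordNet sense identifiers.
hypernym_of = {
    "sea_bass": "saltwater_fish",
    "saltwater_fish": "seafood",
    "seafood": "food",
    "food": "solid",
    "solid": "matter",
    "matter": "physical_entity",
    "physical_entity": "entity",
}

def hypernym_chain(synset):
    """Follow hypernym pointers transitively up to the root."""
    chain = [synset]
    while synset in hypernym_of:
        synset = hypernym_of[synset]
        chain.append(synset)
    return chain

def is_a(synset, candidate):
    """Transitivity: sea_bass is-a food, as much as it is-a saltwater_fish."""
    return candidate in hypernym_chain(synset)[1:]

print(hypernym_chain("sea_bass"))
print(is_a("sea_bass", "food"))   # True, by transitivity
```

Because the is-a relation is transitive, walking the chain to the root answers any "is X a kind of Y?" question along the way.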

WordNet also separates hyponyms into types and instances. A chair is a type of furniture. Hesse, however, is not a type of author, but an instance of author. So an instance is a specific form of hyponym, and these instances are usually proper nouns describing a unique entity, such as persons, cities and companies. These instance relations go in both directions, just like the type relations.

Meronymy, the part-of relationship, is divided into three different types: member meronymy, part meronymy and substance meronymy. It also has its counterpart, just like hyponymy: holonymy. Where meronymy is has-part, holonymy is part-of. And just like hyponymy, meronymy is a transitive relationship. If a tree has branches, and a branch has leaves, the tree has leaves.

Part meronymy, which is the relationship most commonly associated with the word, describes parts of an entity.

Substance meronymy describes substances contained in an entity. For example, using the word water in the sense of the chemical substance H2O, it has the substances hydrogen and oxygen.

The last subset of meronymy, member meronymy, describes the relationship of belonging to a larger group. Looking at the word tree again, we can see that it is a member of the entities forest, wood and woods. See table 2 for a description of the different types of meronymy.

Part meronymy: Sense 1 tree
HAS PART: stump, tree stump
HAS PART: crown, treetop
HAS PART: limb, tree branch
HAS PART: trunk, tree trunk, bole
HAS PART: burl

Substance meronymy: Sense 1 water, H2O
HAS SUBSTANCE: hydrogen, H, atomic number 1
HAS SUBSTANCE: oxygen, O, atomic number 8

Member meronymy: Sense 1 forest, wood, woods
HAS MEMBER: underbrush, undergrowth, underwood
HAS MEMBER: tree

Table 2: Different types of meronymy used in WordNet.

Antonyms describe words that are semantically opposed. If you are dead, you cannot be alive. However, not all antonyms have to rule out one another. Even though poor and rich are antonyms, saying that someone is not rich does not automatically mean that they are poor.
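The transitivity of meronymy described above (a tree has branches, a branch has leaves, so a tree has leaves) amounts to taking the transitive closure over has-part pointers. A minimal sketch, with hand-coded illustrative data rather than WordNet's actual pointers:

```python
# Toy has-part pointers, hand-coded for illustration.
has_part = {
    "tree": {"trunk", "limb", "crown", "stump", "burl"},
    "limb": {"leaf", "twig"},
    "twig": {"leaf"},
}

def all_parts(entity):
    """Collect parts transitively: if a tree has limbs and a limb
    has leaves, then the tree also has leaves."""
    parts, stack = set(), [entity]
    while stack:
        for part in has_part.get(stack.pop(), ()):
            if part not in parts:
                parts.add(part)
                stack.append(part)
    return parts

print(sorted(all_parts("tree")))  # includes "leaf" via "limb"
```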

1.2.2 Verbs

Verbs, just like nouns, have the hypernym relationship. Where the counterpart of hypernyms in the case of nouns is called hyponyms, this relationship among verbs is called troponyms. These point from an event to a superordinate event, and from an event to a subordinate event, respectively. Troponyms can also be described as the manner in which something is done, therefore explaining the difference in names (for example, sneak is a troponym of walk). Antonymy also exists for verbs, and functions in the same way: stop is an antonym of start.

The third relation, entailment, points from an event to an event it entails. Entailment is used in pragmatics to describe a relationship between sentences, where the truth of one sentence depends on the truth of the other: if sentence A is true, then sentence B also has to be true. For example, consider “The criminal was sentenced to death” (A) and “The criminal is dead” (B). If A is true, then B also has to be true. This is the kind of relationship described by entailment in WordNet. If you snore, you are also sleeping, which is represented as an entailment relation between the two words, and thus there is an entailment pointer from snore to sleep.

1.2.3 Adjectives and adverbs

Adjectives are mostly organized in terms of antonymy. As in the case of nouns and verbs, these are words whose meanings are semantically opposed. Like all words in WordNet, adjectives are also part of a synset. The other adjectives in a particular synset also have their antonyms, and thus the antonyms of one member become indirect antonyms of its synonyms.

Pertainymy is a relation that points from an adjective to the noun it was derived from. This is one of the relations that cross part of speech, though there are a few rare cases in which it points to another adjective.

The number of adverbs is quite small. This is because most adverbs in English are derived from adjectives. Those that do exist are organized mostly in the same way as adjectives, with antonyms. They also have a relationship similar to the pertainym relation of adjectives, which likewise is a cross-part-of-speech pointer and points to the adjective that they were derived from.

1.2.4 Relations across part of speech

Most of the relations in WordNet hold among words of the same part of speech. There are, however, some pointers across the parts of speech it covers. One has already been mentioned: pertainyms, which point from an adjective to the noun it was derived from. Other than that, there are pointers to semantically similar words that share the same stem, called derivationally related forms. For many of these pairs of nouns and verbs, the thematic role is also described: the verb kill has a pointer to the noun killer, and killer would be the agent of kill.

2. Other knowledge resources

As previously stated, WordNet has been translated into other languages, and has spawned subsets too, such as VerbNet. But it is not the only resource for semantic information that can be used for NLP. On one hand, there are the more traditional dictionaries in electronic form, such as Roget's International Thesaurus; on the other, there are resources such as FrameNet[12], PropBank[13] and ConceptNet[11] that contain different types of semantic information than WordNet, but can sometimes be used for the same tasks.

FrameNet is a project started at Berkeley, with the goal of building a lexical database for English that is both machine- and human-readable. It is based on how words are used in real-world text. It consists of around 10,000 word senses and around 170,000 manually annotated sentences. The sentences themselves can prove very helpful for many NLP tasks, such as semantic role labeling, and give large amounts of data to train and test on. Where the basic structure and idea behind WordNet is the synset, FrameNet's equivalent is the frame. This stems from the theory of Frame Semantics, in which the meaning of most words can be understood through a semantic frame: a description of some kind of event, relation, or entity and its participants. Thus, words in FrameNet are organized into these frames, describing the event in question.

PropBank has some similarities with FrameNet, mostly because both are focused on sentences rather than only words. These sentences are tagged with semantic roles, though not in the traditional sense, but rather with proto-roles. Since it is hard to agree on which semantic roles should be used, proto-roles provide a more abstract set of roles, more specific to each verb. In PropBank, the arguments of verbs are labeled Arg0, Arg1, Arg2 and so on, where Arg0 is a proto-agent and Arg1 a proto-patient. The other arguments carry different roles depending on the verb.

ConceptNet is a semantic network, similar to WordNet, containing information on concepts. It is an attempt at building a network containing real-world knowledge. For example, when looking up coffee, one can learn that it is served hot, that it is likely to be found at a coffee shop and at meetings, that cafe is related to coffee, that having something to brew it in is a prerequisite, and so on. This is somewhat different from the information contained in WordNet, being more focused on the understanding of concepts rather than just listing senses and definitions.

3. Using WordNet for Natural Language Processing

There are several subfields of natural language processing that can benefit from a large lexical database, especially one as big and extensive as WordNet. Obviously, many semantic applications can draw benefits from using WordNet, including WSD, question answering and sentiment analysis. Many papers have been published regarding WordNet and WSD, exploring different approaches and algorithms; this is the main field in which it is used. In fact, WordNet can be said to be the de facto standard knowledge source for WSD in English.[4] This success depends on several factors: it is not domain-specific, it is very extensive, and it is publicly available.

Since WSD is the subfield that has used WordNet most extensively, this is what will be focused on here. Some other fields in which it can be, and has been, used will also be brought up, somewhat more briefly. It is also worth mentioning that there exist packages for accessing WordNet in several programming languages, including Python. For Python, the Natural Language Toolkit (NLTK), which offers many modules and tools to analyze and process natural language and is widely used, has tools for using WordNet, such as finding synsets and other relations between words.

WordNet can be used to find the semantic similarity between words, using the hierarchical structure of the hyponym/hypernym and meronym/holonym relations. These relations can be understood as a tree, and finding the similarity between two words is done by counting the steps that have to be taken from one word to the other. Several different algorithms for this have been proposed. This can be useful for e.g. information retrieval, and research has been done to see whether this can improve such systems.[8]
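The step-counting idea can be sketched as follows. The graph fragment is hand-coded for illustration, and the score 1 / (path length + 1) mirrors the general shape of path-based measures rather than any specific published algorithm:

```python
from collections import deque

# Hand-coded fragment of a hypernym hierarchy, treated as an
# undirected graph so we can count edges between any two words.
edges = [
    ("dog", "canine"), ("wolf", "canine"),
    ("canine", "carnivore"), ("cat", "feline"),
    ("feline", "carnivore"), ("carnivore", "mammal"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_path(a, b):
    """Breadth-first search for the number of edges between two nodes."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # not connected

def path_similarity(a, b):
    """Fewer steps in the hierarchy means a higher similarity score."""
    d = shortest_path(a, b)
    return None if d is None else 1 / (d + 1)

print(path_similarity("dog", "wolf"))  # dog and wolf share a parent
print(path_similarity("dog", "cat"))   # dog and cat are further apart
```

In real systems the counting is done over WordNet's actual hierarchy, and refinements (such as weighting by depth) distinguish the various published measures.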

Question answering is another application that can benefit from WordNet, due to the usefulness of a semantic analysis of the question asked. Much of the information in a sentence uttered by a human is not explicit but implicit. This can be understood, for example, by looking at the example of entailment in the description of verb relations. Research has been done, with help from the WordNet authors, to extend WordNet to support question answering.[9]

Work has also been done in which WordNet is proposed as a source of selectional restrictions during semantic role labeling.[10]

3.1 WordNet for Word Sense Disambiguation

WSD is a field that has been around since humans first tried to process natural language with computers. It has been described as an AI-complete problem and is considered an intermediate step in many NLP tasks. It is the task of retrieving the right sense of an ambiguous word in a given context. Finer-grained disambiguation is a problem where much work remains to be done.

The two main approaches to solving this problem are knowledge-based methods and supervised methods. Supervised methods utilize machine learning to train a classifier on already annotated examples. These methods suffer from sparseness of training data, in contrast to, for example, syntactic tasks, for which many resources of tagged data exist. SemCor is a subset of the Brown Corpus, tagged with senses from WordNet. 186 of the 500 files that constitute the Brown Corpus have tags for all of the content words (nouns, verbs, adjectives and adverbs), and another 166 files have tags for the verbs. Even if this may be sufficient for evaluation, it is not enough for building a robust system for WSD.[3] The knowledge-based methods use some kind of knowledge source, such as WordNet, to retrieve word senses. It is in these methods that WordNet has been used extensively.
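A classic knowledge-based method of this kind can be illustrated with a simplified Lesk-style sketch: choose the sense whose dictionary gloss shares the most words with the surrounding context. The glosses below are hand-written stand-ins, not WordNet's actual definitions:

```python
# Hand-written stand-in glosses for two senses of "bass";
# these are not WordNet's actual definitions.
senses = {
    "bass.n.instrument": "the member of the violin family with the lowest range",
    "bass.n.fish": "a saltwater fish caught for food",
}

def simplified_lesk(context):
    """Pick the sense whose gloss overlaps most with the context words."""
    context_words = set(context.lower().split())

    def overlap(sense):
        return len(context_words & set(senses[sense].lower().split()))

    return max(senses, key=overlap)

print(simplified_lesk("he caught a huge bass while fishing for food"))
# -> bass.n.fish (the gloss shares "caught", "for", "food" with the context)
```

Real implementations add stop-word filtering, stemming, and often extend the glosses with those of related synsets, but the principle of scoring sense glosses against the context is the same.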

WordNet keeps occurring in papers regarding WSD to this date. Even though it is hard to build robust systems using supervised methods, the knowledge-based ones do not perform better. Due to this, approaches to extend the knowledge contained in WordNet have been proposed. They range from semantically tagging the glosses in WordNet to enrich the semantic relations, to extracting knowledge from Wikipedia. Combining WordNet with ConceptNet to improve performance has also been proposed.[5]

4. Discussion

WordNet is an impressive database, with its large number of words and its encoding of relations. Being freely available also makes it very practical to use for natural language processing, just as it has been. However, there are quite a few things that may speak against it. The very fine-grained sense distinctions in the database can be problematic for several tasks. Difficulty, for example, has four different senses in WordNet, all of them very similar, and they can be hard to tell apart, not just for computers but also for humans. As such, not all sense distinctions may be relevant when disambiguating a word. Other problems may be that it was mainly annotated and tagged by humans, which may produce some inconsistencies, and that it was not produced specifically to solve NLP tasks.

The other resources brought up earlier do differ from WordNet in several ways. They encode different types of semantic relations, in different ways, and for different purposes. None of them were made to compete with WordNet, but to make more semantic knowledge available for different purposes. ConceptNet, for example, has richer world knowledge than WordNet. PropBank may not have the same type of encodings as WordNet, but is more specific to a certain kind of task, namely semantic role labeling. These resources are a bit more specialized than WordNet, and might not be suitable for everything that WordNet has been used for.

WordNet is still widely used by people working in semantic natural language processing, as can be understood when reading papers, especially regarding WSD. This can be seen in recent research, where WordNet has not been abandoned, but has instead been used in combination with other resources, or attempts have been made to improve it in different ways. And since WordNet 3.0, it also contains a corpus of semantically annotated, disambiguated glosses, which can itself prove useful.[8] WordNet will be used for WSD for some time to come, mostly because of the sparseness of data for supervised methods. Improving the lexical knowledge and the algorithms that use it may be the best way forward for the time being.

5. Conclusions

WordNet has a long tradition in natural language processing, and it still has purposes to serve. Even if WordNet was not created to provide a sense inventory for NLP, its creators have shown great interest in this use of it. Even if it will not be used directly in applications on its own, the sense inventory and the extensive relations hold value for the NLP community, as seen in WordNet senses being used for the WSD task in SemEval. Also, combinations of resources, of which WordNet can be one, might prove useful for many different NLP tasks. So even though there exist other databases annotated with some kind of semantic information, WordNet still has purposes to serve, and for the time being it is a good option as a knowledge resource for tasks where one is needed.

Bibliography

[1] George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross & Katherine Miller, Introduction to WordNet: An On-line Lexical Database (1993)

[2] Daniel Jurafsky & James H. Martin, Speech and Language Processing (Pearson Education International, 2009)

[3] Eneko Agirre & Philip Edmonds, Word Sense Disambiguation: Algorithms and Applications (Springer, 2006)

[4] Robert Navigli, Word Sense Disambiguation: A Survey (2009)

[5] Junpeng Chen & Juan Liu, Combining ConceptNet and WordNet for Word Sense Disambiguation (2011)

[6] Jorge Morato, Miguel Ángel Marzal, Juan Lloréns & José Moreiro, Wordnet Applications (2003)

[7] Julian Szymański & Włodzisław Duch, Annotating Words Using WordNet Semantic Glosses (2012)

[8] Giannis Varelas, Epimenidis Voutsakis, Paraskevi Raftopoulou, Euripides G.M. Petrakis, Evangelos E. Milios, Semantic Similarity Methods in WordNet and Application to Information Retrieval on the Web (2005)

[9] Peter Clark, Christiane Fellbaum, Jerry Hobbs, Using and Extending WordNet to Support Question Answering (2008)

[10] Lei Shi & Rada Mihalcea, Putting Pieces Together: Combining FrameNet, VerbNet and WordNet for Robust Semantic Parsing (2005)

[11] http://conceptnet5.media.mit.edu/

[12] http://framenet.icsi.berkeley.edu/

[13] http://verbs.colorado.edu/~mpalmer/projects/ace.html