
Sense Based Organization of Descriptive Data

M. Shahriar Hossain, Monika Akbar, and Rafal A. Angryk

M. S. Hossain is a graduate student in the Department of Computer Science, Montana State University, Bozeman, MT 59717, USA (phone: 1-406-209-7103; fax: 1-406-994-4376; e-mail: [email protected]). M. Akbar is also a graduate student in the same department (e-mail: [email protected]). R. A. Angryk is a faculty member with the Department of Computer Science, Montana State University, Bozeman, MT 59717, USA (e-mail: [email protected]).

Abstract— In this paper we propose a new technique that maps descriptive data into a relative distance space based primarily on the senses of the terms stored in our data. We use the WordNet ontology to retrieve multiple senses of words, with the aim of a multidimensional representation of the data. The focus of this work is mainly on slicing the available ontology into multiple dimensions, where each dimension approximates a single general sense reflecting the broad context of the terms/words stored in our document repository. We concentrate on the discovery of appropriate similarity measurements and the construction of data-driven dimensions, which benefits the quality of the generated dimensions and provides a clear view of the whole data repository in a low-dimensional, context-driven space.

I. INTRODUCTION

High dimensionality of data limits the choice of data mining techniques in many applications. Complex analysis and mining on huge amounts of data can take a long time, making data analysis impractical and infeasible [26]. This is the reason why different dimensionality reduction techniques have been developed: so that data mining tasks become more convenient, fast, and understandable to human beings. Typically, a large number of words exist in even a moderately sized set of documents, resulting in a high-dimensional text repository [27]. As a result, many mining applications for text data generate impractical and infeasible results. In this paper, we propose a technique to generate sense-based dimensions reflecting the broad context of words in a document repository. Our approach is a dynamic, document-set-based method: the aim is to construct the dimensions depending on the terms/keywords in the set of documents. We use the WordNet [1] ontology as background knowledge to retrieve the senses of terms/keywords.

The goal of this work is to utilize an ontology as a sense-based representation mechanism in multiple dimensions, so that the representation of linguistic senses becomes apparent in a text repository. Besides, we focus on similarity measurements between words and synsets. A synset is a set of synonyms of a word which provides a broader meaning in the sense domain. There are different methods to find the similarity between two words or synsets. Some similarity measurements are corpus dependent while others are corpus independent. We have chosen WordNet as the background knowledge to retrieve corpus-independent similarity measures because it is a lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory [2].

The paper is organized as follows. Section II contains basic literature on WordNet and describes different proximity measures between words and synsets; it also explains the Relative Distance Plane (RDP) and the Silhouette Coefficient. We propose the basic dimension retrieval algorithm in Section III. As measures of proximity are important for our work, we present some analysis of proximity measures in Section IV. We present experimental results with the dimension retrieval algorithm in Section V, using a small document set containing 5 text documents, and conclude in Section VI. Finally, the appendix contains a simple example clarifying the proposed dimension retrieval algorithm.

II. BACKGROUND AND RELATED WORKS

In this section we describe the WordNet ontology and different measures of word proximity. We focus on proximity measures because similarity measurement is the core of our dimension retrieval technique. Later we show that a change in the proximity measure can significantly change the organization of descriptive data.

WordNet divides its whole lexical reference system into five categories: nouns, verbs, adjectives, adverbs, and function words [1]. Function words are basically non-content words like prepositions, conjunctions, etc., which may mislead language processing tasks as they are non-informative. In our work, we concentrate on nouns, their senses, synsets, and coordinate terms only, to present our approach in a straightforward manner. In WordNet, synsets are usually connected to other synsets via a number of semantic relations. These relations vary based on the type of word. For example, nouns have five kinds of relations, which are stated below:

(1) hypernyms: Y is a hypernym of X if every X is a kind of Y,
(2) hyponyms: Y is a hyponym of X if every Y is a kind of X,
(3) coordinate terms: Y is a coordinate term of X if X and Y share a hypernym,
(4) holonyms: Y is a holonym of X if X is a part of Y,
(5) meronyms: Y is a meronym of X if Y is a part of X.

In WordNet, only adjectives and adverbs are organized as N-dimensional hyperspaces; nouns and verbs are organized in lexical memory as hierarchies. We take advantage of these hierarchies to retrieve senses and organize concepts in a sense-based multi-dimensional space.

Sense retrieval can be discussed in the context of Word Sense Disambiguation (WSD) [3]. In computational linguistics, WSD is the problem of determining which sense of a word is used in a given sentence, given a number of distinct senses to choose from. For example, consider the word bass, two distinct senses of which are (1) a type of fish and (2) tones of low frequency. Now consider two sentences: "The bass part of the song is very moving" and "I went fishing for some sea bass". To a human it is obvious that the first sentence uses bass in sense 2 above, and the second in sense 1. Although this seems obvious to a human, developing algorithms to replicate this human ability is a difficult task.

Some interesting works on WSD and mechanisms to disambiguate senses from context have already been published. Stevenson et al. [4] describe a solution of the WSD problem based on the interaction of knowledge sources. Their work attempts to improve disambiguation by combining several knowledge sources when implementing a sense tagger. The system moves to an alternative knowledge source if the sense of a word is not retrieved from one source with satisfactory confidence. The authors report accuracy exceeding 94% on their evaluation corpus, which shows that the approach is robust. The approach may, however, require a significant number of knowledge sources, which from our perspective is somewhat overwhelming. As we have chosen to use only the WordNet ontology, we wanted a disambiguation process that uses the same knowledge archive rather than several knowledge sources.

Montoyo et al. [5] present a method for the disambiguation of nouns in English texts that uses the notion of Specification Marks and employs the noun taxonomy of the WordNet lexical knowledge base. The method resolves the lexical ambiguity of nouns in any sort of text. It relies only on the semantic relations (hypernymy and hyponymy) and the hierarchic organization of WordNet; it does not require any training process, hand-coding of lexical entries, or hand-tagging of texts. The intuition underlying this approach is that the more similar two words are, the more informative the most specific concept that subsumes them both will be.

[7] describes a fundamental distinction between the nearest neighbor cluster distance measure, Min, and the furthest neighbor measure, Max, where the first favors the merging of large clusters while the latter favors the merging of smaller clusters. However, whenever any kind of clustering is used, the measurement of proximity becomes a concern. The measurement of proximity can be either a geometric distance or a similarity relation defined between terms/concepts. Our proximity is of the "similarity type": the larger the similarity value of two observations xi and xj (letting the data repository be defined as X = {x1, x2, ..., xn}), the closer they are. If the similarity, denoted Sim(xi, xj), is equal to 1, then xi and xj are the same, which in the context of distance-based measurements can be interpreted as Dis(xi, xj) = 0. Distances between synsets are derived from their similarities using the formula distance = (1.0 - similarity). We use dissimilarity and distance with the same meaning.

In this work we use Hierarchical Agglomerative Clustering (HAC) for discovering the number of dimensions (or senses) from a group of synsets, so we need to analyze proximity measures for clusters as well as for synsets. If C1 and C2 are two different clusters, we denote the similarity between the two clusters as Sim(C1, C2) and refer to it as interclass similarity. Assume that at some point of the clustering process we have q clusters Ck, with k = 1 to q. If merging is essential, clusters Ci' and Cj' can be selected such that i' ≠ j' and

    Sim(Ci', Cj') = Max_{i,j} (Sim(Ci, Cj)),

where Ci and Cj range over the q clusters. Two methods which have often been used for calculating the distance between two clusters are the nearest and furthest neighbor rules [7]. The nearest neighbor rule defines the inter-cluster similarity as the similarity between the most similar elements of the two clusters:

    Sim(C1, C2) = MAX_{c1i ∈ C1, c2j ∈ C2} (Sim(c1i, c2j))    (1)

The furthest neighbor rule defines the inter-cluster similarity as the similarity between the least similar elements of the two clusters:

    Sim(C1, C2) = MIN_{c1i ∈ C1, c2j ∈ C2} (Sim(c1i, c2j))    (2)
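As an illustration, the nearest and furthest neighbor rules (1) and (2), together with the distance = (1.0 - similarity) conversion, can be sketched in Python. The similarity table below is hypothetical, invented for demonstration only, and is not derived from WordNet:

```python
# Hypothetical, symmetric similarity table over four terms (values in [0, 1]).
SIM = {
    frozenset(["fish", "bass"]): 0.9,
    frozenset(["fish", "tone"]): 0.2,
    frozenset(["fish", "pitch"]): 0.1,
    frozenset(["bass", "tone"]): 0.6,
    frozenset(["bass", "pitch"]): 0.5,
    frozenset(["tone", "pitch"]): 0.8,
}

def sim(a, b):
    return 1.0 if a == b else SIM[frozenset([a, b])]

def dis(a, b):
    # distance = (1.0 - similarity), so Sim(x, x) = 1 gives Dis(x, x) = 0.
    return 1.0 - sim(a, b)

def sim_nearest(c1, c2):
    # Rule (1): similarity of the most similar element pair across clusters.
    return max(sim(a, b) for a in c1 for b in c2)

def sim_furthest(c1, c2):
    # Rule (2): similarity of the least similar element pair across clusters.
    return min(sim(a, b) for a in c1 for b in c2)

c1, c2 = {"fish", "bass"}, {"tone", "pitch"}
print(sim_nearest(c1, c2))   # 0.6 (bass-tone pair)
print(sim_furthest(c1, c2))  # 0.1 (fish-pitch pair)
```

Note how the two rules disagree on the same pair of clusters (0.6 vs. 0.1), which is exactly why the choice of proximity measure can change the resulting organization of the data.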
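The HAC-based dimension discovery discussed above can likewise be sketched as a minimal agglomerative loop: repeatedly merge the pair of clusters with the largest inter-cluster similarity (the Max merging criterion) until no pair exceeds a threshold, with the surviving clusters playing the role of the discovered dimensions (senses). This is only an illustrative sketch, not the paper's algorithm; the similarity table and the threshold value are hypothetical:

```python
# Hypothetical similarity table over four terms (not WordNet data).
SIM = {
    frozenset(["fish", "bass"]): 0.9,
    frozenset(["fish", "tone"]): 0.2,
    frozenset(["fish", "pitch"]): 0.1,
    frozenset(["bass", "tone"]): 0.6,
    frozenset(["bass", "pitch"]): 0.5,
    frozenset(["tone", "pitch"]): 0.8,
}

def sim(a, b):
    return 1.0 if a == b else SIM[frozenset([a, b])]

def cluster_sim(c1, c2):
    # Nearest neighbor (single-link) rule, as in (1).
    return max(sim(a, b) for a in c1 for b in c2)

def hac(items, threshold):
    clusters = [{x} for x in items]
    while len(clusters) > 1:
        # Select the pair (i, j) with maximum inter-cluster similarity.
        i, j = max(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda p: cluster_sim(clusters[p[0]], clusters[p[1]]),
        )
        if cluster_sim(clusters[i], clusters[j]) < threshold:
            break  # no pair is similar enough; stop merging
        clusters[i] |= clusters.pop(j)  # j > i, so index i stays valid
    return clusters

dims = hac(["fish", "bass", "tone", "pitch"], threshold=0.7)
print(dims)  # [{'fish', 'bass'}, {'tone', 'pitch'}] (two "sense" clusters)
```

With this table, fish-bass (0.9) and tone-pitch (0.8) merge first, and the best remaining inter-cluster similarity (0.6) falls below the threshold, leaving two clusters.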
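Finally, the hypernym/hyponym/coordinate-term relations listed in Section II can be illustrated with a toy taxonomy. The taxonomy below is a hypothetical hand-made fragment, not actual WordNet content:

```python
# Toy noun taxonomy: child -> parent ("X is a kind of parent").
HYPERNYM = {
    "bass": "fish",
    "trout": "fish",
    "fish": "animal",
    "dog": "animal",
}

def hyponyms(y):
    # Y is a hypernym of X if every X is a kind of Y; the hyponyms of Y
    # are therefore all X whose hypernym is Y.
    return {x for x, parent in HYPERNYM.items() if parent == y}

def coordinate_terms(x):
    # Y is a coordinate term of X if X and Y share a hypernym.
    parent = HYPERNYM.get(x)
    return (hyponyms(parent) - {x}) if parent else set()

print(coordinate_terms("bass"))  # {'trout'}: bass and trout share 'fish'
```

In real use, the same relations would be read from the WordNet noun hierarchy rather than a hand-coded dictionary.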