Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model

Grigori Sidorov 1, Alexander Gelbukh 1, Helena Gomez-Adorno 1, and David Pinto 2

1 Centro de Investigacion en Computacion, Instituto Politecnico Nacional, Mexico D.F., Mexico
2 Facultad de Ciencias de la Computacion, Benemerita Universidad Autonoma de Puebla, Puebla, Mexico

{sidorov,gelbukh}@cic.ipn.mx, [email protected], [email protected]

Abstract. We show how to consider similarity between features for calculation of similarity of objects in the Vector Space Model (VSM) for machine learning algorithms and other classes of methods that involve similarity between objects. Unlike LSA, we assume that similarity between features is known (say, from a synonym dictionary) and does not need to be learned from the data. We call the proposed similarity measure soft similarity. Similarity between features is common, for example, in natural language processing: words, n-grams, or syntactic n-grams can be somewhat different (which makes them different features) but still have much in common: for example, the words "play" and "game" are different but related. When there is no similarity between features, our soft similarity measure is equal to the standard similarity. For this, we generalize the well-known cosine similarity measure in VSM by introducing what we call the "soft cosine measure". We propose various formulas for exact or approximate calculation of the soft cosine measure. For example, in one of them we consider for VSM a new feature space consisting of pairs of the original features weighted by their similarity. Again, for features that bear no similarity to each other, our formulas reduce to the standard cosine measure. Our experiments show that the soft cosine measure provides better performance in our case study: the entrance exams question answering task at CLEF. In these experiments, we use syntactic n-grams as features and Levenshtein distance as the similarity between n-grams, measured either in characters or in elements of n-grams.

Keywords. Soft similarity, soft cosine measure, vector space model, similarity between features, Levenshtein distance, n-grams, syntactic n-grams.

1 Introduction

Computation of similarity of specific objects is a basic task of many methods applied in various problems in natural language processing and many other fields. In natural language processing, text similarity plays a crucial role in many tasks, from plagiarism detection [18] and question answering [3] to sentiment analysis [14-16].

The most common manner to represent objects is the Vector Space Model (VSM) [17]. In this model, the objects are represented as vectors of values of features. The features characterize each object and have numeric values. If by their nature the features have symbolic values, then they are mapped to numeric values in some manner. Each feature corresponds to a dimension in the VSM. Construction of the VSM is in a way subjective, because we decide which features should be used and what scales their values should have. Nevertheless, once it is constructed, the calculation of similarity of the vectors is exact and automatic. VSM allows comparison of any types of objects represented as values of features. VSM is especially actively used for representing objects in machine learning methods.

In the field of natural language processing, the objects usually are various types of texts. The most widely used features are words and n-grams. In particular, recently we have proposed the concept of syntactic n-grams, i.e., n-grams constructed by following paths in syntactic trees [19,21]. These n-grams allow taking into account syntactic information for the VSM representation (and, thus, for use with machine learning algorithms as well). There are various types of n-grams and syntactic n-grams according to the types of elements they are built of: lexical units (words, stems, lemmas), POS tags, SR tags (names of syntactic relations), characters, etc. Depending on the task, there are also mixed syntactic n-grams that are combinations of various types of elements, for example, ones that include names of syntactic relations [20]. The values of the features are usually some variants of the well-known tf-idf measure.

In the Vector Space Model, the traditional cosine measure [17] is commonly used to determine the similarity between two objects represented as vectors. The cosine is calculated as the normalized dot product of the two vectors. The normalization is usually Euclidean, i.e., the value is normalized to vectors of unit Euclidean length. For positive values, the cosine is in the range from 0 to 1. Given two vectors a and b, the cosine similarity measure between them is calculated as follows: the dot product is calculated as

a \cdot b = \sum_{i=1}^{N} a_i b_i,  (1)

the norm is defined as

\|x\| = \sqrt{x \cdot x},  (2)

and then the cosine similarity measure is defined as

\mathrm{cosine}(a, b) = \frac{a \cdot b}{\|a\| \times \|b\|},  (3)

which, given (1) and (2), becomes

\mathrm{cosine}(a, b) = \frac{\sum_{i=1}^{N} a_i b_i}{\sqrt{\sum_{i=1}^{N} a_i^2} \sqrt{\sum_{i=1}^{N} b_i^2}}.  (4)

Applied to a pair of N-dimensional vectors, this formula has both time and memory complexity O(N).
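To make the formulas concrete, here is a minimal Python sketch of equations (1)-(4); the function names are ours, chosen for illustration:

```python
import math

def dot(a, b):
    # Equation (1): dot product of two equal-length vectors.
    return sum(x * y for x, y in zip(a, b))

def norm(x):
    # Equation (2): Euclidean norm.
    return math.sqrt(dot(x, x))

def cosine(a, b):
    # Equations (3)-(4): the normalized dot product.
    return dot(a, b) / (norm(a) * norm(b))

print(cosine([1, 0, 1, 0], [2, 0, 2, 0]))  # ≈ 1.0 (same direction)
print(cosine([1, 0, 1, 0], [0, 1, 0, 1]))  # 0.0 (no shared features)
```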
In a similar way, the same VSM is used by machine learning algorithms. They are applied, for example, for grouping, separation, or classification of objects by learning weights of features or by choosing the most significant features.

The traditional cosine measure and traditional similarity measures consider VSM features as independent or, in some sense, completely different; mathematically speaking, formula (1) considers the vectors in the VSM as expressed in an orthonormal basis. In some applications this is a reasonable approximation, but far too often it is not so.

For example, in the field of natural language processing the similarity between features is quite intuitive. Indeed, words or n-grams can be quite similar, though different enough to be considered different features. For example, the words "play" and "game" are of course different words and thus should be mapped to different dimensions in the VSM; yet it is obvious that they are related semantically. This can also be interpreted in an information-theoretic way (when one speaks of playing, speaking of a game is less surprising) or in a probabilistic way (the conditional probability of the word "game" increases in the context of "play"). That is, these dimensions are not independent. Similarity between words is very important in many applications of natural language processing and information retrieval.

In this paper, we propose considering such similarity of features in the VSM, which allows generalization of the concepts of cosine measure and similarity. We describe our experiments, which show that the measure that takes into account similarity between features yields better results for the question answering task we worked with.

Some methods, such as LSA, can learn the similarity between features from the data. In contrast, we assume that similarity between features is known (say, from a synonym dictionary) and does not need to be learned from the data. Thus our method can be used even when there is not sufficient data to learn the similarity between features from statistics.

The rest of this paper is organized as follows. Section 2 introduces the soft cosine measure and the idea of the soft similarity. Section 3 describes the question answering task for entrance exams at CLEF and the method that we applied in it. Section 4 presents the application of the soft cosine similarity (the experiments) and a discussion of the results. Section 5 concludes the paper.

2 Soft Similarity and Soft Cosine Measure

Consider an example of using words as features in a Vector Space Model. Suppose that we have two texts: (1) play, game; (2) player, gamer. This defines a 4-dimensional VSM with the following features: play, player, game, gamer. We have two vectors a and b: a = [1, 0, 1, 0] and b = [0, 1, 0, 1]. The traditional cosine similarity of these two vectors is 0. But if we take into consideration the similarity of words, it turns out that these vectors are quite similar. There is a special procedure called 'stemming' in natural language processing aimed at taking into account this kind of similarity between words, but it is a specific ad hoc procedure. A more general question is: how can we take into account the similarity between features in the Vector Space Model? The traditional similarity does not consider this question, i.e., all features are considered different.
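The formal definition of the soft cosine measure is given later in the paper; as a preview of the idea on this example, the following Python sketch replaces the dot product with a double sum weighted by a feature-similarity matrix. The 0.8 similarity values between "play"/"player" and "game"/"gamer" are invented for illustration:

```python
import math

def soft_cosine(a, b, sim):
    # sim[i][j] is the similarity between features i and j (sim[i][i] == 1).
    # When sim is the identity matrix, this reduces to the standard cosine.
    n = len(a)
    def soft_dot(x, y):
        return sum(sim[i][j] * x[i] * y[j]
                   for i in range(n) for j in range(n))
    return soft_dot(a, b) / (math.sqrt(soft_dot(a, a)) *
                             math.sqrt(soft_dot(b, b)))

# Features: play, player, game, gamer.
sim = [[1.0, 0.8, 0.0, 0.0],
       [0.8, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.8],
       [0.0, 0.0, 0.8, 1.0]]
a = [1, 0, 1, 0]  # text 1: play, game
b = [0, 1, 0, 1]  # text 2: player, gamer
print(soft_cosine(a, b, sim))  # ≈ 0.8, whereas the standard cosine is 0
```

With related features given nonzero similarity, the two texts that share no feature at all now come out as quite similar, matching the intuition described above.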
The cosine measure is widely applied and is usually taken for granted. We found two papers that suggest its modification. In [7], the authors claim that the cosine similarity is overly biased by features with higher values and does not care much about how many features two vectors share, i.e., how many values are zeroes. They propose a modification of the cosine measure called the distance weighted cosine measure (dw-cosine). The dw-cosine is calculated by averaging the cosine similarity with the Hamming distance. The Hamming distance counts how many features two vectors do not share. In this way, they try to decrease the similarity value of the [...]

[...] taking advantage of the fact that they are usually strings in the case of natural language processing. Recall that the Levenshtein distance is the number of edit operations (insertions, deletions, substitutions) needed to convert one string into another. In our case, the Levenshtein distance is a good measure for string comparison, but other measures can be exploited as well.
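For completeness, here is a standard dynamic-programming implementation of the Levenshtein distance in Python, together with one common way to turn it into a similarity in the range [0, 1]. The paper's own use of the Levenshtein distance is described in its later sections, so the sim normalization below is only an illustrative choice:

```python
def levenshtein(s, t):
    # Classic dynamic programming over a rolling row: the minimum number
    # of insertions, deletions, and substitutions turning s into t.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution
        prev = curr
    return prev[-1]

def sim(s, t):
    # One common normalization to [0, 1]; not necessarily the paper's.
    return 1 - levenshtein(s, t) / max(len(s), len(t))

print(levenshtein("play", "player"))  # 2
print(sim("play", "player"))          # ≈ 0.667
```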
