Unsupervised Sparse Vector Densification for Short Text Similarity


Yangqiu Song and Dan Roth
Department of Computer Science, University of Illinois at Urbana-Champaign
Urbana, IL 61801, USA
{yqsong,danr}@illinois.edu

Abstract

Sparse representations of text, such as bag-of-words models or extended explicit semantic analysis (ESA) representations, are commonly used in many NLP applications. However, for short texts, the similarity between two such sparse vectors is not accurate, due to the small term overlap. While there have been multiple proposals for dense representations of words, measuring similarity between short texts (sentences, snippets, paragraphs) requires combining these token-level similarities. In this paper, we propose to combine ESA representations and word2vec representations as a way to generate denser representations and, consequently, a better similarity measure between short texts. We study three densification mechanisms that align sparse representations via many-to-many, many-to-one, and one-to-one mappings. We then show the effectiveness of these mechanisms on measuring similarity between short texts.

1 Introduction

The bag-of-words model has been used in many applications as the state-of-the-art method for tasks such as document classification and information retrieval. It represents each text as a bag of words and computes the similarity, e.g., the cosine value, between two sparse vectors in a high-dimensional space. When the contextual information is insufficient, e.g., due to the short length of the document, explicit semantic analysis (ESA) has been used as a way to enrich the text representation (Gabrilovich and Markovitch, 2006; Gabrilovich and Markovitch, 2007). Instead of using only the words in a document, ESA uses a bag of concepts retrieved from Wikipedia to represent the text. The similarity between two texts can then be computed in this enriched concept space.

Both the bag-of-words and bag-of-concepts models suffer from the sparsity problem. Because both models use sparse vectors to represent text, when comparing two pieces of text the similarity can be zero even when the text snippets are highly related but use different vocabulary. We expect the two texts to be related, but the similarity value does not reflect that. ESA, despite augmenting the lexical space with relevant Wikipedia concepts, still suffers from the sparsity problem. We illustrate this problem with the following simple experiment, done by choosing documents from the "rec.autos" group in the 20-newsgroups data set.¹ For both the documents and the label description "cars" (here we follow the description used in (Chang et al., 2008; Song and Roth, 2014)), we computed 500 concepts using ESA. Then we identified the concepts that appear both in the document ESA representation and in the label ESA representation. The average sizes of this intersection (the number of overlapping concepts in the document and label representations) are shown in Table 1. In addition to the original documents, we also split each document into 2, 4, 8, and 16 equal-length parts, computed the ESA representation of each part, and then its intersection with the ESA representation of the label.

¹ http://qwone.com/~jason/20Newsgroups/
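The overlap measurement can be sketched as follows (our illustration, not the authors' code; `esa_top_concepts` is a hypothetical helper that returns the top-k Wikipedia concepts for a piece of text):

```python
def split_document(tokens, n_parts):
    """Split a token list into n_parts roughly equal-length chunks."""
    size = max(1, len(tokens) // n_parts)
    chunks = [tokens[i:i + size] for i in range(0, len(tokens), size)]
    return chunks[:n_parts]

def avg_concept_overlap(documents, label_text, esa_top_concepts, n_parts=1, k=500):
    """Average number of Wikipedia concepts shared by the document parts and the label,
    where each side is represented by its top-k ESA concepts."""
    label_concepts = set(esa_top_concepts(label_text, k))
    overlaps = []
    for doc in documents:
        for part in split_document(doc.split(), n_parts):
            part_concepts = set(esa_top_concepts(" ".join(part), k))
            overlaps.append(len(part_concepts & label_concepts))
    return sum(overlaps) / len(overlaps)
```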
Table 1: Average size of the intersection between the ESA concept representations of documents and labels. Both documents and labels are represented with 500 Wikipedia concepts. Documents are split into parts of different lengths.

# of splits   Avg. # of words per doc.   Avg. # of concepts
 1            209.6                      23.1
 2            104.8                      18.1
 4             52.4                      13.8
 8             26.2                      10.6
16             13.1                       8.4

Table 1 shows that the number of concepts shared by the label and the document representation decreases significantly, even if not as significantly as the drop in document size. For example, there are on average 8 concepts in the intersection of two vectors with 500 non-zero concepts when we split each document into 16 parts.

When there are fewer overlapping terms between two pieces of text, mismatches or biased matches can result in a less accurate comparison. In this paper, we propose unsupervised approaches to improve the representation, along with a corresponding similarity measure between these representations. Our contribution is twofold. First, we incorporate the popular word2vec representations (Mikolov et al., 2013a; Mikolov et al., 2013b) into the ESA representation, and show that incorporating semantic relatedness between Wikipedia titles can indeed help the similarity measure between short texts. Second, we propose and evaluate three mechanisms for comparing the resulting representations. We verify the superiority of the proposed methods on three different NLP tasks.

2 Sparse Vector Densification

In this section, we introduce a way to compute the similarity between two sparse vectors by augmenting the original similarity measure, i.e., cosine similarity. Suppose we have two vectors x = (x_1, ..., x_V)^T and y = (y_1, ..., y_V)^T, where V is the vocabulary size. Traditional cosine similarity computes the dot product between the two vectors and normalizes it by their norms: cos(x, y) = x^T y / (||x|| · ||y||). This requires each dimension of x to be aligned with the same dimension of y. Note that for sparse vectors x and y, most of the elements can be zero. Aligning the indices can then result in zero similarity even though the two pieces of text are related. Thus, we propose to align different indices of x and y in order to increase the similarity value.

We can rewrite the vectors as x = {x_{a_1}, ..., x_{a_{n_x}}} and y = {y_{b_1}, ..., y_{b_{n_y}}}, where a_i and b_j are the indices of the non-zero terms in x and y (1 ≤ a_i, b_j ≤ V), and x_{a_i} and y_{b_j} are the weights associated with those terms in the vocabulary. Suppose there are n_x and n_y non-zero terms in x and y, respectively. Then the cosine similarity can be rewritten as

    \cos(x, y) = \frac{\sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \delta(a_i - b_j)\, x_{a_i} y_{b_j}}{\|x\| \cdot \|y\|},    (1)

where δ(·) is the Dirac function, with δ(0) = 1 and δ(other) = 0. Suppose we can compute the similarity between terms a_i and b_j, denoted φ(a_i, b_j); then the problem is how to aggregate the similarities between all a_i's and b_j's to augment the original cosine similarity.
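For concreteness, Eq. (1) amounts to the usual sparse dot product over shared indices; a minimal sketch (our illustration, with sparse vectors stored as index-to-weight dicts):

```python
import math

def sparse_cosine(x, y):
    """Eq. (1): cosine over sparse vectors given as {term_index: weight} dicts.
    The Dirac delta keeps only pairs with identical indices (a_i == b_j)."""
    dot = sum(w * y[t] for t, w in x.items() if t in y)
    norm_x = math.sqrt(sum(w * w for w in x.values()))
    norm_y = math.sqrt(sum(w * w for w in y.values()))
    if norm_x == 0.0 or norm_y == 0.0:
        return 0.0
    return dot / (norm_x * norm_y)
```

If the two supports are disjoint, the numerator is an empty sum and the similarity is exactly zero, which is precisely the failure mode the densification below addresses.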
2.1 Similarity Augmentation

The most intuitive way to integrate the similarities between terms is to average them:

    S_A(x, y) = \frac{1}{n_x \|x\| \cdot n_y \|y\|} \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} x_{a_i} y_{b_j}\, \phi(a_i, b_j).    (2)

This similarity averages all the pairwise similarities between the terms a_i and b_j. However, we can expect many of the similarities φ(a_i, b_j) to be close to zero; in that case, besides introducing relatedness between nonidentical terms, it will also introduce noise. Therefore, we also consider an alignment mechanism that we implement greedily via maximum matching:

    S_M(x, y) = \frac{1}{\|x\| \cdot \|y\|} \sum_{i=1}^{n_x} x_{a_i} y_{b_j} \max_j \phi(a_i, b_j),    (3)

where j is chosen as \arg\max_{j'} \phi(a_i, b_{j'}) and the similarity φ(a_i, b_j) between terms a_i and b_j is substituted into the final similarity between x and y. Note that this similarity is not symmetric; if a symmetric similarity is needed, it can be computed by averaging the two similarities S_M(x, y) and S_M(y, x).

The above two similarity measures are simple and intuitive. We can think of S_A(x, y) as leveraging a many-to-many term mapping, while S_M(x, y) uses only a one-to-many term mapping. S_A(x, y) can introduce small and noisy similarity values between terms. While S_M(x, y) essentially aligns each term in x with its best match in y, we run the risk that multiple components of x will select the same element in y. To ensure that all the non-zero terms in x and y are matched, we propose to constrain this metric by disallowing many-to-one mappings. We do so by using a similarity metric based on the Hungarian method (Papadimitriou and Steiglitz, 1982), a combinatorial optimization algorithm that solves the bipartite graph matching problem by finding an optimal one-to-one matching.

[Figure 1: Accuracy of dataless classification using ESA and Dense-ESA with different numbers of concepts. Panels: (a) rec.autos vs. sci.electronics (full doc.); (b) rec.autos vs. sci.electronics (1/16 doc.); (c) rec.autos vs. rec.motorcycles (full doc.); (d) rec.autos vs. rec.motorcycles (1/16 doc.).]

For each word, we obtained a 200-dimensional vector. If a term is a phrase, we simply average the word vectors of the phrase to obtain its representation, following the original word2vec approach (Mikolov et al., 2013a; Mikolov et al., 2013b). We use two vectors a and b to denote the vectors of the two terms. To evaluate the similarity between two terms for the average approach of Eq. (2), we use the RBF kernel over the two vectors, exp{−||a − b||² / (0.03 · ||a|| · ||b||)}, as the similarity in all the experiments, since it has the good property of cutting off terms with small similarities. For the max and Hungarian approaches of Eqs. …
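The three variants can be sketched as follows. This is a rough Python sketch under our own assumptions, not the authors' released code: the names `rbf_phi`, `phi_matrix`, `s_avg`, `s_max`, and `s_hungarian` are ours, scipy's `linear_sum_assignment` stands in for the Hungarian method, and the normalization of the one-to-one variant simply mirrors Eq. (3), since the paper's exact formula is not included in this excerpt.

```python
import math
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian method

def rbf_phi(a, b):
    """Term-term similarity used for the average variant:
    exp(-||a - b||^2 / (0.03 * ||a|| * ||b||)) over (possibly averaged) word2vec vectors."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0.0 or nb == 0.0:
        return 0.0
    return math.exp(-float(np.sum((a - b) ** 2)) / (0.03 * na * nb))

def phi_matrix(vecs_x, vecs_y, phi=rbf_phi):
    """Pairwise term-similarity matrix phi(a_i, b_j) over the non-zero terms of x and y."""
    return np.array([[phi(a, b) for b in vecs_y] for a in vecs_x])

def s_avg(wx, wy, phi):
    """S_A, Eq. (2): many-to-many averaging of all pairwise term similarities.
    wx, wy: weights of the non-zero terms; phi: (n_x, n_y) similarity matrix."""
    wx, wy = np.asarray(wx, dtype=float), np.asarray(wy, dtype=float)
    nx, ny = len(wx), len(wy)
    total = float((np.outer(wx, wy) * phi).sum())
    return total / (nx * np.linalg.norm(wx) * ny * np.linalg.norm(wy))

def s_max(wx, wy, phi):
    """S_M, Eq. (3): align each term of x with its best match in y (one-to-many)."""
    wx, wy = np.asarray(wx, dtype=float), np.asarray(wy, dtype=float)
    j_best = phi.argmax(axis=1)  # argmax_j phi(a_i, b_j) for each i
    contrib = wx * wy[j_best] * phi[np.arange(len(wx)), j_best]
    return float(contrib.sum()) / (np.linalg.norm(wx) * np.linalg.norm(wy))

def s_hungarian(wx, wy, phi):
    """One-to-one variant: maximum-weight bipartite matching via the Hungarian method.
    The normalization here mirrors Eq. (3); the paper's exact formula is not shown above."""
    wx, wy = np.asarray(wx, dtype=float), np.asarray(wy, dtype=float)
    rows, cols = linear_sum_assignment(-phi)  # negate to maximize total phi
    contrib = wx[rows] * wy[cols] * phi[rows, cols]
    return float(contrib.sum()) / (np.linalg.norm(wx) * np.linalg.norm(wy))

# Example (hypothetical 200-d term vectors for the non-zero terms of x and y):
# phi = phi_matrix(term_vectors_x, term_vectors_y)
# sim = 0.5 * (s_max(weights_x, weights_y, phi) + s_max(weights_y, weights_x, phi.T))
```

The commented example also shows the symmetrization mentioned above: averaging S_M(x, y) and S_M(y, x) when a symmetric score is required.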
