Sparse Latent Semantic Analysis

Xi Chen∗, Yanjun Qi†, Bing Bai†, Qihang Lin‡, Jaime G. Carbonell§

∗Machine Learning Department, Carnegie Mellon University
†NEC Lab America
‡Tepper School of Business, Carnegie Mellon University
§Language Technology Institute, Carnegie Mellon University

Abstract

Latent semantic analysis (LSA), one of the most popular unsupervised dimension reduction tools, has a wide range of applications in text mining and information retrieval. The key idea of LSA is to learn a projection matrix that maps the high-dimensional vector space representations of documents to a lower-dimensional latent space, i.e. the so-called latent topic space. In this paper, we propose a new model called Sparse LSA, which produces a sparse projection matrix via ℓ1 regularization. Compared to traditional LSA, Sparse LSA selects only a small number of relevant words for each topic and hence provides a compact representation of topic-word relationships. Moreover, Sparse LSA is computationally very efficient and requires much less memory for storing the projection matrix. Furthermore, we propose two important extensions of Sparse LSA: group structured Sparse LSA and non-negative Sparse LSA. We conduct experiments on several benchmark datasets and compare Sparse LSA and its extensions with several widely used methods, e.g. LSA, Sparse Coding and LDA. Empirical results suggest that Sparse LSA achieves performance gains similar to LSA, but is more efficient in projection computation and storage, and also better explains the topic-word relationships.

1 Introduction

Latent Semantic Analysis (LSA) [5], one of the most successful tools for learning concepts or latent topics from text, has been widely used for dimension reduction in information retrieval. More precisely, given a document-term matrix X ∈ R^{N×M}, where N is the number of documents and M is the number of words, and assuming that the number of latent topics (the dimensionality of the latent space) is set to D (D ≤ min{N, M}), LSA applies singular value decomposition (SVD) to construct a low-rank (rank-D) approximation of X: X ≈ USV^T, where the column-orthogonal matrices U ∈ R^{N×D} (U^T U = I) and V ∈ R^{M×D} (V^T V = I) represent document and word embeddings in the latent space, and S is a diagonal matrix with the D largest singular values of X on its diagonal. (Since it is easier to explain our Sparse LSA model in terms of the document-term matrix, for consistency we introduce SVD based on the document-term matrix, which differs from the standard notation that uses the term-document matrix.) Subsequently, the so-called projection matrix defined as A = S^{-1}V^T provides a transformation mapping documents from the word space to the latent topic space, which is less noisy and accounts for word synonymy (i.e. different words describing the same idea). However, in LSA each latent topic is represented by all word features, which sometimes makes it difficult to precisely characterize the topic-word relationships.

In this paper, we introduce a scalable latent topic model that we call "Sparse Latent Semantic Analysis" (Sparse LSA). Different from traditional LSA based on SVD, we formulate a variant of LSA as an optimization problem which minimizes the approximation error under the orthogonality constraint on U. Based on this formulation, we add a sparsity constraint on the projection matrix A via ℓ1 regularization, as in the lasso model [23]. By enforcing sparsity on A, the model has the ability to automatically select the most relevant words for each latent topic. There are several important features of the Sparse LSA model:

1. It is intuitive that only a part of the vocabulary can be relevant to a certain topic. By enforcing sparsity on A such that each row (representing a latent topic) has only a small number of nonzero entries (representing the most relevant words), Sparse LSA provides a compact representation of the topic-word relationship that is easier to interpret.

2. By adjusting the sparsity level of the projection matrix, we can control the granularity ("level of detail") of the topics we are trying to discover, e.g. more generic topics have more nonzero entries in their rows of A than specific topics.

3. Due to the sparsity of A, Sparse LSA provides an efficient strategy both in the time cost of the projection operation and in the storage cost of the projection matrix when the dimensionality of the latent space D is large.

4. Sparse LSA can project a document q into a sparse vector representation q̂, where each entry of q̂ corresponds to a latent topic. In other words, we know the topics that q belongs to directly from the positions of the nonzero entries of q̂. Moreover, the sparse representation of projected documents saves a lot of computational cost for subsequent retrieval tasks, e.g. ranking (computing cosine similarity), text categorization, etc.
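To make the SVD-based construction described at the start of this section concrete, here is a minimal Python/NumPy sketch (our own illustration on toy data, not code from the paper): it computes the rank-D truncated SVD of a document-term matrix, forms the projection matrix A = S^{-1}V^T, and projects a new document into the latent topic space. The function name lsa_projection and the toy matrices are hypothetical.

```python
import numpy as np

def lsa_projection(X, D):
    """Classical LSA: rank-D truncated SVD of the N x M document-term
    matrix X, returning the D x M projection matrix A = S^{-1} V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X ~= U S V^T
    S_inv = np.diag(1.0 / s[:D])                       # keep the D largest singular values
    A = S_inv @ Vt[:D, :]                              # A = S^{-1} V^T, shape (D, M)
    return A

# Toy example: 6 documents, 8 words, 3 latent topics.
rng = np.random.default_rng(0)
X = rng.random((6, 8))
A = lsa_projection(X, D=3)
q = rng.random(8)      # a new document in the word space
q_hat = A @ q          # its representation in the latent topic space
print(q_hat.shape)     # (3,)
```

Note that every row of A produced this way is dense, which is exactly the interpretability and storage issue that Sparse LSA addresses.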
Furthermore, we propose two important extensions based on Sparse LSA:

1. Group Structured Sparse LSA: we add a group-structured sparsity-inducing penalty, as in [24], to select the groups of features most relevant to each latent topic.

2. Non-negative Sparse LSA: we further enforce a non-negativity constraint on the projection matrix A. It provides a pseudo probability distribution of each word given the topic, similar to Latent Dirichlet Allocation (LDA) [3].

We conduct experiments on four benchmark data sets, with two on text categorization, one on breast cancer gene function identification, and the last one on topic-word relationship identification from NIPS proceeding papers. We compare Sparse LSA and its variants with several popular methods, e.g. LSA [5], Sparse Coding [16] and LDA [3]. Empirical results show clear advantages of our methods in terms of computational cost, storage, and the ability to generate sensible topics and to select relevant words (or genes) for the latent topics.

The rest of this paper is organized as follows. In Section 2, we present the basic Sparse LSA model. In Section 3, we extend Sparse LSA to group structured Sparse LSA and non-negative Sparse LSA. Related work is discussed in Section 4 and the empirical evaluation of the models is in Section 5. We conclude the paper in Section 6.

2 Sparse LSA

2.1 Optimization Formulation of LSA

We consider N documents, where each document lies in an M-dimensional feature space X, e.g. tf-idf [1] weights of the vocabulary normalized to unit length. We denote the N documents by a matrix X = [X_1, ..., X_M] ∈ R^{N×M}, where X_j ∈ R^N is the j-th feature vector over all the documents. For the purpose of dimension reduction, we aim to derive a mapping that projects the input feature space into a D-dimensional latent space where D is smaller than M. In the information retrieval context, each latent dimension is also called a hidden "topic".

Motivated by latent factor analysis [9], we assume that we have D uncorrelated latent variables U_1, ..., U_D, where each U_d ∈ R^N has unit length, i.e. ‖U_d‖_2 = 1. Here ‖·‖_2 denotes the vector ℓ2-norm. For notational simplicity, we put the latent variables U_1, ..., U_D into a matrix: U = [U_1, ..., U_D] ∈ R^{N×D}. Since the latent variables are uncorrelated and of unit length, we have U^T U = I, where I is the identity matrix. We also assume that each feature vector X_j can be represented as a linear expansion in the latent variables U_1, ..., U_D:

(2.1)   X_j = Σ_{d=1}^{D} a_{dj} U_d + ε_j,

or simply X = UA + ε, where A = [a_{dj}] ∈ R^{D×M} gives the mapping from the latent space to the input feature space and ε is zero-mean noise. Our goal is to compute the so-called projection matrix A.

We can achieve this by solving the following optimization problem, which minimizes the rank-D approximation error subject to the orthogonality constraint on U:

(2.2)   min_{U,A}  (1/2) ‖X − UA‖_F^2
        subject to: U^T U = I,

where ‖·‖_F denotes the matrix Frobenius norm. The constraint U^T U = I follows from the assumption that the latent variables are uncorrelated and of unit length.
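One simple way to build intuition for Eq. (2.2) is an alternating scheme: for fixed A, the optimal U solves an orthogonal Procrustes problem via an SVD of XA^T; for fixed orthonormal U, the optimal A is U^T X in closed form. The sketch below implements only this plain, unregularized formulation; it is our own illustration and not the paper's Sparse LSA algorithm, which additionally imposes the ℓ1 penalty on A.

```python
import numpy as np

def lsa_alternating(X, D, n_iter=50):
    """Minimize (1/2)||X - U A||_F^2 s.t. U^T U = I by alternating updates.
    Illustrative sketch only; Sparse LSA adds an l1 penalty on A."""
    N, M = X.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((D, M))
    for _ in range(n_iter):
        # U-step: orthogonal Procrustes, U = P Q^T where X A^T = P Sigma Q^T.
        P, _, Qt = np.linalg.svd(X @ A.T, full_matrices=False)
        U = P @ Qt                      # N x D with U^T U = I
        # A-step: with orthonormal U, the least-squares solution is A = U^T X.
        A = U.T @ X
    return U, A

X = np.random.default_rng(1).random((20, 50))
U, A = lsa_alternating(X, D=5)
print(np.linalg.norm(U.T @ U - np.eye(5)))   # ~0: the orthogonality constraint holds
```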
At the optimum of Eq. (2.2), UA gives the best rank-D approximation of the data X. In general, the larger D is, the better the reconstruction performance. However, a larger D requires more computational cost and a larger amount of memory for storing A. This is the issue that we address in the next section.

After obtaining A, given a new document q ∈ R^M, its representation in the lower-dimensional latent space can be computed as:

(2.3)   q̂ = Aq.

2.2 Sparse LSA

As discussed in the introduction, one notable advantage of Sparse LSA is its good interpretability of the topic-word relationship. Sparse LSA automatically selects the most relevant words for each latent topic and hence provides a clear and compact representation of the topic-word relationship. Moreover, for a new document q, if the words in q have no intersection with the relevant words of the d-th topic (the nonzero entries in A_d, the d-th row of A), the d-th element of q̂, A_d q, will be zero. In other words, the sparse latent representation q̂ clearly indicates the topics that q belongs to.

Another benefit of learning a sparse A is the savings in computational cost and storage when D is large. In traditional LSA, topics with larger singular values cover a broader range of concepts than those with smaller singular values. For example, the first few topics with the largest singular values are often too general to have specific meanings.
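The efficiency and interpretability claims above are easy to see in code. The following sketch (hypothetical values of A, chosen only for illustration) stores A as a SciPy sparse matrix and applies Eq. (2.3): the projection touches only the nonzero topic-word entries, and the nonzero positions of q̂ directly name the topics the document belongs to.

```python
import numpy as np
from scipy import sparse

# Hypothetical sparse projection matrix A (D=3 topics, M=6 words):
# each row keeps only the words relevant to that topic.
A = sparse.csr_matrix(np.array([
    [0.9, 0.4, 0.0, 0.0, 0.0, 0.0],   # topic 0: words 0, 1
    [0.0, 0.0, 0.7, 0.6, 0.0, 0.0],   # topic 1: words 2, 3
    [0.0, 0.0, 0.0, 0.0, 0.8, 0.5],   # topic 2: words 4, 5
]))

q = np.array([0.0, 0.0, 0.3, 0.7, 0.0, 0.0])   # document using only words 2 and 3
q_hat = A @ q                                   # Eq. (2.3): q_hat = A q
print(q_hat)                 # [0.   0.63 0.  ] -> nonzero only for topic 1
print(np.nonzero(q_hat)[0])  # the topics q belongs to
```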
