
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING XX, XXXX-XXXX (2016)

Revisiting Supervised Word Embeddings

DIEU VU¹, KHANG TRUONG, KHANH NGUYEN, LINH NGO VAN AND KHOAT THAN²,³
School of Information and Communication Technology
Hanoi University of Science and Technology
No. 1, Dai Co Viet Road, Hanoi, Vietnam
E-mail: [email protected]; [email protected]
³Corresponding author

Word embeddings are playing a crucial role in a variety of applications. However, most previous works focus on word embeddings which are either non-discriminative or hardly interpretable. In this work, we investigate a novel approach, referred to as SWET, which learns supervised word embeddings using topic models from labeled corpora. SWET inherits the interpretability of topic models and the discriminativeness of supervised inference from labels. More importantly, SWET enables us to directly exploit a large class of existing unsupervised and supervised topic models to learn supervised word embeddings. Extensive experiments show that SWET outperforms unsupervised approaches by a large margin and is highly competitive with supervised baselines.

Keywords: Supervised word embeddings, Topic models.

1. Introduction

Word embeddings refer to vector representations of words that capture their meanings so that those vectors can be used for diverse NLP tasks [1]. Recently, word embeddings have been playing an important role in a variety of applications [2], such as text classification, language modeling, named entity recognition, parsing, and tagging. Word2Vec [3] and GloVe [4] are two of the most well-known methods that can effectively learn word embeddings from large-scale corpora.

While there is an enormous literature on unsupervised learning of word embeddings, there are few approaches to supervised word embeddings (SWE) that have the capacity to encode the supervision from labeled data. Unsupervised word embeddings are often non-discriminative and therefore undesirable for supervised tasks [5, 2, 6]. A large class of embedding methods based on deep neural networks [7, 3] can model the local context of a word well. Remarkably, some recent approaches such as ELMo [8] and BERT [9] take advantage of contextual information to learn word embeddings which are extremely rich in semantic knowledge. Nevertheless, those methods require large computational resources. In fact, they are difficult and complex to deploy on devices of low capacity. Furthermore, those methods often produce continuous vectors that hardly support interpretability [10].

Interpretability of a model is crucial in various practical applications [11, 12]. To obtain interpretable word embeddings, some methods use sparsity constraints [13, 14] and rotation techniques [15, 16]. Another work [17] exploits informative priors to create interpretable and domain-informed dimensions for probabilistic word embeddings. Recently, Word2Sense [18] extends topic models to refine the representation of a polysemous word in a short context. Besides, many studies have proposed to combine the benefits of topic models [19] and deep neural networks [20, 21, 22, 23, 24]. Although those approaches target interpretability, they ignore discriminativeness.

There are few efforts to develop methods that can learn discriminative embeddings. L-SVD [2] was proposed to encode labels into the co-occurrence matrix of terms and then use SVD or GloVe to learn discriminative embeddings.
Besides, another proposal [25] considered that each word has many embeddings, each of which is associated with a class label. Other works [6, 26] tried to fine-tune universal embeddings for specific tasks and achieved promising results. Recently, LEAM [27] jointly learns label and word embeddings in the same latent space. This framework uses text-label compatibility to learn an attentive model for text representation. All of those approaches succeed in capturing the supervision from labels, but lack interpretability.

In this work, we are interested in learning word embeddings which are both discriminative and interpretable. Moreover, the embeddings should be easy and light to train and test. Our contributions are as follows:

• We propose SWET, which can learn supervised word embeddings using topic models from labeled corpora. SWET inherits not only the interpretability of topic models but also the discriminativeness of supervised inference from labels. More importantly, SWET enables us to directly exploit a large class of existing unsupervised [28, 29, 30] and supervised [31, 32, 33, 34, 35, 36, 37, 38, 39, 40] topic models to learn supervised word embeddings. This property is really beneficial in practice.

• We provide a theoretical analysis which shows the rationale behind SWET.

• We conducted extensive experiments to evaluate SWET and compare it with various baselines. We find that SWET outperforms unsupervised approaches by a large margin and is highly competitive with supervised state-of-the-art baselines.

The remainder of this paper is organized as follows: Section 2 presents some background. In Section 3, we present SWET, instantiate its application to some classes of topic models, and present the rationale behind SWET. The experiments and evaluation are presented in Section 4. Section 5 discusses future work and concludes the paper.

2. Background

2.1 Topic Models

Consider a corpus D consisting of M documents and a vocabulary of V terms. A topic model assumes the corpus is composed from K topics β, where each topic β_k = (β_{k1}, ..., β_{kV}) is a probability distribution over the vocabulary of V terms, meaning that ∑_j β_{kj} = 1 and β_{kj} ≥ 0 for any k, j. Each document d = (d_1, ..., d_V), where d_j is the count of term w_j in document d, is a mixture of those K topics. Each vector θ_d = (θ_{d1}, ..., θ_{dK}) represents the topic proportions in document d, such that ∑_k θ_{dk} = 1 and θ_{dk} ≥ 0 for any k. The target of learning a topic model is often to discover the hidden structures (β, θ_1, ..., θ_M) from the given corpus. While β shows the popular topics in the corpus, θ_d tells the importance/proportion of each topic in document d. Probabilistic latent semantic analysis [29] and latent Dirichlet allocation (LDA) [28] are popular topic models. Since K needs to be pre-specified by users, those models are parametric.

Algorithm 1: Two-phase SDR

Phase 1: Learn an unsupervised model to get K topics as an initialization: β_1, ..., β_K.
Phase 2 (finding the discriminative space):
1. For each class c, select a set S_c of topics which are discriminative for c.
2. For each document d in class c, select a set N_d of its nearest neighbors in c.
3. Infer the new representation θ*_d for each document d in class c by using the Frank-Wolfe algorithm [34] to maximize the sum of the log likelihoods of document d and its neighbors in N_d.
4. Compute the new topics as β*_{kj} ∝ ∑_{d∈D} d_j θ*_{dk}.
Finally, W* = span{β*_1, ..., β*_K} is the discriminative space.
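To make step 4 of Algorithm 1 concrete, the following is a minimal NumPy sketch of the topic re-estimation, assuming the document-term counts and the new representations θ*_d from step 3 are already available; the array and function names (`counts`, `theta_star`, `reestimate_topics`) are ours, not taken from the original SDR implementation.

```python
import numpy as np

def reestimate_topics(counts, theta_star, eps=1e-12):
    """Step 4 of Algorithm 1: beta*_{kj} is proportional to sum_d d_j * theta*_{dk}.

    counts:     (M, V) array; counts[d, j] is the count of term j in document d.
    theta_star: (M, K) array; row d is the new representation theta*_d from step 3.
    Returns a (K, V) array whose k-th row is the re-estimated topic beta*_k.
    """
    # Accumulate theta*_{dk} * d_j over all documents d, giving a (K, V) matrix.
    beta_star = (theta_star.T @ counts).astype(float)
    # Normalize each topic so it is a distribution over the vocabulary (sums to 1),
    # matching the constraint on topics stated in Section 2.1.
    beta_star /= beta_star.sum(axis=1, keepdims=True) + eps
    return beta_star
```

The rows of the returned matrix are the topics β*_1, ..., β*_K whose span forms the discriminative space W* above.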
2.2 Supervised Topic Models

Supervised topic models [35, 33, 31] aim to incorporate side information, such as class labels, into topic models. The label information is injected into the topical space and makes that space more discriminative for each class. Those models are effective for supervised tasks such as document classification or regression.

Supervised LDA (sLDA) [35] associates each document with a response variable. The generative process for each document d of length N is described below:

1. Draw the topic proportion θ ∼ Dir(α).
2. For the n-th word in d: draw the topic assignment z_n ∼ Mult(θ), then draw the word w_n ∼ Mult(β_{z_n}).
3. Draw the class label y ∼ softmax((1/N) ∑_{n=1}^{N} z_n, η).

A difference between sLDA and LDA is y, an observed variable representing the label of a document. Inference in sLDA also uses variational methods to approximate the posterior distribution given a pair of document and label. Nevertheless, sLDA needs a large amount of memory for all its parameters and requires expensive computations.

Another effective framework [34], called supervised dimension reduction (SDR), succeeds in incorporating labels into an unsupervised topic model to find a low-dimensional representation for documents. The framework is briefly described in Algorithm 1. The SDR framework learns a new space β* encoding three features: the labels, the document manifold, and the semantic topics initialized in Phase 1. The label data and document manifold are utilized in Phase 2 to learn a low-dimensional topical space which is discriminative.

3. Supervised word embeddings with topic models (SWET)

In this section, we present SWET for learning supervised word embeddings, which are interpretable and discriminative. We will also explain why our approach is reasonable and discuss some of its key properties.

3.1 Method

SWET contains two steps:

• Step 1: Learn a supervised topic model to obtain topics β* of size K × V, where K is the number of topics and V is the vocabulary size.

• Step 2: Form the embedding of word j by taking the j-th column of β* and then normalizing it with a normalization method, such as L1, L2, or softmax.

Next we discuss two different approaches to obtaining topics in Step 1.

3.1.1 Supervised approach

Various supervised topic models [31, 32, 33, 34, 35, 36, 37, 38, 39, 40] can be used in Step 1 of SWET. Note that SWET can be applied in a variety of situations because the side information of documents may be categories, tags, ratings, etc. Word embeddings learned in those cases can be applied to classification problems or recommender systems.

3.1.2 SDR-based approach

Supervised dimension reduction (SDR) [34] is a simple framework that boosts unsupervised topic models to work well in supervised tasks. It exploits the local structure of each class and the document manifold to learn a discriminative topical space.
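To illustrate Step 2 of SWET, here is a minimal NumPy sketch that turns a learned topic matrix β* of size K × V (obtained from any model in Step 1) into word embeddings by taking columns and normalizing them. The L1, L2, and softmax options follow the description in Section 3.1; the function and argument names are ours and purely illustrative.

```python
import numpy as np

def swet_embeddings(beta_star, norm="l2"):
    """Step 2 of SWET: the embedding of word j is the j-th column of beta*,
    normalized by L1, L2, or softmax.

    beta_star: (K, V) topic matrix learned in Step 1.
    Returns a (V, K) array whose row j is the embedding of word j.
    """
    emb = beta_star.T.astype(float)  # (V, K): one row per word
    if norm == "l1":
        emb /= np.abs(emb).sum(axis=1, keepdims=True) + 1e-12
    elif norm == "l2":
        emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
    elif norm == "softmax":
        e = np.exp(emb - emb.max(axis=1, keepdims=True))  # stabilized softmax
        emb = e / e.sum(axis=1, keepdims=True)
    else:
        raise ValueError("norm must be 'l1', 'l2', or 'softmax'")
    return emb
```

For instance, with the softmax option each word is represented as a distribution over the K topics, which is one way the topic-based dimensions remain interpretable.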