
Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)

Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model

Pengfei Liu, Xipeng Qiu∗ and Xuanjing Huang
Shanghai Key Laboratory of Data Science, Fudan University
School of Computer Science, Fudan University
825 Zhangheng Road, Shanghai, China
{pfliu14, xpqiu, [email protected]
∗Corresponding author

Abstract

Distributed word representations have attracted rising interest in the NLP community. Most existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. To address this problem, some recent work adopts multi-prototype models to learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture to learn the word and topic embeddings efficiently, which extends the Skip-Gram model and can model the interaction between words and topics simultaneously. Experiments on word similarity and text classification tasks show that our model outperforms state-of-the-art methods.

[Figure 1: Skip-Gram, TWE-1 and our model (NTSG). The red, yellow and green circles indicate the embeddings of the word, the topic and the context, respectively.]

1 Introduction

Distributed word representations, also commonly called word embeddings, represent words as dense, low-dimensional, real-valued vectors. Each dimension of the embedding represents a latent feature of the word, hopefully capturing useful syntactic and semantic properties. Distributed representations help address the curse of dimensionality and improve generalization because they can group words that have similar semantic and syntactic roles. Therefore, distributed representations are widely used for many natural language processing (NLP) tasks, such as syntax [Turian et al., 2010; Collobert et al., 2011; Mnih and Hinton, 2007], semantics [Socher et al., 2012] and morphology [Luong et al., 2013].

However, most of these methods use the same embedding vector to represent a word, which is unreasonable and sometimes even hurts the model's expressive ability, because a great many words are polysemous. For example, all occurrences of the word "bank" will have the same embedding, irrespective of whether the context suggests it means "a financial institution" or "a river bank", which results in the word "bank" having an embedding that is approximately the average of its different contextual semantics relating to finance or placement.

To address this problem, several models [Reisinger and Mooney, 2010; Huang et al., 2012; Tian et al., 2014; Neelakantan et al., 2014] were proposed to learn multi-prototype word embeddings according to the different contexts. These models generate multi-prototype vectors by locally clustering the contexts of each individual word. This locality ignores the correlations among words as well as their contexts. To avoid this limitation, Liu et al. [2015] introduced a latent topic model [Blei et al., 2003] to globally cluster words into different topics according to their contexts. They proposed three intuitive models (topical word embeddings, TWE) to enhance the discriminativeness of word embeddings. However, their models do not clearly model the interactions among words, topics and contexts.

We assume that a single-prototype word embedding can be regarded as a mixture of its different prototypes, while a topic embedding is the averaged vector of all the words under that topic. Thus, topic embeddings and single-prototype word embeddings can be regarded as two kinds of clusterings of word senses from different views; they should have certain relations and should be modeled jointly. Given a word together with its topic, a specific sense of the word can be determined, and the context-sensitive word embedding (also called topical word embedding) should be obtained by integrating the word vector and the topic vector.

In this paper, we propose a neural tensor skip-gram model (NTSG) to learn distributed representations of words and topics. It extends the Skip-Gram model by replacing the bilinear layer with a tensor layer to capture more interactions between words and topics under different contexts. Figure 1 illustrates the differences among Skip-Gram, TWE and our model. Experiments show qualitative improvements of our model over single-sense Skip-Gram on word neighbors. We also perform empirical comparisons on two tasks, contextual word similarity and text classification, which demonstrate the effectiveness of our model over other state-of-the-art multi-prototype models.

The main contributions of this work are as follows.

1. Our model is a general architecture for learning multi-prototype word embeddings, and uses a tensor layer to model the interaction of words and topics. We also show that the Skip-Gram and TWE models can be regarded as special cases of our model.

2. To improve the efficiency of the model, we use a low-rank tensor factorization approach that factorizes each tensor slice as the product of two low-rank matrices.
2 Neural Models For Word Embeddings

Although there are many methods to learn vector representations for words from a large collection of unlabeled data, here we focus only on the methods most relevant to our model.

Bengio et al. [2003] represent each word token by a vector in a neural language model and estimate the parameters of the neural network and these vectors jointly. Since this model is quite expensive to train, much research has focused on optimizing it, such as the C&W embeddings [Collobert and Weston, 2008] and the hierarchical log-bilinear (HLBL) embeddings [Mnih and Hinton, 2007]. A recent and particularly interesting line of work, word2vec [Mikolov et al., 2013a], uses extremely computationally efficient log-linear models to produce high-quality word embeddings; it includes two models, CBOW and Skip-Gram [Mikolov et al., 2013b].

Skip-Gram is an effective framework for learning word vectors, which aims to predict the surrounding words of a target word in a sentence [Mikolov et al., 2013b]. In the Skip-Gram model, w ∈ R^d is the vector representation of the word w ∈ V, where V is the vocabulary and d is the dimensionality of the word embeddings.

Given a pair of words (w, c), the probability that the word c is observed in the context of the target word w is given by

\[ \Pr(D = 1 \mid w, c) = \frac{1}{1 + \exp(-w^{\top} c)}, \quad (1) \]

where w and c are the embedding vectors of w and c, respectively.

The probability of not observing word c in the context of w is given by

\[ \Pr(D = 0 \mid w, c) = 1 - \frac{1}{1 + \exp(-w^{\top} c)}. \quad (2) \]

Given a training set D, the word embeddings are learned by maximizing the following objective function:

\[ J(\theta) = \sum_{(w,c) \in D} \Pr(D = 1 \mid w, c) + \sum_{(w,c) \in D'} \Pr(D = 0 \mid w, c), \quad (3) \]

where the set D' consists of randomly sampled negative examples, assumed to be all incorrect.
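To make the objective of Eqs. (1)-(3) concrete, the following minimal NumPy sketch scores observed and negatively sampled word-context pairs and sums them into J(θ). The function names, toy vocabulary and random initialization are illustrative assumptions, not part of the paper or of any particular word2vec implementation.

```python
import numpy as np

def sigmoid(x):
    # Logistic function used in Eqs. (1)-(2).
    return 1.0 / (1.0 + np.exp(-x))

def pair_probability(w_vec, c_vec, observed=True):
    # Eq. (1): Pr(D=1 | w, c); Eq. (2) is its complement.
    p = sigmoid(np.dot(w_vec, c_vec))
    return p if observed else 1.0 - p

def objective(word_emb, ctx_emb, positive_pairs, negative_pairs):
    # Eq. (3): sum Pr(D=1|w,c) over observed pairs D and
    # Pr(D=0|w,c) over the sampled negative pairs D'.
    j = 0.0
    for w, c in positive_pairs:
        j += pair_probability(word_emb[w], ctx_emb[c], observed=True)
    for w, c in negative_pairs:
        j += pair_probability(word_emb[w], ctx_emb[c], observed=False)
    return j

# Toy usage: 5-word vocabulary, 10-dimensional embeddings (hypothetical data).
rng = np.random.default_rng(0)
V, d = 5, 10
word_emb = rng.normal(scale=0.1, size=(V, d))  # target-word vectors w
ctx_emb = rng.normal(scale=0.1, size=(V, d))   # context vectors c
print(objective(word_emb, ctx_emb, [(0, 1), (2, 3)], [(0, 4)]))
```

In practice the objective is maximized with stochastic gradient ascent over the embedding matrices; the sketch only shows how a single evaluation of J(θ) is assembled from Eqs. (1)-(3).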
3 Neural Tensor Skip-Gram Model

To enhance the representation capability of word embeddings, we introduce latent topics and assume that each word has different embeddings under different topics. For example, the word apple indicates a fruit under the topic food, and indicates an IT company under the topic information technology (IT).

Our goal is to be able to state whether a word w and its topic t match well under a context c. For instance, (w, t) = (apple, company) matches well under the context c = iphone, and (w, t) = (apple, fruit) is a good match under the context c = banana.

In this paper, we extend the Skip-Gram model by replacing the bilinear layer with a tensor layer to capture the interactions between words and topics under different contexts. A tensor is a geometric object that describes relations among vectors, scalars, and other tensors. It can be represented as a multi-dimensional array of numerical values. An advantage of the tensor is that it can explicitly model multiple interactions in data. As a result, tensor-based models have been widely used in a variety of tasks [Socher et al., 2013a; 2013b].

To compute a score of how likely it is that word w and its topic t occur with a certain context word c, we use the following energy-based function:

\[ g(w, c, t) = u^{\top} f\left( w^{\top} M_c^{[1:k]} t + V_c (w \oplus t) + b_c \right), \quad (4) \]

where w ∈ R^d and t ∈ R^d are the vector representations of the word w and the topic t; ⊕ is the concatenation operation, so that w ⊕ t = [w; t]; and M_c^{[1:k]} ∈ R^{d×d×k} is a tensor. The bilinear tensor product takes the two vectors w ∈ R^d and t ∈ R^d as input and generates a k-dimensional vector z as output,

\[ z = w^{\top} M_c^{[1:k]} t, \quad (5) \]

where each entry of z is computed by one slice i = 1, ..., k of the tensor:

\[ z_i = w^{\top} M_c^{[i]} t. \quad (6) \]

The other parameters in Eq. (4) are the standard form of a neural network: u ∈ R^k, V_c ∈ R^{k×2d} and b_c ∈ R^k. f is a standard nonlinearity applied element-wise, which is set to f(x) = 1/(1 + exp(-x)), the same as in Skip-Gram.

Skip-Gram
Skip-Gram is a well-known framework for learning word vectors [Mikolov et al., 2013b], as shown in Figure 1(A). It aims to predict the context words of a target word within a sliding window. Given a pair of words (w_i, c), we denote by Pr(c | w_i) the probability that the word c is observed in the context of the target word w_i.
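Returning to the tensor scoring function, the sketch below gives a minimal NumPy rendering of Eqs. (4)-(6), together with a low-rank variant in the spirit of contribution 2, where each tensor slice is factorized as the product of two low-rank matrices. The function names, toy dimensions, and the exact factorized form are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ntsg_score(w, t, M_c, V_c, u_c, b_c):
    """Energy of a (word, topic) pair under context c, following Eq. (4).

    w, t : (d,) word and topic embeddings
    M_c  : (k, d, d) tensor for context c, one d x d slice per output entry
    V_c  : (k, 2d) standard-layer weights
    u_c  : (k,) output weights; b_c : (k,) bias
    """
    # Bilinear tensor product, Eqs. (5)-(6): z_i = w^T M_c^[i] t.
    z = np.einsum('i,kij,j->k', w, M_c, t)
    # Standard-layer term V_c (w ⊕ t), with w ⊕ t = [w; t].
    linear = V_c @ np.concatenate([w, t])
    return u_c @ sigmoid(z + linear + b_c)

def ntsg_score_lowrank(w, t, A_c, B_c, V_c, u_c, b_c):
    # Hypothetical low-rank variant: each slice M_c^[i] is approximated by
    # A_c[i] @ B_c[i], with A_c: (k, d, r) and B_c: (k, r, d).
    z = np.einsum('i,kir,krj,j->k', w, A_c, B_c, t)
    linear = V_c @ np.concatenate([w, t])
    return u_c @ sigmoid(z + linear + b_c)

# Toy usage with d = 4, k = 3, rank r = 2 (hypothetical values).
rng = np.random.default_rng(0)
d, k, r = 4, 3, 2
w, t = rng.normal(size=d), rng.normal(size=d)
M_c = rng.normal(scale=0.1, size=(k, d, d))
A_c = rng.normal(scale=0.1, size=(k, d, r))
B_c = rng.normal(scale=0.1, size=(k, r, d))
V_c = rng.normal(scale=0.1, size=(k, 2 * d))
u_c, b_c = rng.normal(size=k), np.zeros(k)
print(ntsg_score(w, t, M_c, V_c, u_c, b_c))
print(ntsg_score_lowrank(w, t, A_c, B_c, V_c, u_c, b_c))
```

The full tensor costs O(k d^2) parameters per context, whereas a rank-r factorization of each slice costs O(k d r), which is the efficiency motivation stated in contribution 2.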