Neural Word Representations from Large-Scale Commonsense Knowledge

Jiaqiang Chen, IIIS, Tsinghua University, Beijing, China. Email: [email protected]
Niket Tandon, Max Planck Institute for Informatics, Saarbrücken, Germany. Email: [email protected]
Gerard de Melo, IIIS, Tsinghua University, Beijing, China. Email: [email protected]

(This research was supported by China 973 Program Grants 2011CBA00300, 2011CBA00301, and NSFC Grants 61033001, 61361136003, 61450110088.)

Abstract—There has recently been a surge of research on neural network-inspired algorithms to produce numerical vector representations of words, based on contextual information. In this paper, we present an approach to improve such word embeddings by first mining cognitively salient word relationships from text and then using stochastic gradient descent to jointly optimize the embeddings to reflect this information, in addition to the regular contextual information captured by the word2vec CBOW objective. Our findings show that this new training regime leads to vectors that better reflect commonsense information about words.

I. INTRODUCTION

Words are substantially discrete in nature, and hence, traditionally, the vast majority of natural language processing tools, including statistical ones, have regarded words as distinct atomic symbols. In recent years, however, the idea of embedding words in a vector space using neural network-inspired algorithms has gained enormous popularity. Mapping words to vectors in a way that reflects word similarities provides machine learning algorithms with much-needed generalization ability. If the words car and automobile have similar vectors, a learning algorithm is better equipped to generalize from one word to the other. Word embeddings are typically trained using large amounts of contextual data. While regular sentence contexts play a vital role in meaning acquisition, words and concepts are often also acquired by other means. Humans may pay special attention to certain cognitively salient features of an object, or rely on more explicit definitions (e.g., looking up a meaning online).

In this paper, we propose a model to jointly train word representations not just on regular contexts, as in the word2vec CBOW model [1], but also to reflect more salient information. For the latter, we use information extraction techniques [2] on large-scale text data to mine definitions and synonyms as well as lists and enumerations. Rather than considering all contexts as equal, our approach can be viewed as treating certain specific contexts as more informative than others. Consider the sentence "The Roman Empire was remarkably multicultural, with 'a rather astonishing cohesive capacity' to create a sense of shared identity...". While it contains several useful signals, Roman does not seem to bear an overly close relationship with capacity, astonishing, or sense. In contrast, upon encountering "Greek and Roman mythology", we may conclude that Roman and Greek are likely related. Our training objective thus encourages saliently related words to have similar representations.

II. RELATED WORK

Many of the current methods for obtaining distributed word embeddings are neural network-inspired and aim at rather dense real-valued vector spaces. While early work focused on probabilistic language models [3], Collobert et al. [4] used a convolutional neural network to maximize the difference between scores from text windows in a large training corpus and corresponding randomly generated negative examples. Mikolov et al. [1] proposed simplified network architectures to efficiently train such vectors at a much faster rate and thus also at a much larger scale. Their word2vec implementation (https://code.google.com/p/word2vec/) provides two architectures, the CBOW and the Skip-gram models. CBOW also relies on a window approach, attempting to use the surrounding words to predict the current target word. However, it simplifies the hidden layer to be just the average of the surrounding words' embeddings. The Skip-gram model tries to do the opposite: it uses the current word to predict the surrounding words. In our approach, we build on the CBOW variant, as its optimization runs faster.

There have been other proposals to adapt the word2vec models. Levy et al. [5] use dependency parse relations to create word embeddings that are able to capture contextual relationships between words that are further apart in the sentence. Further analysis revealed that their word embeddings capture more functional but less topical similarity. Faruqui et al. [6] apply post-processing steps to existing word embeddings in order to bring them more in accordance with semantic lexicons. Rather than using rich structured knowledge sources, our work focuses on improving word embeddings using textual data, by relying on information extraction to expose particularly valuable contexts and relationships in a text corpus. In particular, we are not aware of any previous work that mines large-scale commonsense knowledge to train embeddings of lexical units.
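To make the CBOW architecture we build on more concrete, the following is a minimal Python/NumPy sketch of a single CBOW update with negative sampling. It is not the word2vec source code; the names `W_in`, `W_out`, `context_ids`, and `cbow_step` are our own illustrative choices, and the learning-rate handling is simplified.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_step(W_in, W_out, context_ids, target_id, negative_ids, lr=0.025):
    """One CBOW update with negative sampling (illustrative sketch).

    W_in, W_out  : (vocab_size, dim) input/output embedding matrices
    context_ids  : indices of the words surrounding the target in the window
    target_id    : index of the current (center) word to be predicted
    negative_ids : indices of randomly sampled negative words
    """
    # Hidden layer: CBOW simply averages the context word embeddings.
    h = W_in[context_ids].mean(axis=0)

    grad_h = np.zeros_like(h)
    # The true target word gets label 1; randomly drawn words get label 0.
    for wid, label in [(target_id, 1.0)] + [(n, 0.0) for n in negative_ids]:
        f = sigmoid(np.dot(W_out[wid], h))   # predicted probability that wid is the target
        g = lr * (label - f)                 # scaled error term
        grad_h += g * W_out[wid]             # gradient flowing back to the context average
        W_out[wid] += g * h                  # update the output vector of wid
    # Propagate the accumulated gradient back to the context words
    # (common implementations add the full gradient to each context word).
    W_in[context_ids] += grad_h
```

The key point for our purposes is the hidden layer: it is just the average of the context word embeddings, which is what makes CBOW cheap enough to train at very large scale.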
III. SALIENT PROXIMITY MODEL

Our approach is to simultaneously train the word embeddings on generic contexts from the corpus on the one hand and on semantically significant contexts, obtained using extraction techniques, on the other hand. For the regular general contexts, we draw on the word2vec CBOW model [1] to predict a word given its surrounding neighbors in the corpus.

At the same time, our model relies on our ability to extract semantically salient contexts that are more indicative of word meanings. These extractions will be described in detail later in Section IV. Our algorithm assumes that they have been transformed into a set of word pairs likely to be closely related, which are used to modify the word embeddings. Due to this more focused information, we expect the final word embeddings to reflect more semantic information than embeddings trained only on regular contexts. Given an extracted pair of related words, the intuition is that the embeddings for the two words should be pulled together. Given a word w_t, our objective function attempts to maximize the probability of finding its related words w_r:

\frac{1}{T} \sum_{t=1}^{T} \sum_{w_r} \log p(w_r \mid w_t)    (1)

Here, T is the vocabulary size and the probabilities are modeled using a softmax, defined as follows:

p(w_r \mid w_t) = \frac{\exp(V_{w_r}^{\top} V_{w_t})}{\sum_{w_{r'}} \exp(V_{w_{r'}}^{\top} V_{w_t})}    (2)

V_{w_r} and V_{w_t} refer to the word vectors of the two related words w_r and w_t, while w_{r'} with corresponding vectors V_{w_{r'}} ranges over all possible words. We use the inner product to score how well two words match. When they are very similar or related, their embeddings should be close to each other and hence the inner product of their embeddings should be large. Using the softmax function, we can take a maximum likelihood approach to train the embeddings in a tractable manner. However, the gradient is as follows:

\frac{\partial \log p(w_r \mid w_t)}{\partial V_{w_t}} = \frac{\partial (V_{w_r}^{\top} V_{w_t})}{\partial V_{w_t}} - \frac{\partial \log \sum_{w_{r'}} \exp(V_{w_{r'}}^{\top} V_{w_t})}{\partial V_{w_t}}
  = V_{w_r} - \sum_{w_{r'}} \frac{\exp(V_{w_{r'}}^{\top} V_{w_t})}{\sum_{w_{r''}} \exp(V_{w_{r''}}^{\top} V_{w_t})} V_{w_{r'}}
  = V_{w_r} - \sum_{w_{r'}} p(w_{r'} \mid w_t) V_{w_{r'}}
  = V_{w_r} - \mathbb{E}_{p}[V_{w_{r'}}]    (3)

To compute this gradient, the expectation over all word vectors with respect to their probabilities would be needed. The time complexity for this is proportional to the vocabulary size, which is typically very large. Here, we use negative sampling as a speed-up technique [7]. This can be viewed as a simplified version of Noise Contrastive Estimation (NCE) [8], which reduces the problem of determining the softmax to that of binary classification, discriminating between samples from the data distribution and negative samples. In particular, we consider a distribution of random noise and optimize for discriminating the extracted word pairs from randomly generated ones, which serve as negative sample pairs. We attempt to maximize the score for the positive training data and minimize the score of the negative samples. In the training procedure, this amounts to simply generating k random negative samples for each extracted word pair. That is, we replace w_r with random words from the vocabulary. For the negative samples, we assign the label l = 0, whereas for the original word pairs, l = 1. Now, for each word pair we try to minimize its loss L:

L = -l \log f - (1 - l) \log(1 - f)    (5)
f = \sigma(V_{w_t}^{\top} V_{w_r})    (6)

We use stochastic gradient descent to optimize this function. The formulae for the gradient are easy to compute:

\frac{\partial L}{\partial V_{w_r}} = -l \frac{1}{f} f (1 - f) V_{w_t} + (1 - l) \frac{1}{1 - f} f (1 - f) V_{w_t} = -(l - f) V_{w_t}    (7)

\frac{\partial L}{\partial V_{w_t}} = -(l - f) V_{w_r}    (8)

This objective is optimized alongside the original word2vec CBOW objective, i.e., our overall model combines the two objectives. Implementation-wise, we train the model in parallel with the CBOW model, which allows us to inject the extracted knowledge into the word vectors such that it is reflected during the CBOW training rather than just as a post-processing step. Thus we obtain a joint learning process in which the two components are able to mutually influence each other. Both objectives contribute to the embeddings' ability to capture semantic relationships. Training with the extracted contexts enables us to adjust word embeddings based on concrete evidence of semantic relationships, while the use of general corpus contexts enables us to maintain the advantages of the word2vec CBOW model, in particular its ability to benefit from massive volumes of raw corpus data.
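To make the update rules concrete, the sketch below performs one stochastic gradient step for a single extracted word pair and its k negative samples, following Eqs. (5)-(8). It is an illustrative Python/NumPy sketch rather than our actual implementation; for simplicity it uses one shared embedding matrix `V` for both words of a pair, and the function and variable names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def salient_pair_step(V, w_t, w_r, vocab_size, k=5, lr=0.025, rng=np.random):
    """One SGD step for an extracted word pair under Eqs. (5)-(8) (illustrative sketch).

    V          : (vocab_size, dim) embedding matrix shared with the CBOW component
    w_t, w_r   : indices of a word and a word extracted as saliently related to it
    k          : number of random negative samples per extracted pair
    """
    # The extracted pair gets label l = 1; k random words act as negatives with l = 0.
    samples = [(w_r, 1.0)] + [(int(rng.randint(vocab_size)), 0.0) for _ in range(k)]

    grad_t = np.zeros_like(V[w_t])
    for w, label in samples:
        f = sigmoid(np.dot(V[w_t], V[w]))   # Eq. (6): f = sigma(V_{w_t}^T V_{w_r})
        g = lr * (label - f)                # gradient descent on Eqs. (7)-(8) gives steps of +lr*(l - f)
        grad_t += g * V[w]                  # contribution to the update of V_{w_t}, from Eq. (8)
        V[w] += g * V[w_t]                  # update of the related word or negative sample, from Eq. (7)
    V[w_t] += grad_t
```

In the joint training regime described above, such steps would be interleaved with ordinary CBOW updates on the same embedding matrix, so that the extracted pairs and the raw corpus contexts influence the vectors simultaneously.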
IV. INFORMATION EXTRACTION

Our model can flexibly incorporate semantic relationships extracted using various kinds of information extraction methods. Different kinds of sources and extraction methods can bring different sorts of information to the vectors, suitable for different applications. In our experiments, we investigate two main sources: a dictionary corpus from which we extract definitions and synonyms, and a general Web corpus, from which we extract lists. Our model could similarly be used with other extraction methods.
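As a purely hypothetical illustration of how enumerations in raw text can be turned into the word pairs the model consumes, the sketch below uses a single regular-expression pattern for lists of the form "X, Y(,) and Z" and emits every co-enumerated pair as a related-pair candidate. The pattern and names (`ENUM_PATTERN`, `extract_list_pairs`) are illustrative only and do not correspond to the extraction rules used in our experiments.

```python
import itertools
import re

# Hypothetical enumeration pattern: "X, Y(,) and Z" -> pairwise related candidates.
ENUM_PATTERN = re.compile(r"\b(\w+), (\w+),? and (\w+)\b")

def extract_list_pairs(sentence):
    """Return candidate (w_t, w_r) pairs from simple enumerations (sketch)."""
    pairs = []
    for match in ENUM_PATTERN.finditer(sentence.lower()):
        items = match.groups()
        # Every pair of co-enumerated items is treated as saliently related.
        pairs.extend(itertools.combinations(items, 2))
    return pairs

print(extract_list_pairs("They studied Greek, Roman and Norse mythology."))
# [('greek', 'roman'), ('greek', 'norse'), ('roman', 'norse')]
```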
