Mittens: An Extension of GloVe for Learning Domain-Specialized Representations

Nicholas Dingwall (Roam Analytics) and Christopher Potts (Stanford University and Roam Analytics)

Abstract

We present a simple extension of the GloVe representation learning model that begins with general-purpose representations and updates them based on data from a specialized domain. We show that the resulting representations can lead to faster learning and better results on a variety of tasks.

1 Introduction

Many NLP tasks have benefitted from the public availability of general-purpose vector representations of words trained on enormous datasets, such as those released by the GloVe (Pennington et al., 2014) and fastText (Bojanowski et al., 2016) teams. These representations, when used as model inputs, have been shown to lead to faster learning and better results in a wide variety of settings (Erhan et al., 2009, 2010; Cases et al., 2017).

However, many domains require more specialized representations but lack sufficient data to train them from scratch. We address this problem with a simple extension of the GloVe model (Pennington et al., 2014) that synthesizes general-purpose representations with specialized data sets. The guiding idea comes from the retrofitting work of Faruqui et al. (2015), which updates a space of existing representations with new information from a knowledge graph while also staying faithful to the original space (see also Yu and Dredze 2014; Mrkšić et al. 2016; Pilehvar and Collier 2016). We show that the GloVe objective is amenable to a similar retrofitting extension. We call the resulting model 'Mittens', evoking the idea that it is 'GloVe with a warm start' or a 'warmer GloVe'.

Our hypothesis is that Mittens representations synthesize the specialized data and the general-purpose pretrained representations in a way that gives us the best of both. To test this, we conducted a diverse set of experiments. In the first, we learn GloVe and Mittens representations on IMDB movie reviews and test them on separate IMDB reviews using simple classifiers. In the second, we learn our representations from clinical text and apply them to a sequence labeling task using recurrent neural networks, and to edge detection using simple classifiers. These experiments support our hypothesis about Mittens representations and help identify where they are most useful.

2 Mittens

This section defines the Mittens objective. We first vectorize GloVe to help reveal why it can be extended into a retrofitting model.

2.1 Vectorizing GloVe

For a word i from vocabulary V occurring in the context of word j, GloVe learns representations w_i and w̃_j whose inner product approximates the logarithm of the probability of the words' co-occurrence. Bias terms b_i and b̃_j absorb the overall occurrences of i and j. A weighting function f is applied to emphasize word pairs that occur frequently and reduce the impact of noisy, low-frequency pairs. This results in the objective

    J = \sum_{i,j=1}^{|V|} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

where X_ij is the co-occurrence count of i and j. Since log X_ij is only defined for X_ij > 0, the sum excludes zero-count word pairs. As a result, existing implementations of GloVe use an inner loop to compute this cost and its associated derivatives.

However, since f(0) = 0, the second bracket is irrelevant whenever X_ij = 0, and so replacing log X_ij with

    g(X_{ij}) = \begin{cases} k, & \text{for } X_{ij} = 0 \\ \log X_{ij}, & \text{otherwise} \end{cases}

(for any k) does not affect the objective and reveals that the cost function can be readily vectorized as

    J = \sum_{i,j} f(X)_{ij} \, (M \odot M)_{ij}, \qquad M = W^{\top} \widetilde{W} + b \mathbf{1}^{\top} + \mathbf{1} \tilde{b}^{\top} - g(X)

where W and W̃ are matrices whose columns comprise the word and context embedding vectors, and g is applied elementwise. Because f(X_ij) is a factor of all terms of the derivatives, the gradients are identical to those of the original GloVe implementation too.
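To make the vectorized form concrete, the following is a minimal NumPy sketch of the cost written for this summary, not the authors' released code. The function names glove_weighting and vectorized_glove_cost are our own, and the weighting function is the standard GloVe f with the x_max = 100, α = 0.75 defaults mentioned later in the paper.

```python
import numpy as np

def glove_weighting(X, x_max=100, alpha=0.75):
    """Standard GloVe weighting f: zero for zero counts, capped at 1 for large counts."""
    return np.where(X > 0, np.minimum(X / x_max, 1.0) ** alpha, 0.0)

def vectorized_glove_cost(W, W_tilde, b, b_tilde, X, k=0.0):
    """Vectorized GloVe objective J.

    W, W_tilde : d x |V| matrices whose columns are the word / context vectors.
    b, b_tilde : length-|V| bias vectors.
    X          : |V| x |V| co-occurrence count matrix.
    k          : value used in place of log(0); since f(0) = 0, it never
                 contributes to the cost, so any constant works.
    """
    g = np.where(X > 0, np.log(np.where(X > 0, X, 1.0)), k)   # g(X), applied elementwise
    ones = np.ones(X.shape[0])
    M = W.T @ W_tilde + np.outer(b, ones) + np.outer(ones, b_tilde) - g
    return np.sum(glove_weighting(X) * M * M)
```

Because the cost is expressed entirely in dense matrix operations, the same computation carries over to TensorFlow variables and ops, which is what lets the vectorized TensorFlow implementation in Table 1 below run on a GPU.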
To assess the practical value of vectorizing GloVe,[1] we implemented the model in pure Python/Numpy (van der Walt et al., 2011) and in TensorFlow (Abadi et al., 2015), and we compared these implementations to a non-vectorized TensorFlow implementation and to the official GloVe C implementation (Pennington et al., 2014).[2] The results of these tests are in Table 1. Though the C implementation is the fastest (and scales to massive vocabularies), our vectorized TensorFlow implementation is a strong second-place finisher, especially where GPU computations are possible.

Table 1: Speed comparisons. The values are seconds per iteration, averaged over 10 iterations each on 5 simulated corpora that produced count matrices with about 10% non-zero cells. Only the training step for each model is timed. The CPU experiments were done on a machine with a 3.1 GHz Intel Core i7 chip and 16 GB of memory, and the GPU experiments were done on a machine with a 16 GB NVIDIA Tesla V100 GPU and 61 GB of memory. Dashes mark tests that aren't applicable because the implementation doesn't perform GPU computations.

                                  CPU (vocabulary size)     GPU (vocabulary size)
    Implementation                  5K     10K     20K        5K     10K     20K
    Non-vectorized TensorFlow    14.02   63.80  252.65     13.56   55.51  226.41
    Vectorized Numpy              1.48    7.35   50.03         -       -       -
    Vectorized TensorFlow         1.19    5.00   28.69      0.27    0.95    3.68
    Official GloVe (in C)         0.66    1.24    3.50         -       -       -

[1] https://github.com/roamanalytics/mittens
[2] We also considered a non-vectorized Numpy implementation, but it was too slow to be included in our tests (a single iteration with a 5K vocabulary took 2 hrs 38 mins).

2.2 The Mittens Objective Function

This vectorized implementation makes it apparent that we can extend GloVe into a retrofitting model by adding a term to the objective that penalizes the squared euclidean distance from the learned embedding ŵ_i = w_i + w̃_i to an existing one, r_i:

    J_{\text{Mittens}} = J + \mu \sum_{i \in R} \lVert \hat{w}_i - r_i \rVert^2

Here, R contains the subset of words in the new vocabulary for which prior embeddings are available (i.e., R = V ∩ V′, where V′ is the vocabulary used to generate the prior embeddings), and µ is a non-negative real-valued weight. When µ = 0 or R is empty (i.e., there is no original embedding), the objective reduces to GloVe's.

As in retrofitting, this objective encodes two opposing pressures: the GloVe objective (left term), which favors changing representations, and the distance measure (right term), which favors remaining true to the original inputs. We can control this trade-off by decreasing or increasing µ.

In our experiments, we always begin with 50-dimensional 'Wikipedia 2014 + Gigaword 5' GloVe representations[3] – henceforth 'External GloVe' – but the model is compatible with any kind of "warm start".

[3] http://nlp.stanford.edu/data/glove.6B.zip
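Continuing the NumPy sketch above (again illustrative rather than the released implementation, with hypothetical names: R_idx for the indices of words that have prior embeddings, R_vecs for the corresponding vectors r_i), the retrofitting penalty is a small addition on top of the vectorized cost.

```python
def mittens_cost(W, W_tilde, b, b_tilde, X, R_idx, R_vecs, mu=0.1, k=0.0):
    """GloVe cost plus a penalty tying learned embeddings to prior ones.

    R_idx  : indices of vocabulary items that have a prior embedding (the set R).
    R_vecs : len(R_idx) x d array of those prior embeddings r_i.
    mu     : non-negative weight; mu = 0 (or an empty R) recovers plain GloVe.
    """
    glove_term = vectorized_glove_cost(W, W_tilde, b, b_tilde, X, k=k)
    learned = (W + W_tilde).T[R_idx]            # learned embedding w_i + w~_i for each i in R
    penalty = np.sum((learned - R_vecs) ** 2)   # sum of squared euclidean distances
    return glove_term + mu * penalty
```

Since the penalty is differentiable, the combined objective can be optimized with the same gradient-based machinery as the GloVe term.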
2.3 Notes on Mittens Representations

GloVe's objective is that the log probability of words i and j co-occurring be proportional to the dot product of their learned vectors. One might worry that Mittens distorts this, thereby diminishing the effectiveness of GloVe. To assess this, we simulated 500-dimensional square count matrices and original embeddings for 50% of the words. Then we ran Mittens with a range of values of µ. The results for five trials are summarized in Figure 1: for reasonable values of µ, the desired correlation remains high (Figure 1a), even as vectors with initial embeddings stay close to those inputs, as desired (Figure 1b).

[Figure 1: Simulations assessing Mittens' faithfulness to the original GloVe objective and to its input embeddings, for µ ∈ {0.0, 0.001, 0.1, 0.5, 1.0, 5.0, 10.0}. (a) Mean Pearson ρ between the dot products of pairs of learned vectors and their log probabilities. (b) Mean distance between initial and learned embeddings, for words with pretrained versus random initializations; as µ gets larger, the pressure to stay close to the original increases.]

Table 2: IMDB test-set classification results. A difference of 1% corresponds to 250 examples. For all but 'External GloVe', we report means (with bootstrapped confidence intervals) over five runs of creating the embeddings and cross-validating the classifier's hyperparameters, mainly to help verify that the differences do not derive from variation in the representation learning phase.

    Representations    Accuracy    95% CI
    Random             62.00       [61.28, 62.53]
    External GloVe     72.19       -
    IMDB GloVe         76.38       [75.76, 76.72]
    Mittens            77.39       [77.23, 77.50]

[...] the left or right of j, with the counts weighted by 1/d, where d is the distance in words from j. Only words with at least 300 tokens are included in the matrix, yielding a vocabulary of 3,133 words.

For regular GloVe representations derived from the IMDB data – 'IMDB GloVe' – we train 50-dimensional representations and use the default parameters from Pennington et al. (2014): α = 0.75, x_max = 100, and a learning rate of 0.05. We optimize with AdaGrad (Duchi et al., 2011), also as in the original paper, training for 50K epochs.

For Mittens, we begin with External GloVe. The few words in the IMDB vocabulary that are not in this GloVe vocabulary receive random initializations with a standard deviation that matches [...]
