Efficient Parallel Learning of Word2Vec


Jeroen B.P. Vuurens, The Hague University of Applied Science and Delft University of Technology, The Netherlands
Carsten Eickhoff, ETH Zurich, Department of Computer Science, Zurich, Switzerland
Arjen P. de Vries, Radboud University Nijmegen, Institute for Computing and Information Sciences, Nijmegen, The Netherlands

arXiv:1606.07822v1 [cs.CL] 24 Jun 2016. Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).

Abstract

Since its introduction, Word2Vec and its variants have been widely used to learn semantics-preserving representations of words or entities in an embedding space, which can be used to produce state-of-the-art results for various Natural Language Processing tasks. Existing implementations aim to learn efficiently by running multiple threads in parallel while operating on a single model in shared memory, ignoring incidental memory update collisions. We show that these collisions can degrade the efficiency of parallel learning, and propose a straightforward caching strategy that improves the efficiency by a factor of 4.

1. Introduction

Traditional NLP approaches dominantly use simple bag-of-words representations of documents and sentences, but recent approaches that use distributed representations of words, constructed as so-called "word embeddings", have been shown to be effective across many different NLP tasks (Weston et al., 2014). Mikolov et al. introduced efficient strategies to learn embeddings from text corpora, which are collectively known as Word2Vec (Mikolov et al., 2013). They have shown that simply increasing the volume of training data improves the semantic and syntactic generalizations that are automatically encoded in embedding space, and therefore efficiency is key to increasing the potential of this technique.

In recent work, Ji et al. argue that the current implementations of Word2Vec do not scale well over the number of cores used, and propose a solution that uses higher-level linear algebra functions to improve the efficiency of Word2Vec with negative sampling (Ji et al., 2016). In this work, we analyze the scalability problem and conclude that it is mainly caused by simultaneous access to the same memory by multiple threads. As a straightforward solution, we propose to cache frequently updated vectors, and show a large gain in efficiency as a result.

2. Related Work

Bengio et al. propose to learn a distributed representation for words in an embedding space based on the words that immediately precede a word in natural language (Bengio et al., 2006). They use a shallow neural network architecture to learn a vector-space representation for words comparable to those obtained with latent semantic indexing. Recently, Mikolov et al. proposed a very efficient way to learn embeddings over large text corpora called Word2Vec (Mikolov et al., 2013), which produces state-of-the-art results on various tasks, such as word similarity, word analogy tasks, named entity recognition, machine translation and question answering (Weston et al., 2014). Word2Vec uses Stochastic Gradient Descent to optimize the vector representations by iteratively learning on words that appear jointly in text.

For efficient learning of word embeddings, Mikolov et al. proposed two strategies that avoid having to update the probability of observing every word in the vocabulary. Negative sampling consists of updating observed word pairs together with a limited number of random word pairs to regularize the weights. Alternatively, a hierarchical softmax is used as a learning objective, for which the output layer is transformed into a binary Huffman tree; instead of predicting an observed word, the model learns the word's position in the Huffman tree by the left and right turns at each inner node along its path from the root.
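As a concrete illustration of this objective (a minimal sketch of our own, not code from the paper), the following Python function computes the probability of a word under a hierarchical softmax as a product of binary decisions along its Huffman path; the names hidden, path_nodes, turns and inner_vectors are hypothetical:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def path_probability(hidden, path_nodes, turns, inner_vectors):
        """Probability of a target word under the hierarchical softmax.

        hidden        -- vector of the context (input) word, shape (d,)
        path_nodes    -- indices of the inner Huffman-tree nodes on the word's path
        turns         -- +1 for a left turn, -1 for a right turn at each inner node
        inner_vectors -- matrix of inner-node vectors, shape (n_inner, d)
        """
        p = 1.0
        for n, t in zip(path_nodes, turns):
            # every inner node contributes one binary decision; training only
            # touches the O(log V) vectors on the path instead of all V outputs
            p *= sigmoid(t * np.dot(inner_vectors[n], hidden))
        return p

Because frequent words sit close to the root, the same few inner-node vectors are involved in almost every update, an observation that the caching strategy in Section 3.2 exploits.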
The original Word2Vec implementation uses a Hogwild strategy of allowing processors lock-free access to shared memory, in which they can update at will (Recht et al., 2011). Recht et al. show that when data access is sparse, memory overwrites are rare and barely introduce error when they occur. However, Ji et al. observe that for training Word2Vec this strategy does not use computational resources efficiently (Ji et al., 2016). They propose to use level-3 BLAS routines on specific CPUs to operate on mini-batches, which are only written back to main memory afterwards, reducing the number of accesses to main memory. They address the Word2Vec variant that uses negative sampling. In our work we address the hierarchical softmax variant, and show that straightforward caching of frequently used data is sufficient to improve efficiency on conventional hardware.

3. Experiment

3.1. Analysis of inefficiencies for learning Word2Vec

For faster training of models, the input is commonly processed in parallel by multiple processors. One strategy is to map the input into separate batches and aggregate the results in a reduce step. Alternatively, multithreading can be used on a single model in shared memory, which is the approach used by the original Word2Vec implementation in C as well as the widely used Gensim package for Python. We analyzed how the learning times scale over the number of cores used with Word2Vec, and noticed that the total run time appears to be optimal when using approximately 8 cores, beyond which efficiency degrades (see Figure 1).

Ji et al. analyze this problem and conclude that multiple threads can cause conflicts when they attempt to update the same memory (Ji et al., 2016). Current Word2Vec implementations simply ignore memory conflicts between threads; however, concurrent attempts to read and update the same memory increase memory access latency. The Hogwild strategy was introduced under the assumption that memory overwrites are rare; for natural language, however, memory collisions may be more likely than for other data, given the skew towards frequently occurring words. Collisions are even more likely when learning against a hierarchical softmax, since the top nodes in the Huffman tree are used by a large part of the vocabulary.

3.2. Caching frequently used vectors

For efficient learning of word embeddings with negative sampling, Ji et al. propose a mini-batch strategy that uses a level-3 BLAS function to multiply a matrix that consists of all words that share the same context word (e.g. with a window size of 5, up to 10 words would train against the same context term) with a matrix that contains the context word and a shared set of negative samples (Ji et al., 2016). Efficiency is improved up to 3.6x over the original Word2Vec implementation, by leveraging the improved ability for matrix-matrix multiplication in high-end CPUs and only updating the model after each mini-batch. Typically, the batches are limited to about 10-20 words for convergence reasons, and only a marginal loss of accuracy is reported.

We suspect that for parallel training of a model in shared memory, the real bottleneck is the latency caused by concurrent access to the same memory. The solution presented by (Ji et al., 2016) uses level-3 BLAS functions to perform matrix-matrix multiplications, which implicitly lowers the number of times shared memory is accessed and updated. We alternatively propose a straightforward caching strategy that provides a comparable efficiency gain for the hierarchical softmax variant and that is less hardware dependent, since it does not need highly optimized level-3 BLAS functions. When using the hierarchical softmax, efficiency is more affected by memory conflicts than with negative sampling: the root node of the tree is in every word's path and therefore causes a conflict every time, the direct children of the root will be in the path of half the vocabulary, etc. When each thread caches the most frequently used upper nodes of the tree and only writes the change back to main memory after a number of trained words, the number of memory conflicts is reduced.

Algorithm 1 describes the caching in pseudocode: for the most frequently used inner nodes in the Huffman tree, a local copy is created and used for the updates, while the non-frequently used inner nodes are updated in shared memory. After processing a number of words (typically in the range of 10 to 100), for every cached weight vector the update is computed by subtracting the original value at the time it was cached, and this update is added to the weight vector in main memory. The vectors are processed using level-1 BLAS functions: scopy to copy a vector, sdot to compute the dot product between two vectors, and saxpy to add a scalar multiplied by the first vector to the second vector.

The proposed caching strategy is implemented in Python/Cython and published as part of the Cythnn open source project (http://cythnn.github.io).

Algorithm 1 Cached Skipgram HS
 1: for each wordid v in text do
 2:   for each wordid w within window of v do
 3:     for each node n, turn t in treepath to v do
 4:       if isFrequent(n) then
 5:         if not cached[n] then
 6:           scopy(weight[n], cachedweight[n])
 7:           scopy(cachedweight[n], original[n])
 8:           cached[n] = True
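The sketch below is a plain Python rendering of this idea, not the Cythnn implementation itself; the names NodeCache, shared_weights, frequent, vector and flush are ours. It keeps a private working copy of the frequent inner-node vectors and, after a batch of words, writes only the accumulated difference back to shared memory, mirroring the scopy and saxpy operations described above:

    import numpy as np

    class NodeCache:
        """Per-thread cache for frequently used inner-node vectors (illustrative)."""

        def __init__(self, shared_weights, frequent):
            self.shared = shared_weights      # model matrix in shared memory
            self.frequent = set(frequent)     # ids of cached nodes, e.g. the top 31
            self.cached = {}                  # node id -> private working copy
            self.original = {}                # node id -> value when it was cached

        def vector(self, n):
            # infrequent nodes are trained directly in shared memory, as in Hogwild
            if n not in self.frequent:
                return self.shared[n]
            if n not in self.cached:
                self.cached[n] = self.shared[n].copy()      # cf. scopy in Algorithm 1
                self.original[n] = self.cached[n].copy()
            return self.cached[n]

        def flush(self):
            # after 10-100 trained words, add only the accumulated change to the
            # shared weights (one saxpy-style update per cached vector)
            for n, vec in self.cached.items():
                self.shared[n] += vec - self.original[n]
            self.cached.clear()
            self.original.clear()

A training thread would call vector(n) for every inner node it updates and flush() periodically; only the flush touches shared memory for the frequent nodes, so the contended top of the tree is written far less often.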
4. Results

4.1. Experiment setup

To compare the efficiency of the proposed caching strategy with the existing implementations, we used the skip-gram …

4.2. Efficiency

In Figure 1, we compare the changes in total run time when adding more cores, between the original Word2Vec C implementation, Gensim, Cythnn without caching (c0), and Cythnn when caching the 31 most frequently used nodes in the Huffman tree (c31). The first observation is that Word2Vec and Cythnn without caching show degrading efficiency beyond the use of 8 cores, while Gensim is apparently more optimized to use up to 16 cores.
