Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies

Published as a conference paper at ICLR 2021

Paul Pu Liang♡♠∗, Manzil Zaheer♡, Yuan Wang♡, Amr Ahmed♡
♡Google Research, ♠Carnegie Mellon University
[email protected], {manzilzaheer,yuanwang,amra}@google.com
∗Work done during an internship at Google.

ABSTRACT

Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g., natural groupings and similarities) and embed the objects independently into individual vectors. As a result, existing methods do not scale to large vocabulary sizes. In this paper, we design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix. We call our method ANCHOR & TRANSFORM (ANT) as the embeddings of discrete objects are a sparse linear combination of the anchors, weighted according to the transformation matrix. ANT is scalable, flexible, and end-to-end trainable. We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric prior for embeddings that encourages sparsity and leverages natural groupings among objects. By deriving an approximate inference algorithm based on Small Variance Asymptotics, we obtain a natural extension that automatically learns the optimal number of anchors instead of having to tune it as a hyperparameter. On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes and demonstrates stronger performance with fewer parameters (up to 40× compression) as compared to existing compression baselines. Code for our experiments can be found at https://github.com/pliang279/sparse_discrete.

1 INTRODUCTION

Most machine learning models, including neural networks, operate on vector spaces. Therefore, when working with discrete objects such as text, we must define a method of converting objects into vectors. The standard way to map objects to continuous representations involves: 1) defining the vocabulary V = {v_1, ..., v_|V|} as the set of all objects, and 2) learning a |V| × d embedding matrix that defines a d-dimensional continuous representation for each object. This method has two main shortcomings. Firstly, when |V| is large (e.g., millions of words/users/URLs), this embedding matrix does not scale elegantly and may constitute up to 80% of all trainable parameters (Jozefowicz et al., 2016). Secondly, despite being discrete, these objects usually have underlying structures such as natural groupings and similarities among them. Assigning each object to an individual vector assumes independence and foregoes opportunities for statistical strength sharing. As a result, there has been a large amount of interest in learning sparse interdependent representations for large vocabularies, rather than the full embedding matrix, for cheaper training, storage, and inference.

In this paper, we propose a simple method to learn sparse representations that uses a global set of vectors, which we call the anchors, and expresses the embeddings of discrete objects as a sparse linear combination of these anchors, as shown in Figure 1. One can consider these anchors to represent latent topics or concepts. Therefore, we call the resulting method ANCHOR & TRANSFORM (ANT).

Figure 1: ANCHOR & TRANSFORM (ANT) consists of two components: 1) ANCHOR: Learn embeddings A of a small set of anchor vectors A = {a_1, ..., a_|A|}, |A| << |V|, that are representative of all discrete objects, and 2) TRANSFORM: Learn a sparse transformation T from the anchors to the full embedding matrix E. A and T are trained end-to-end for specific tasks. ANT is scalable, flexible, and allows the user to easily incorporate domain knowledge about object relationships. We further derive a Bayesian nonparametric view of ANT that yields an extension NBANT which automatically tunes |A| to achieve a balance between performance and compression.
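To make the factorization concrete, the sketch below shows how the two components could be realized as an embedding layer: a dense anchor matrix A of size |A| × d and a transformation T of size |V| × |A| whose entries are kept non-negative and pushed toward zero by a sparsity penalty. This is a minimal, illustrative PyTorch-style sketch; the class name, initialization, and the plain ℓ1 penalty are our assumptions, not the authors' released implementation (linked in the abstract).

```python
# Minimal sketch of an Anchor & Transform embedding layer. Illustrative only:
# names, initialization, and the plain l1 penalty are assumptions, not the
# authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorTransformEmbedding(nn.Module):
    """E = T A: each object's embedding is a sparse, non-negative combination of anchors."""

    def __init__(self, vocab_size: int, num_anchors: int, dim: int):
        super().__init__()
        # A: |A| x d dense anchor embeddings (|A| << |V|).
        self.anchors = nn.Parameter(0.01 * torch.randn(num_anchors, dim))
        # T: |V| x |A| transformation, trained to be sparse and non-negative.
        self.transform = nn.Parameter(0.01 * torch.rand(vocab_size, num_anchors))

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        weights = F.relu(self.transform[ids])   # (..., |A|), non-negative weights
        return weights @ self.anchors           # (..., d) = the rows of E for `ids`

    def sparsity_penalty(self) -> torch.Tensor:
        # Added to the task loss to push most entries of T toward zero.
        # Simplification: the actual method ensures T ends up exactly sparse so it
        # can be stored compactly; here we only penalize its magnitude.
        return F.relu(self.transform).sum()
```

After training, only A (|A| × d) and the nonzero entries of T need to be kept, which is where the parameter savings over a full |V| × d embedding matrix come from.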
The approach is reminiscent of low-rank and sparse coding approaches; surprisingly, however, these methods have not been elegantly integrated with deep networks in the literature. Competitive attempts are often complex (e.g., optimized with RL (Joglekar et al., 2019)), involve multiple training stages (Ginart et al., 2019; Liu et al., 2017), or require post-processing (Svenstrup et al., 2017; Guo et al., 2017; Aharon et al., 2006; Awasthi & Vijayaraghavan, 2018). We derive a simple optimization objective which learns these anchors and sparse transformations in an end-to-end manner. ANT is scalable and flexible: it gives the user freedom in defining these anchors and in adding further constraints on the transformations, possibly in a domain- or task-specific manner. We find that our proposed method demonstrates stronger performance with fewer parameters (up to 40× compression) on multiple tasks (text classification, language modeling, and recommendation) as compared to existing baselines.
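As a rough picture of the end-to-end training referred to above, the task loss is simply augmented with the sparsity penalty so that A and T are learned jointly with the downstream model. The optimizer, penalty weight, data loader, and the tiny classification head below are placeholder assumptions for illustration, not choices taken from the paper.

```python
# Rough end-to-end training loop around the sketch above. The optimizer, the
# penalty weight `lam`, the task head, and `loader` are placeholder assumptions.
embedding = AnchorTransformEmbedding(vocab_size=100_000, num_anchors=500, dim=300)
task_head = nn.Linear(300, 2)                       # e.g., binary text classification
optimizer = torch.optim.Adam(
    list(embedding.parameters()) + list(task_head.parameters()), lr=1e-3)
lam = 1e-5                                          # sparsity strength, tuned per task

for ids, labels in loader:                          # batches of token ids and labels
    pooled = embedding(ids).mean(dim=1)             # (batch, d): average-pooled tokens
    loss = F.cross_entropy(task_head(pooled), labels) + lam * embedding.sparsity_penalty()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

NBANT, discussed next, goes one step further and selects the number of anchors automatically rather than fixing num_anchors in advance.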
We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric (BNP) prior for neural embeddings that encourages sparsity and leverages natural groupings among objects. Specifically, we show its equivalence to the Indian Buffet Process (IBP; Griffiths & Ghahramani, 2005) prior for embedding matrices. While such BNP priors have proven to be flexible tools in graphical models for encouraging hierarchies (Teh & Jordan, 2010), sparsity (Knowles & Ghahramani, 2011), and other structural constraints (Roy et al., 2016), the associated inference methods are usually complex, hand-designed for each setup, and non-differentiable. Our proposed method opens the door towards integrating such priors (e.g., the IBP) with neural representation learning. These theoretical connections lead to practical insights: by asymptotically analyzing the likelihood of our model in the small-variance limit using Small Variance Asymptotics (SVA; Roweis (1998)), we obtain a natural extension, NBANT, that automatically learns the optimal number of anchors to achieve a balance between performance and compression, instead of having to tune it as a hyperparameter.

2 RELATED WORK

Prior work in learning sparse embeddings of discrete structures falls into three categories:

Matrix compression techniques such as low-rank approximations (Acharya et al., 2019; Grachev et al., 2019; Markovsky, 2011), quantization (Han et al., 2016), pruning (Anwar et al., 2017; Dong et al., 2017; Wen et al., 2016), or hashing (Chen et al., 2015; Guo et al., 2017; Qi et al., 2017) have been applied to embedding matrices. However, it is not trivial to learn sparse low-rank representations of large matrices, especially in conjunction with neural networks. To the best of our knowledge, we are the first to integrate sparse low-rank representations with their nonparametric extension and to demonstrate their effectiveness on many tasks in balancing the tradeoff between performance and sparsity. We also outperform many baselines based on low-rank compression (Grachev et al., 2019), sparse coding (Chen et al., 2016b), and pruning (Liu et al., 2017).

Reducing representation size: These methods reduce the dimension d for different objects. Chen et al. (2016a) divides the embedding into buckets which are assigned to objects in order of importance, Joglekar et al. (2019) learns d by solving a discrete optimization problem with RL, and Baevski & Auli (2019) reduces dimensions for rarer words. These methods resort to RL or are difficult to tune with many hyperparameters. Each object is also modeled independently without information sharing.

Task-specific methods include learning embeddings of only common words for language modeling (Chen et al., 2016b; Luong et al., 2015), and vocabulary selection for text classification (Chen et al., 2019). Other methods reconstruct pre-trained embeddings using codebook learning (Chen et al., 2018; Shu & Nakayama, 2018) or low-rank tensors (Sedov & Yang, 2018). However, these methods do not work for general tasks. For example, methods that only model a subset
