
Capturing Word Order in Averaging Based Sentence Embeddings

Jae Hee Lee, Jose Camacho Collados, Luis Espinosa Anke and Steven Schockaert
Cardiff University, UK

Abstract. One of the most remarkable findings in the literature on sentence embeddings has been that simple word vector averaging can compete with state-of-the-art models in many tasks. While counterintuitive, a convincing explanation has been provided by Arora et al., who showed that the bag-of-words representation of a sentence can be recovered from its word vector average with almost perfect accuracy. Beyond word vector averaging, however, most sentence embedding models are essentially black boxes: while there is abundant empirical evidence about their strengths and weaknesses, it is not clear why and how different embedding strategies are able to capture particular properties of sentences. In this paper, we focus in particular on how sentence embedding models are able to capture word order. For instance, it seems intuitively puzzling that simple LSTM autoencoders are able to learn sentence vectors from which the original sentence can be reconstructed almost perfectly. With the aim of elucidating this phenomenon, we show that to capture word order, it is in fact sufficient to supplement standard word vector averages with averages of bigram and trigram vectors. To this end, we first study the problem of reconstructing bags-of-bigrams, focusing in particular on how suitable bigram vectors should be encoded. We then show that LSTMs are capable, in principle, of learning our proposed sentence embeddings. Empirically, we find that our embeddings outperform those learned by LSTM autoencoders on the task of sentence reconstruction, while needing almost no training data.

1 Introduction

Sentence embeddings are vector representations that capture the meaning of a given sentence. Several authors have empirically found that surprisingly high-quality sentence embeddings can be obtained by simply adding up the word vectors from a given sentence [29], sometimes in combination with particular weighting and post-processing strategies [3]. Since word vectors are typically compared in terms of their cosine similarity, summing up word vectors corresponds to a form of averaging (i.e. the direction of the resulting sum averages the directions of the individual word vectors). For this reason, we refer to this class of methods as averaging based sentence embeddings. Recently, [2] has shed some light on the remarkable effectiveness of averaging based embeddings, using insights from the theory of compressed sensing. Essentially, the paper showed that from a given sum of word vectors it is almost always possible to reconstruct the corresponding bag-of-words representation, as long as the dimensionality of the vectors is sufficiently high relative to the length of the sentence.

Despite the fact that word vector averages are thus far more expressive than they may intuitively appear, they obviously cannot capture information about word order. While this only affects their ability to capture sentence similarity in a minimal way [3], it is nonetheless an important limitation of such representations.

Beyond averaging based strategies, even relatively standard neural network models are able to learn sentence embeddings which capture word order nearly perfectly. For instance, LSTM autoencoders learn an encoder, which maps a sequence of words onto a sentence vector, together with a decoder, which aims to reconstruct the original sequence from the sentence vector. While empirical results clearly show that such architectures can produce sentence vectors that capture word order, from an intuitive point of view it remains puzzling that such vectors can arise from a relatively simple manipulation of word vectors. The aim of this paper is to develop a better understanding of how order-encoding sentence vectors can arise in such a way. In particular, we want to find the simplest extension of word vector averaging which is sufficient for learning sentence vectors that capture word order. To this end, we start from the following intuition: since averages of word vectors allow us to recover the bag-of-words representation of a sentence [3], averages of bigram vectors may allow us to recover the bag-of-bigrams, and similarly for longer n-grams. This paper makes the following three main contributions:

1. We present an in-depth study of how bigrams should be encoded to maximize the probability that bags-of-bigrams can be reconstructed from the corresponding bigram vector averages (Section 3).
2. We empirically show that simply concatenating averaged vector representations of unigrams, bigrams and trigrams gives us sentence embeddings from which the original sentence can be recovered more faithfully than from sentence embeddings learned by LSTM autoencoders (Section 4).
3. We show that LSTM architectures are capable of constructing such concatenations of unigram, bigram and trigram averages. Even though LSTM autoencoders in practice are unlikely to follow this exact strategy, this clarifies why simple manipulations of word vectors are sufficient for encoding word order (Section 5).

The code for reproducing the results in the paper can be downloaded from https://github.com/dschaehi/capturing-word-order.

2 Related Work

Sentence embedding is a widely-studied topic in the field of representation learning. Similarly to the predictive objective of word embedding models such as Skip-gram [18], recent unsupervised sentence embedding models have based their architecture on predicting the following sentence, given a target sentence. A popular example of this kind of model is Skip-Thought [15]. In this model a recurrent neural network is employed as part of a standard sequence-to-sequence architecture for encoding and decoding. While using the next sentence as a supervision signal has proved very powerful, one disadvantage of this model is that it is computationally demanding. For this reason, variations have been proposed which frame the objective as a classification task, instead of prediction, which allows for more efficient implementations. A representative example of this strategy is the quick thoughts model from [16]. As another line of work, large pre-trained language models such as BERT have shown strong performance in many downstream tasks [10], and are also able to extract high-quality sentence embeddings [25]. In spite of these advancements, [3] showed that a simple average of word embeddings can lead to competitive performance, while being considerably more transparent, computationally efficient, and lower-dimensional than most other sentence embeddings. In this paper we focus on the latter kind of approach.

To the best of our knowledge, the idea of capturing word order using averaging based strategies has not previously been considered. However, there is a considerable amount of work on averaging based sentence embeddings, which we discuss in Section 2.1. There is also some related work on capturing word order in sentence vectors, as we discuss in Section 2.2.

2.1 Averaging Based Sentence Embeddings

Several extensions to the basic approach of word vector averaging have already been proposed. For example, FastSent [12] learns word vectors in such a way that the resulting averaging based sentence vectors are predictive of the words that occur in adjacent sentences. In this way, the supervision signal that is exploited by models such as Skip-Thought can be exploited by averaging models as well. Similarly, Siamese CBOW [14] also focuses on learning sentence vectors that are good predictors of adjacent sentence vectors.

[...] us to partially reconstruct the order in which words appear in the sentence, by exploiting regularities in natural language, e.g. the fact that certain words are more likely to appear at the start of a sentence than at the end. The extent to which unigram averages can capture information about word order in this way was analyzed in [1]. They found that while unigram averages can to some extent correctly classify whether the order of a word pair in a sentence is switched (70% accuracy), they lead to almost random predictions (51% accuracy) when the task is to classify sentences whose bigrams are switched [9]. The latter result in particular indicates that unigram averages are not helpful for reconstructing sentences.

The possibility of using n-gram averages in sentence embeddings was already considered by Arora et al. [2], who constructed n-gram vectors using component-wise multiplication of standard word vectors (cf. Section 2.2). However, they focused on the usefulness of such n-gram averages for measuring sentence similarity and did not consider the sentence reconstruction task. As an alternative to using n-grams, the ordinally forgetting strategy [30, 27] is also designed to capture word order based on averaging. Their representations are weighted averages of one-hot encodings, where the word at position i in a sentence is weighted by α^i for some constant α ∈ ]0, 1[. However, while it is possible to reconstruct the initial sentence from the resulting vector, due to the use of one-hot encodings the dimensionality of this vector is prohibitively high. In particular, this earlier work therefore does not provide any real insights about how word order can be captured in neural network models such as LSTMs, which rely on dense vectors.

Finally, the popular transformer model [28] encodes positional information by manipulating word vectors based on their position in a sentence. While this manipulation is sufficient for capturing information about the position of words in deep networks based on transformers, it is unclear whether such a strategy could be adapted to work well with averaging based encodings. The main problem with this strategy, in our setting, is that for a sentence with 25 words, the [...]
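To make the concatenation idea from the introduction concrete, the following sketch builds a sentence embedding by concatenating the averaged unigram, bigram and trigram vectors of a sentence. Composing n-gram vectors by component-wise multiplication follows the construction attributed above to Arora et al. [2] and is only one possible choice; Section 3 of the paper studies which bigram encodings maximize the chance of reconstructing the bag-of-bigrams. The function names, toy vocabulary and random word vectors are illustrative assumptions, not part of the paper.

```python
import numpy as np


def ngram_vector(word_vecs, ngram):
    """Compose a single n-gram vector from its word vectors.

    Component-wise multiplication is used here purely as an illustrative
    choice (the composition considered by Arora et al. [2]); the paper
    studies alternative bigram encodings in Section 3.
    """
    vec = np.ones_like(word_vecs[ngram[0]])
    for word in ngram:
        vec = vec * word_vecs[word]
    return vec


def sentence_embedding(sentence, word_vecs, max_n=3):
    """Concatenate averaged unigram, bigram and trigram vectors.

    Returns a vector of dimension max_n * d, where d is the word-vector
    dimension.
    """
    words = sentence.split()
    dim = next(iter(word_vecs.values())).shape
    parts = []
    for n in range(1, max_n + 1):
        ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        if ngrams:
            avg = np.mean([ngram_vector(word_vecs, g) for g in ngrams], axis=0)
        else:  # sentence shorter than n words
            avg = np.zeros(dim)
        parts.append(avg)
    return np.concatenate(parts)


# Toy usage with random word vectors (hypothetical vocabulary).
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
word_vecs = {w: rng.standard_normal(300) for w in vocab}
emb = sentence_embedding("the cat sat on the mat", word_vecs)
print(emb.shape)  # (900,) = 3 blocks of 300 dimensions
```

With 300-dimensional word vectors the resulting sentence vector has 900 dimensions. The intuition stated in the introduction is that each of the three blocks can then be decoded into its bag of n-grams, provided the dimensionality is sufficiently high relative to the sentence length, which is what allows word order to be recovered without any sequential machinery.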