Efficient Sentence Embedding Via Semantic Subspace Analysis


Bin Wang, Fenxiao Chen, Yuncheng Wang, and C.-C. Jay Kuo
Signal and Image Processing Institute, Department of Electrical and Computer Engineering
University of Southern California, Los Angeles, CA, USA
fwang699, fenxiaoc, [email protected], [email protected]

arXiv:2002.09620v2 [cs.CL] 4 Mar 2020

Abstract—A novel sentence embedding method built upon semantic subspace analysis, called semantic subspace sentence embedding (S3E), is proposed in this work. Given that word embeddings capture semantic relationships and that semantically similar words tend to form semantic groups in a high-dimensional embedding space, we develop a sentence representation scheme by analyzing the semantic subspaces of a sentence's constituent words. Specifically, we construct a sentence model from two aspects. First, we represent words that lie in the same semantic group using the intra-group descriptor. Second, we characterize the interaction between multiple semantic groups with the inter-group descriptor. The proposed S3E method is evaluated on both textual similarity tasks and supervised tasks. Experimental results show that it offers comparable or better performance than the state-of-the-art. The complexity of the S3E method is also much lower than that of other parameterized models.

[Fig. 1: Overview of the proposed S3E method. An input sentence (e.g., "I would like to book a flight on May 17th from Los Angeles to Beijing.") is mapped through word probabilities and word weights, semantic group construction (Group 1, Group 2, ..., Group K-1, Group K), the intra-group descriptors, and the inter-group descriptor to produce the S3E embedding.]

I. INTRODUCTION

Word embedding techniques are widely used in natural language processing (NLP) tasks. For example, they improve downstream tasks such as machine translation [1], syntactic parsing [2], and text classification [3]. Yet, many NLP applications operate at the level of a sentence or a longer piece of text. Although sentence embedding has received a lot of attention recently, encoding a sentence into a fixed-length vector that captures different linguistic properties remains a challenge.

Universal sentence embedding aims to compute a sentence representation that can be applied to any task. Such methods can be categorized into two types: i) parameterized models and ii) non-parameterized models. Parameterized models are mainly based on deep neural networks and demand training to update their parameters. Inspired by the famous word2vec model [4], the skip-thought model [5] adopts an encoder-decoder architecture to predict context sentences in an unsupervised manner. The InferSent model [6] is trained on high-quality supervised data, namely the Natural Language Inference data, and shows that a supervised training objective can outperform unsupervised ones. USE [7] combines both supervised and unsupervised objectives, and a transformer architecture is employed. The STN model [8] leverages a multi-tasking framework for sentence embedding to provide better generalizability. With the recent success of deep contextualized word models, SBERT [9] and SBERT-WK [10] were proposed to leverage the power of self-supervised learning from large unlabeled corpora. Different parameterized models attempt to capture semantic and syntactic meanings from different aspects. Even though their performance is better than that of non-parameterized models, parameterized models are more complex and computationally expensive. Since it is challenging to deploy parameterized models on mobile or terminal devices, finding effective and efficient sentence embedding models is necessary.

Non-parameterized sentence embedding methods rely on high-quality word embeddings. The simplest idea is to average the individual word embeddings, which already offers a tough-to-beat baseline. Following this line, several weighted averaging methods have been proposed, including tf-idf, SIF [11], and GEM [12]. Concatenating vector representations from different resources yields another family of methods; examples include SCDV [13] and p-mean [14]. To better capture sequential information, DCT [15] and EigenSent [16] were proposed from a signal processing perspective.

Here, we propose a novel non-parameterized sentence embedding method based on semantic subspace analysis, called semantic subspace sentence embedding (S3E); see Fig. 1. The S3E method is motivated by the following observation: semantically similar words tend to form semantic groups in a high-dimensional embedding space. Thus, we can embed a sentence by analyzing the semantic subspaces of its constituent words. Specifically, we use the intra- and inter-group descriptors to represent words in the same semantic group and to characterize interactions between multiple semantic groups, respectively.

This work has three main contributions.
1) The proposed S3E method contains three steps: 1) semantic group construction, 2) intra-group descriptor computation, and 3) inter-group descriptor computation. The algorithms inside each step are flexible and, as a result, previous work can be easily incorporated.
2) To the best of our knowledge, this is the first work that leverages correlations between semantic groups to provide a sentence descriptor. Previous work using the covariance descriptor [17] yields a very high embedding dimension (e.g., 45K dimensions). In contrast, the S3E method can choose the embedding dimension flexibly.
3) The effectiveness of the proposed S3E method on textual similarity and supervised tasks is shown experimentally. Its performance is as competitive as that of very complicated parameterized models. Our code is available on github.

II. RELATED PREVIOUS WORK

Vector of Locally Aggregated Descriptors (VLAD) is a famous algorithm in the image retrieval field. Like the bag-of-words method, VLAD trains a codebook based on clustering techniques and concatenates the features within each cluster as the final representation. A recent work called VLAWE (vector of locally-aggregated word embeddings) [18] introduced this idea into document representation. However, the VLAWE method suffers from a high-dimensionality problem, which is not favored by machine learning models. In this work, a novel clustering method is proposed by taking word frequency into consideration. At the same time, a covariance matrix is used to tackle the dimensionality explosion problem of the VLAWE method.

Recently, a novel document distance metric called Word Mover's Distance (WMD) [19] was proposed and achieved good performance in classification tasks. Based on the fact that semantically similar words have close vector representations, the distance between two sentences is modeled as the minimal 'travel' cost for moving the embedded words from one sentence to the other. WMD models the distance between sentences in the shared word embedding space, so it is natural to consider computing the sentence representation directly from the word embedding space using semantic distance measures.

A few works have tried to obtain sentence/document representations based on Word Mover's Distance. D2KE (distances to kernels and embeddings) and WME (word mover's embedding) convert the distance measure into positive definite kernels and have better theoretical guarantees. However, both methods are proposed under the assumption that Word Mover's Distance is a good standard for sentence representation. In our work, we borrow the 'travel' concept of embedded words from WMD and use a covariance matrix to model the interaction between semantic concepts in a discrete way.

III. PROPOSED S3E METHOD

As illustrated in Fig. 1, the S3E method contains three steps: 1) constructing semantic groups based on word vectors; 2) using the intra-group descriptor to find the subspace representation; and 3) using correlations between semantic groups to yield the covariance descriptor. They are detailed below.

Semantic Group Construction. Given word w in the vocabulary V, its uni-gram probability and word vector are denoted by p(w) and v_w \in \mathbb{R}^d, respectively. We assign weights to words based on p(w):

    weight(w) = \frac{\epsilon}{\epsilon + p(w)},    (1)

where \epsilon is a small pre-selected parameter, which is added to avoid the explosion of the weight when p(w) is too small. Clearly, 0 < weight(w) < 1. Words are clustered into K groups using the K-means++ algorithm [20], with the weights incorporated in the clustering process. This is needed since some high-frequency words (e.g., 'a', 'and', 'the') are less discriminative by nature and should therefore be assigned lower weights in the semantic group construction process.
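As a concrete illustration of this step, the following minimal Python sketch computes the frequency-based weights of Eq. (1) and clusters pre-trained word vectors into K semantic groups. It is only a sketch under stated assumptions: the inputs word_vectors (a dict mapping each word to a d-dimensional NumPy vector) and unigram_prob (a dict of uni-gram probabilities), the value of epsilon, and the use of scikit-learn's KMeans with k-means++ initialization and sample_weight as a stand-in for the weighted clustering described above are illustrative choices, not the authors' released implementation.

    import numpy as np
    from sklearn.cluster import KMeans

    EPSILON = 1e-3   # small pre-selected parameter in Eq. (1); illustrative value
    K = 10           # number of semantic groups; illustrative value

    def word_weight(p_w, eps=EPSILON):
        # Eq. (1): weight(w) = eps / (eps + p(w)), which lies in (0, 1).
        return eps / (eps + p_w)

    def build_semantic_groups(word_vectors, unigram_prob, k=K, seed=0):
        # Cluster all word vectors into k groups, down-weighting frequent,
        # less discriminative words (e.g., 'a', 'and', 'the') via sample_weight.
        vocab = list(word_vectors.keys())
        X = np.stack([word_vectors[w] for w in vocab])                     # (|V|, d)
        weights = np.array([word_weight(unigram_prob[w]) for w in vocab])  # (|V|,)
        km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=seed)
        km.fit(X, sample_weight=weights)
        return km, vocab, weights

The fitted model's cluster assignments (km.labels_) map every vocabulary word to one of the groups G_1, ..., G_K used by the two descriptors below.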
Intra-group Descriptor. After constructing the semantic groups, we find the centroid of each group by computing the weighted average of the word vectors in that group. That is, for the i-th group G_i, we learn its representation g_i as

    g_i = \frac{1}{|G_i|} \sum_{w \in G_i} weight(w) \, v_w,    (2)

where |G_i| is the number of words in group G_i. For a sentence S = \{w_1, w_2, \ldots, w_m\}, we allocate the words of S to their semantic groups. To obtain the intra-group descriptor, we compute the cumulative residual between the word vectors and their centroid g_i within the same group. The representation of sentence S in the i-th semantic group can then be written as

    v_i = \sum_{w \in S \cap G_i} weight(w) (v_w - g_i).    (3)

If there are K semantic groups in total, we can represent sentence S with the following matrix:

    \Phi(S) = \begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_K^T \end{bmatrix}
            = \begin{bmatrix} v_{11} & \cdots & v_{1d} \\ v_{21} & \cdots & v_{2d} \\ \vdots & \ddots & \vdots \\ v_{K1} & \cdots & v_{Kd} \end{bmatrix} \in \mathbb{R}^{K \times d},    (4)

where d is the dimension of the word embedding.
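Continuing the sketch above (and reusing its hypothetical km, vocab, weights, and word_vectors), the intra-group descriptor of Eqs. (2)-(4) could be computed as follows; the whitespace tokenizer and the skipping of out-of-vocabulary words are simplifying assumptions.

    import numpy as np

    def group_centroids(km, vocab, weights, word_vectors):
        # Eq. (2): g_i is the weighted average of the word vectors in group G_i.
        d = next(iter(word_vectors.values())).shape[0]
        centroids = np.zeros((km.n_clusters, d))
        counts = np.zeros(km.n_clusters)
        for idx, w in enumerate(vocab):
            g = km.labels_[idx]
            centroids[g] += weights[idx] * word_vectors[w]
            counts[g] += 1
        return centroids / np.maximum(counts, 1)[:, None]      # g_i stacked, shape (K, d)

    def intra_group_descriptor(sentence, km, vocab, weights, word_vectors, centroids):
        # Eqs. (3)-(4): accumulate weighted residuals per group and stack them as Phi(S).
        word2idx = {w: i for i, w in enumerate(vocab)}
        phi = np.zeros_like(centroids)                          # Phi(S) in R^{K x d}
        for w in sentence.lower().split():                      # naive tokenizer (assumption)
            if w in word2idx:                                   # skip OOV words (assumption)
                idx = word2idx[w]
                g = km.labels_[idx]
                phi[g] += weights[idx] * (word_vectors[w] - centroids[g])
        return phi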
Inter-group Descriptor. After obtaining the intra-group descriptor, we measure the interactions between semantic groups with covariance coefficients. We can interpret \Phi(S) in (4) as d observations of a K-dimensional random variable and use \mu_\Phi \in \mathbb{R}^{K \times 1} to denote the mean of each row in \Phi. The inter-group covariance matrix can then be computed as

    C = [C_{i,j}]_{K \times K} = \frac{1}{d} (\Phi - \mu_\Phi)(\Phi - \mu_\Phi)^T \in \mathbb{R}^{K \times K},    (5)

where

    C_{i,j} = \sigma_{i,j} = \frac{(v_i - \mu_i)^T (v_j - \mu_j)}{d}    (6)

is the covariance between groups i and j.
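A matching sketch of the inter-group descriptor: Eq. (5) follows directly from Phi(S) by centering each row and taking the scaled outer product. The final flattening step shown here, which keeps only the upper-triangular entries of the symmetric matrix C as the sentence vector, is an illustrative assumption; the text above does not specify how C is vectorized.

    import numpy as np

    def inter_group_descriptor(phi):
        # Eqs. (5)-(6): C[i, j] is the covariance between semantic groups i and j,
        # treating Phi(S) as d observations of a K-dimensional random variable.
        d = phi.shape[1]
        mu = phi.mean(axis=1, keepdims=True)        # mu_Phi, the mean of each row, (K, 1)
        centered = phi - mu                         # Phi - mu_Phi
        return centered @ centered.T / d            # Eq. (5), shape (K, K)

    def s3e_embedding(phi):
        # Flatten the symmetric covariance matrix into a fixed-length vector of
        # dimension K(K+1)/2, independent of sentence length (illustrative choice).
        C = inter_group_descriptor(phi)
        rows, cols = np.triu_indices(C.shape[0])
        return C[rows, cols]

Putting the hypothetical pieces together for one sentence: km, vocab, weights = build_semantic_groups(word_vectors, unigram_prob); centroids = group_centroids(km, vocab, weights, word_vectors); phi = intra_group_descriptor(sentence, km, vocab, weights, word_vectors, centroids); embedding = s3e_embedding(phi).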
