
Word Embeddings Evaluation and Combination

Sahar Ghannay (1), Benoit Favre (2), Yannick Estève (1) and Nathalie Camelin (1)
(1) LIUM - University of Le Mans, 72000, Le Mans, France
(2) Aix-Marseille Université, CNRS, LIF UMR 7279, 13000, Marseille, France

Abstract
Word embeddings have been successfully used in several natural language processing (NLP) and speech processing tasks. Different approaches have been introduced to compute word embeddings through neural networks. Many studies in the literature have focused on word embedding evaluation, but to our knowledge some gaps remain. This paper presents a rigorous comparison of the performances of different kinds of word embeddings. These performances are evaluated on different NLP and linguistic tasks, while all the word embeddings are estimated on the same training data, with the same vocabulary, the same number of dimensions, and other similar characteristics. The evaluation results reported in this paper are consistent with the literature: they show that the improvements achieved by a word embedding on one task are not consistently observed across all tasks. For that reason, this paper investigates and evaluates approaches to combine word embeddings in order to take advantage of their complementarity, and to look for effective word embeddings that achieve good performance on all tasks. As a conclusion, this paper provides new insights into the intrinsic qualities of well-known word embedding families, which can differ from those reported in previously published work.

Keywords: Word embeddings, benchmarking, speech processing, natural language processing

1. Introduction

Word embeddings are projections of words into a continuous space that are supposed to preserve the semantic and syntactic similarities between them. They have been shown to be a great asset for several Natural Language Processing (NLP) tasks, such as part-of-speech tagging, chunking, named entity recognition, semantic role labeling and syntactic parsing (Bansal et al., 2014a; Turian et al., 2010; Collobert et al., 2011), and also for speech processing: for instance, word embeddings were recently involved in spoken language understanding (Mesnil et al., 2015), in the detection of errors in automatic transcriptions, and in the calibration of confidence measures provided by an automatic speech recognition system (Ghannay et al., 2015).
These word representations were introduced through the construction of neural language models (Bengio et al., 2003; Schwenk, 2013). Different approaches have been proposed to compute them from large corpora. They include neural networks (Collobert et al., 2011; Mikolov et al., 2013a; Pennington et al., 2014), dimensionality reduction on the word co-occurrence matrix (Lebret and Collobert, 2013), and explicit representation in terms of the contexts in which words appear (Levy and Goldberg, 2014). One particular hypothesis behind word embeddings is that they are generic representations that should suit most applications.
Many studies have focused on the evaluation of the intrinsic quality of word embeddings, as well as on their impact when they are used as input to systems. Turian et al. (2010) evaluate different types of word representations and their concatenation on the chunking and named entity recognition tasks. The evaluation can also be performed on word similarity and analogical reasoning tasks, as in (Levy and Goldberg, 2014; Ji et al., 2015; Gao et al., 2014; Levy et al., 2015). Recently, the study proposed by (Levy et al., 2015) focused on the evaluation of neural-network-inspired word embedding models (Skip-gram and GloVe) and traditional count-based distributional models (pointwise mutual information (PMI) and Singular Value Decomposition (SVD) models). This study reveals that hyperparameter optimization and certain system design choices, rather than the embedding algorithms themselves, have a considerable impact on the performance of word embeddings. Moreover, it shows that, by adapting and transferring these hyperparameters to the traditional distributional models, the latter achieve gains similar to those of the neural-network word embeddings.
In this paper, we present a rigorous comparison of the performances of different kinds of word embeddings coming from different available implementations: word2vec (Mikolov et al., 2013a), GloVe (Pennington et al., 2014), CSLM (Schwenk, 2007; Schwenk, 2013) and word2vecf on dependency trees (Levy and Goldberg, 2014). Some of them have never been compared; for instance, word2vec embeddings (Mikolov et al., 2013a) have never been compared to those of the CSLM toolkit, which is able to build deep feedforward neural network language models on large datasets thanks to efficient code optimized for GPUs. Moreover, dependency-based word embeddings (Levy and Goldberg, 2014) have never been compared to CSLM, GloVe or Skip-gram (Mikolov et al., 2013a) embeddings. In order to measure the semantic and syntactic information supposedly captured by word embeddings, we evaluate their performance on different NLP tasks as well as on linguistic tasks.
In some state-of-the-art studies (Mikolov et al., 2013a; Mikolov et al., 2013b; Bansal et al., 2014b), the evaluated word embeddings were estimated on different training data, or with different dimensionalities. In this study, all the word embeddings are estimated on the same training data, using the same vocabulary, the same dimensionality, and the same window size.
In addition to this evaluation of word embeddings, we are interested in their combination through concatenation, Principal Component Analysis (PCA) and an ordinary autoencoder, in order to look for an effective embedding that achieves good performance on all tasks (a minimal sketch of this combination step is given at the end of this section).
The paper is organized along the following lines: Section 2. presents the different types of word embeddings evaluated in this study. Section 3. describes the benchmark tasks. The experimental setup and results are described in Section 4., and the conclusion is given in Section 5.

This work was partially funded by the European Commission through the EUMSSI project, under the contract number 611057, in the framework of the FP7-ICT-2013-10 call, by the French National Research Agency (ANR) through the VERA project, under the contract number ANR-12-BS02-006-01, and by the Région Pays de la Loire.
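To make the combination strategies of the introduction (concatenation, PCA, autoencoder) concrete, the following is a minimal sketch of combining several pre-trained embeddings by concatenation followed by a PCA projection back to a common dimensionality. Python with NumPy and scikit-learn, embeddings stored as dictionaries over a shared vocabulary, and the target dimensionality are all illustrative assumptions rather than the actual setup of this paper; the autoencoder variant is only indicated in a comment.

import numpy as np
from sklearn.decomposition import PCA

def combine_embeddings(embedding_dicts, vocab, target_dim=200):
    """Concatenate several embeddings per word, then reduce them with PCA.

    embedding_dicts: list of dicts mapping every word in vocab to a 1-D vector.
    Returns the concatenated matrix and its PCA projection, both row-aligned with vocab.
    """
    # 1) Concatenation: stack, for each word, the vectors from every embedding type.
    concatenated = np.vstack(
        [np.concatenate([emb[w] for emb in embedding_dicts]) for w in vocab]
    )
    # 2) PCA: project the concatenated vectors down to a common dimensionality.
    #    (An ordinary autoencoder with target_dim hidden units could be used instead.)
    reduced = PCA(n_components=target_dim).fit_transform(concatenated)
    return concatenated, reduced

# Toy usage with two hypothetical 4-dimensional embedding tables.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
e1 = {w: rng.normal(size=4) for w in vocab}
e2 = {w: rng.normal(size=4) for w in vocab}
concat, reduced = combine_embeddings([e1, e2], vocab, target_dim=2)
print(concat.shape, reduced.shape)  # (4, 8) (4, 2)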
2. Word embeddings

Different approaches have been proposed to create word embeddings through neural networks. These approaches differ in the type of architecture and in the data used to train the model. In this study, we distinguish three categories of word embeddings: those estimated on unlabeled data with simple architectures, those estimated on unlabeled data with deep architectures, and those estimated from labeled data. These representations are detailed respectively in the next subsections.

2.1. Fast and simple estimation of word embeddings

This section presents three types of word embeddings coming from two available implementations:

• CBOW: This architecture, proposed by (Mikolov et al., 2013a), is similar to a feedforward Neural Network Language Model (NNLM) in which the non-linear hidden layer is removed and the contextual words are projected onto the same position. It consists in predicting a word given its past and future context, by averaging the contextual word vectors and then running a log-linear classifier on the averaged vector to predict the target word.

• Skip-gram: This second architecture from (Mikolov et al., 2013a) is similar to CBOW and is trained using the negative-sampling procedure. It consists in predicting the contextual words given the current word. Also, the context is not limited to the immediate context, and training instances can be created by skipping a constant number of words in the context, for instance w_{i-3}, w_{i-4}, w_{i+3}, w_{i+4}, hence the name skip-gram.

• GloVe: This approach is introduced by (Pennington et al., 2014), and relies on constructing a global word-word co-occurrence matrix over the training corpus.

2.2. Deep estimation of word embeddings

These word embeddings are estimated with the CSLM toolkit (Schwenk, 2013), which builds deep feedforward neural network language models using a short-list of the most frequent words as outputs of the neural network. This architecture is more complex and more time-consuming to train than the three approaches presented above, but the computation time remains reasonable thanks to GPU implementations.

2.3. Dependency-based word embeddings

(Levy and Goldberg, 2014) proposed an extension of word2vec, called word2vecf and denoted w2vf-deps, which allows replacing linear bag-of-words contexts with arbitrary features. This model is a generalization of the skip-gram model with negative sampling introduced by (Mikolov et al., 2013a), and it needs labeled data for training. As in (Levy and Goldberg, 2014), we derive contexts from dependency trees: a word is used to predict its governor and dependents, jointly with their dependency labels. This effectively allows for variable-size contexts.
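As an illustration of the dependency-based contexts described in Section 2.3., the following sketch extracts (word, context) training pairs from one dependency-parsed sentence, in the spirit of Levy and Goldberg (2014). The input representation (a list of (form, head index, label) triples with 1-indexed heads and 0 for the root) and the "-1" inverse-relation marking are assumptions made for the example, not the exact input format of word2vecf.

def dependency_contexts(sentence):
    """Turn one dependency-parsed sentence into word2vecf-style (word, context) pairs.

    sentence: list of (form, head, label) triples; heads are 1-indexed, 0 marks the root.
    """
    pairs = []
    for form, head, label in sentence:
        if head == 0:
            continue  # the root has no governor
        governor = sentence[head - 1][0]
        # A word predicts its governor, tagged with the dependency relation ...
        pairs.append((form, governor + "/" + label))
        # ... and the governor predicts its dependent, with an inverse-marked relation.
        pairs.append((governor, form + "/" + label + "-1"))
    return pairs

# Example: "scientist discovers stars", with "discovers" as the root.
sent = [("scientist", 2, "nsubj"), ("discovers", 0, "root"), ("stars", 2, "dobj")]
print(dependency_contexts(sent))
# [('scientist', 'discovers/nsubj'), ('discovers', 'scientist/nsubj-1'),
#  ('stars', 'discovers/dobj'), ('discovers', 'stars/dobj-1')]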
3. Benchmark tasks

3.1. NLP tasks

In this subsection, we briefly introduce the NLP tasks on which we evaluate the performance of the different word embeddings: part-of-speech tagging (POS), syntactic chunking (CHK), named entity recognition (NER), and mention detection (MENT).
For each of these tasks, a label has to be predicted for each word in context. Therefore, we model the problem as feeding a neural network with the concatenation of the five word embeddings of a 5-gram as input. This 5-gram is centered on the word for which the prediction has to be made by the neural network. If an embedding does not exist for one of the words, it is replaced with 0. Words outside sentence boundaries are also replaced with 0 (a minimal sketch of this input construction is given after the task list below).
We test word embeddings in the context of the following tasks:

• Part-Of-Speech tagging (POS): categorizing words among 48 morpho-syntactic labels (noun, verb, adjective, etc.). The system is evaluated on the standard Penn Treebank benchmark train/dev/test split (Marcus et al., 1993).

• Chunking (CHK): segmenting sentences into proto-syntactic constituents. There are 22 begin-inside-outside encoded word-level labels. The system is evaluated on the CoNLL 2000 benchmark (Tjong Kim Sang and Buchholz, 2000).

• Named Entity Recognition (NER): recognizing named entities in the text, such as persons, locations and organizations.
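The following sketch illustrates the input construction described above: the embeddings of the 5-gram centered on the target word are concatenated, with zero vectors used for words missing from the embedding lexicon and for positions falling outside the sentence. The lookup table emb, the dimensionality dim and the NumPy representation are illustrative assumptions, not details of the actual systems used in this paper.

import numpy as np

def window_features(sentence, position, emb, dim):
    """Build the tagger input for sentence[position]: the concatenated embeddings of
    the 5-gram centered on that word, with zeros for unknown words and boundaries."""
    vectors = []
    for offset in range(-2, 3):  # two words of left context, the word, two of right context
        i = position + offset
        if 0 <= i < len(sentence) and sentence[i] in emb:
            vectors.append(emb[sentence[i]])
        else:
            vectors.append(np.zeros(dim))  # unknown word or outside sentence boundaries
    return np.concatenate(vectors)  # shape: (5 * dim,)

# Toy usage with a hypothetical 3-dimensional embedding table.
emb = {"the": np.ones(3), "cat": 2 * np.ones(3), "sat": 3 * np.ones(3)}
print(window_features(["the", "cat", "sat"], 1, emb, dim=3))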