Evaluating Word Embedding Models: Methods and Experimental Results, by Bin Wang, Angela Wang, Fenxiao Chen, Yuncheng Wang and C.-C. Jay Kuo


SIP (2019), vol. 8, e19, page 1 of 14. © The Authors, 2019. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. doi:10.1017/ATSIP.2019.12

OVERVIEW PAPER

Evaluating word embedding models: methods and experimental results

Bin Wang,1∗ Angela Wang,2∗ Fenxiao Chen,1 Yuncheng Wang1 and C.-C. Jay Kuo1

Extensive evaluation of a large number of word embedding models for language processing applications is conducted in this work. First, we introduce popular word embedding models and discuss desired properties of word models and evaluation methods (or evaluators). Then, we categorize evaluators into two types: intrinsic and extrinsic. Intrinsic evaluators test the quality of a representation independent of specific natural language processing tasks, while extrinsic evaluators use word embeddings as input features to a downstream task and measure changes in performance metrics specific to that task. We report experimental results of intrinsic and extrinsic evaluators on six word embedding models. It is shown that different evaluators focus on different aspects of word models, and some are more correlated with natural language processing tasks. Finally, we adopt correlation analysis to study the performance consistency of extrinsic and intrinsic evaluators.

Keywords: Natural language processing, Word embedding, Word embedding evaluation

Received 21 March 2019; Revised 9 June 2019

1 University of Southern California, Los Angeles, CA 90089, USA
2 University of California, Berkeley, Berkeley, CA 94720, USA
Corresponding author: Bin Wang. Email: [email protected]
∗ Equal contribution

I. INTRODUCTION

Word embedding is a real-valued vector representation of words that captures both semantic and syntactic meanings obtained from a large unlabeled corpus. It is a powerful tool widely used in modern natural language processing (NLP) tasks, including semantic analysis [1], information retrieval [2], dependency parsing [3–5], question answering [6, 7], and machine translation [6, 8, 9]. Learning a high-quality representation is extremely important for these tasks, yet the question "what is a good word embedding model" remains an open problem.

Various evaluation methods (or evaluators) have been proposed to test the quality of word embedding models. As introduced in [10], there are two main categories of evaluation methods: intrinsic and extrinsic evaluators. Extrinsic evaluators use word embeddings as input features to a downstream task and measure changes in performance metrics specific to that task. Examples include part-of-speech (POS) tagging [11], named-entity recognition (NER) [12], sentiment analysis [13], and machine translation [14]. Extrinsic evaluators are more computationally expensive, and they may not be directly applicable. Intrinsic evaluators test the quality of a representation independent of specific NLP tasks. They measure syntactic or semantic relationships among words directly. Aggregate scores are given from testing the vectors on selected sets of query terms and semantically related target words. One can further classify intrinsic evaluators into two types: (1) absolute evaluation, where embeddings are evaluated individually and only their final scores are compared, and (2) comparative evaluation, where people are asked about their preferences among different word embeddings [15]. Since comparative intrinsic evaluators demand additional resources for subjective tests, they are not as popular as the absolute ones.

A good word representation should have certain desirable properties. An ideal word evaluator should be able to analyze word embedding models from different perspectives. Yet, existing evaluators emphasize a certain aspect, with or without consciousness. There is no unified evaluator that analyzes word embedding models comprehensively. Researchers have a hard time selecting among word embedding models because models do not always perform at the same level on different intrinsic evaluators. As a result, the gold standard for a good word embedding model differs across language tasks. In this work, we conduct a correlation study between intrinsic evaluators and language tasks so as to provide insights into various evaluators and help people select word embedding models for specific language tasks.

Although the correlation between intrinsic and extrinsic evaluators was studied before [16, 17], this topic has never been treated thoroughly and systematically. For example, producing models by changing only the window size does not happen often in real-world applications, so the conclusion drawn in [16] might be biased. The work in [17] only focused on Chinese characters with limited experiments. We provide the most comprehensive study and try to avoid bias as much as possible in this work.

The rest of the paper is organized as follows. Popular word embedding models are reviewed in Section II. Properties of good embedding models and intrinsic evaluators are discussed in Section III. Representative performance metrics of intrinsic evaluation are presented in Section IV and the corresponding experimental results are offered in Section V. Representative performance metrics of extrinsic evaluation are introduced in Section VI and the corresponding experimental results are provided in Section VII. We conduct a consistency study on intrinsic and extrinsic evaluators using correlation analysis in Section VIII. Finally, concluding remarks and future research directions are discussed in Section IX.

II. WORD EMBEDDING MODELS

As more NLP downstream tasks emerge, the demand for word embeddings is growing significantly. As a result, many word embedding methods have been proposed, some of which share the same underlying concept. We categorize the existing word embedding methods based on their techniques.

A) Neural Network Language Model

The Neural Network Language Model (NNLM) [18] jointly learns a word vector representation and a statistical language model with a feedforward neural network that contains a linear projection layer and a non-linear hidden layer. An N-dimensional one-hot vector that represents the word is used as the input, where N is the size of the vocabulary. The input is first projected onto the projection layer. Afterwards, a softmax operation is used to compute the probability distribution over all words in the vocabulary. As a result of its non-linear hidden layer, the NNLM is computationally complex. To lower the complexity, an NNLM is first trained using continuous word vectors learned from simple models. Then, another N-gram NNLM is trained from the word vectors.

B) Continuous-Bag-of-Words and skip-gram

Two iteration-based methods were proposed in the word2vec paper [19]. The first one is the Continuous-Bag-of-Words (CBOW) model, which predicts the center word from its surrounding context. This model maximizes the probability of a word given its surrounding context words. The CBOW model trains two matrices: (1) an input word matrix denoted by V ∈ R^{N×|W|}, where the i-th column of V is the N-dimensional embedded vector for input word v_i, and (2) an output word matrix denoted by U ∈ R^{|W|×N}, where the j-th row of U is the N-dimensional embedded vector for output word u_j. To embed input context words, we use the one-hot representation for each word initially and apply V^T to get the corresponding word vector embeddings of dimension N. We apply U^T to an input word vector to generate a score vector and use the softmax operation to convert the score vector into a probability vector of size |W|. The goal is to produce a probability vector that matches the vector representation of the output word. The CBOW model is obtained by minimizing the cross-entropy loss between the probability vector and the embedded vector of the output word. This is achieved by minimizing the following objective function:

    J(u_i) = -u_i^T \hat{v} + \log \sum_{j=1}^{|W|} \exp(u_j^T \hat{v}),    (2)

where u_i is the i-th row of matrix U and \hat{v} is the average of the embedded input words.

Initial values for matrices V and U are randomly assigned. The dimension N of the word embedding can vary based on the application scenario; usually, it ranges from 50 to 300 dimensions. After obtaining both matrices V and U, they can either be used individually or averaged to obtain the final word embedding matrix.

The skip-gram model [19] predicts the surrounding context words given a center word. It focuses on maximizing the probabilities of context words given a specific center word, which can be written as

    P(w_{i-c}, w_{i-c+1}, \dots, w_{i-1}, w_{i+1}, \dots, w_{i+c-1}, w_{i+c} \mid w_i).    (3)

The optimization procedure is similar to that for the CBOW model but with a reversed order for context and center words.

The softmax function mentioned above is a method to generate probability distributions from word vectors. It can be written as

    P(w_c \mid w_i) = \frac{\exp(v_{w_c}^T v_{w_i})}{\sum_{w=1}^{|W|} \exp(v_w^T v_{w_i})}.    (4)

This softmax function is not the most efficient one, since we must take a sum over all |W| words to normalize it. More efficient alternatives include negative sampling and hierarchical softmax [20]. Negative sampling is a method that maximizes the log probability of the softmax model by only summing over a smaller subset of words.
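A minimal NumPy sketch of the CBOW objective in equation (2) may help make the notation concrete: it computes J(u_i) for a single (context, center) training pair. The toy vocabulary size, embedding dimension, and variable names are assumptions made for this illustration, not code from the paper.

```python
import numpy as np

# Toy dimensions: |W| = vocabulary size, N = embedding dimension.
W, N = 10, 4
rng = np.random.default_rng(0)

V = rng.normal(size=(N, W))   # input word matrix; i-th column embeds input word i
U = rng.normal(size=(W, N))   # output word matrix; j-th row embeds output word j

def cbow_loss(context_ids, center_id):
    """Cross-entropy objective of eq. (2): J(u_i) = -u_i^T v_hat + log sum_j exp(u_j^T v_hat)."""
    v_hat = V[:, context_ids].mean(axis=1)   # average of the embedded context words
    scores = U @ v_hat                       # u_j^T v_hat for every output word j
    return -scores[center_id] + np.log(np.sum(np.exp(scores)))

# Example: predict center word 3 from the context words {1, 2, 4, 5}.
print(cbow_loss([1, 2, 4, 5], 3))
```

Summing this loss over all (context, center) pairs in a corpus and following its gradient with respect to U and V is what produces the trained embedding matrices.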
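The efficiency contrast between the full softmax of equation (4) and negative sampling can be sketched in the same spirit. The code below uses the standard sigmoid-based negative-sampling objective of [20] with uniformly drawn negatives; the number of negatives and the sampling distribution are illustrative simplifications (word2vec actually samples from a unigram distribution raised to the 3/4 power).

```python
import numpy as np

rng = np.random.default_rng(1)
W, N = 10, 4
V_in = rng.normal(size=(W, N))    # center ("input") vectors, one row per word
U_out = rng.normal(size=(W, N))   # context ("output") vectors, one row per word

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def full_softmax_prob(center_id, context_id):
    """Eq. (4): P(w_c | w_i), normalized over the entire vocabulary of |W| words."""
    scores = U_out @ V_in[center_id]          # one score per vocabulary word
    return np.exp(scores[context_id]) / np.sum(np.exp(scores))

def negative_sampling_loss(center_id, context_id, k=5):
    """Sigmoid-based objective of [20]: push the true pair up and k sampled
    'negative' words down, with no sum over the full vocabulary."""
    negatives = rng.choice(W, size=k)         # uniform here; real word2vec excludes the
                                              # true context word and uses unigram^0.75
    loss = -np.log(sigmoid(U_out[context_id] @ V_in[center_id]))
    loss -= np.sum(np.log(sigmoid(-U_out[negatives] @ V_in[center_id])))
    return loss

print(full_softmax_prob(0, 3), negative_sampling_loss(0, 3))
```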
Recommended publications
  • Semantic Relatedness and Taxonomic Word Embeddings
    Semantic Relatedness and Taxonomic Word Embeddings
    Magdalena Kacmajor (Innovation Exchange, IBM Ireland), John D. Kelleher (ADAPT Research Centre, Technological University Dublin), Filip Klubička (ADAPT Research Centre, Technological University Dublin) and Alfredo Maldonado (Underwriters Laboratories)
    Abstract: This paper connects a series of papers dealing with taxonomic word embeddings. It begins by noting that there are different types of semantic relatedness and that different lexical representations encode different forms of relatedness. A particularly important distinction within semantic relatedness is that of thematic versus taxonomic relatedness. Next, we present a number of experiments that analyse taxonomic embeddings that have been trained on a synthetic corpus that has been generated via a random walk over a taxonomy. These experiments demonstrate how the properties of the synthetic corpus, such as the percentage of rare words, are affected by the shape of the knowledge graph the corpus is generated from. Finally, we explore the interactions between the relative sizes of natural and synthetic corpora on the performance of embeddings when taxonomic and thematic embeddings are combined.
    1 Introduction
    Deep learning has revolutionised natural language processing over the last decade. A key enabler of deep learning for natural language processing has been the development of word embeddings. One reason for this is that deep learning intrinsically involves the use of neural network models and these models only work with numeric inputs. Consequently, applying deep learning to natural language processing first requires developing a numeric representation for words. Word embeddings provide a way of creating numeric representations that have proven to have a number of advantages over traditional numeric representations of language.
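The synthetic-corpus idea in the entry above can be pictured with a toy random walk over a small hand-made taxonomy: each walk is emitted as a pseudo-sentence and the resulting corpus can be fed to any embedding trainer (e.g. skip-gram). The graph, walk length, and corpus size are invented for illustration and do not reproduce the paper's actual procedure.

```python
import random

# A tiny hand-made taxonomy: each node maps to its neighbours (hypernyms/hyponyms).
taxonomy = {
    "animal": ["dog", "cat", "bird"],
    "dog": ["animal", "poodle", "beagle"],
    "cat": ["animal", "siamese"],
    "bird": ["animal", "sparrow"],
    "poodle": ["dog"], "beagle": ["dog"], "siamese": ["cat"], "sparrow": ["bird"],
}

def random_walk(start, length, rng):
    """Emit one pseudo-sentence by walking the taxonomy graph."""
    node, walk = start, [start]
    for _ in range(length - 1):
        node = rng.choice(taxonomy[node])
        walk.append(node)
    return walk

rng = random.Random(0)
synthetic_corpus = [random_walk(rng.choice(list(taxonomy)), 8, rng) for _ in range(1000)]
print(synthetic_corpus[0])
# Training skip-gram on these pseudo-sentences yields "taxonomic" embeddings whose
# neighbourhoods reflect graph proximity rather than thematic co-occurrence.
```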
  • Combining Word Embeddings with Semantic Resources
    AutoExtend: Combining Word Embeddings with Semantic Resources
    Sascha Rothe (LMU Munich) and Hinrich Schütze (LMU Munich)
    We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.
    1. Introduction
    Unsupervised methods for learning word embeddings are widely used in natural language processing (NLP). The only data these methods need as input are very large corpora. However, in addition to corpora, there are many other resources that are undoubtedly useful in NLP, including lexical resources like WordNet and Wiktionary and knowledge bases like Wikipedia and Freebase. We will simply refer to these as resources. In this article, we present AutoExtend, a method for enriching these valuable resources with embeddings for non-word objects they describe; for example, AutoExtend enriches WordNet with embeddings for synsets. The word embeddings and the new non-word embeddings live in the same vector space. Many NLP applications benefit if non-word objects described by resources—such as synsets in WordNet—are also available as embeddings.
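The sketch below is not AutoExtend itself (which learns synset embeddings through an encoder-decoder with additional constraints); it is only a naive baseline in the same spirit, placing each synset at the centroid of its member word vectors so that words and synsets end up in one shared vector space. The words, synset names, and vectors are toy assumptions.

```python
import numpy as np

# Pretend pre-trained word embeddings (toy 4-dimensional vectors).
word_vecs = {
    "bank":  np.array([0.9, 0.1, 0.0, 0.2]),
    "shore": np.array([0.8, 0.2, 0.1, 0.3]),
    "money": np.array([0.1, 0.9, 0.7, 0.0]),
}

# Synsets described by a lexical resource, given as lists of member words.
synsets = {
    "bank.n.01 (river bank)": ["bank", "shore"],
    "bank.n.02 (financial institution)": ["bank", "money"],
}

# Naive baseline: a synset vector is the centroid of its member word vectors.
synset_vecs = {s: np.mean([word_vecs[w] for w in members], axis=0)
               for s, members in synsets.items()}

for s, v in synset_vecs.items():
    print(s, np.round(v, 2))
```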
  • Word-Like Character N-Gram Embedding
    Word-like character n-gram embedding
    Geewook Kim, Kazuki Fukui and Hidetoshi Shimodaira (Department of Systems Science, Graduate School of Informatics, Kyoto University; Mathematical Statistics Team, RIKEN Center for Advanced Intelligence Project)
    Abstract: We propose a new word embedding method called word-like character n-gram embedding, which learns distributed representations of words by embedding word-like character n-grams. Our method is an extension of recently proposed segmentation-free word embedding, which directly embeds frequent character n-grams from a raw corpus. However, its n-gram vocabulary tends to contain too many non-word n-grams. We solved this problem by introducing an idea of expected word frequency. Compared to the previously proposed methods, our method can embed more words, along with the words that are not included in a given basic word dictionary. Since our method does not rely on word segmentation with rich word dictionaries, it is especially effective when the
    Table 1: Top-10 2-grams in Sina Weibo and 4-grams in Japanese Twitter (Experiment 1). Words are indicated by boldface and space characters are marked by ␣.
    Rank | FNE (Chinese) | FNE (Japanese) | WNE, Proposed (Chinese) | WNE, Proposed (Japanese)
    1 | ][ | wwww | 自己 | フォロー
    2 | 。␣ | !!!! | 。␣ | ありがと
    3 | !␣ | ありがと | ][ | wwww
    4 | .. | りがとう | 一个 | !!!!
    5 | ]␣ | ございま | 微博 | めっちゃ
    6 | 。。 | うござい | 什么 | んだけど
    7 | ,我 | とうござ | 可以 | うござい
    8 | !! | ざいます | 没有 | line
    9 | ␣我 | がとうご | 吗? | 2018
    10 | 了, | ください | 哈哈 | じゃない
    …tion tools are used to determine word boundaries in the raw corpus. However, these segmenters require rich dictionaries for accurate segmentation, which are expensive to prepare and not always available.
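The segmentation-free starting point mentioned above, collecting frequent character n-grams directly from raw text, can be sketched as follows; the toy corpus, n-gram lengths, and frequency cut-off are assumptions, and the paper's expected-word-frequency reweighting is not implemented here.

```python
from collections import Counter

raw_corpus = "ありがとうございます。フォローありがとうございます。"  # toy unsegmented text

def char_ngrams(text, n_values=(2, 3, 4)):
    """Yield every character n-gram of the requested sizes."""
    for n in n_values:
        for i in range(len(text) - n + 1):
            yield text[i:i + n]

counts = Counter(char_ngrams(raw_corpus))
min_count = 2
vocab = [g for g, c in counts.most_common() if c >= min_count]
print(vocab)   # frequent n-grams become the embedding vocabulary, which is why
               # many non-word n-grams can slip in without further filtering
```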
  • Learned in Speech Recognition: Contextual Acoustic Word Embeddings
    Learned in Speech Recognition: Contextual Acoustic Word Embeddings
    Shruti Palaskar, Vikas Raunak and Florian Metze (Carnegie Mellon University, Pittsburgh, PA, U.S.A.)
    Abstract: End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon. In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units. In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution. On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the …
    …model [10, 11, 12] trained for direct Acoustic-to-Word (A2W) speech recognition [13]. Using this model, we jointly learn to automatically segment and classify input speech into individual words, hence getting rid of the problem of chunking or requiring pre-defined word boundaries. As our A2W model is trained at the utterance level, we show that we can not only learn acoustic word embeddings, but also learn them in the proper context of their containing sentence. We also evaluate our contextual acoustic word embeddings on a spoken language understanding task, demonstrating that they can be useful in non-transcription downstream tasks. Our main contributions in this paper are the following: 1. We demonstrate the usability of attention not only for aligning words to acoustic frames without any forced alignment but also for constructing Contextual Acoustic Word Embeddings (CAWE).
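One rough way to picture the construction described in this entry, assuming the A2W encoder states and the decoder's per-word attention weights are available: form each contextual acoustic word embedding as the attention-weighted average of the encoder states. The shapes and the plain weighted average are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, num_words = 50, 8, 3                    # acoustic frames, encoder dim, decoded words

encoder_states = rng.normal(size=(T, d))      # one state per acoustic frame
attention = rng.random(size=(num_words, T))   # decoder attention over frames, per word
attention /= attention.sum(axis=1, keepdims=True)

# Contextual acoustic word embedding: attention-weighted average of encoder states.
cawe = attention @ encoder_states             # shape (num_words, d)
print(cawe.shape)
```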
  • Knowledge-Powered Deep Learning for Word Embedding
    Knowledge-Powered Deep Learning for Word Embedding
    Jiang Bian, Bin Gao, and Tie-Yan Liu (Microsoft Research) {jibian,bingao,tyliu}@microsoft.com
    Abstract: The basis of applying deep learning to solve natural language processing tasks is to obtain high-quality distributed representations of words, i.e., word embeddings, from large amounts of text data. However, text itself usually contains incomplete and ambiguous information, which makes it necessary to leverage extra knowledge to understand it. Fortunately, text itself already contains well-defined morphological and syntactic knowledge; moreover, the large amount of texts on the Web enables the extraction of plenty of semantic knowledge. Therefore, it makes sense to design novel deep learning algorithms and systems in order to leverage the above knowledge to compute more effective word embeddings. In this paper, we conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings. Our study explores these types of knowledge to define a new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning, respectively. Experiments on an analogical reasoning task, a word similarity task, and a word completion task have all demonstrated that knowledge-powered deep learning can enhance the effectiveness of word embedding.
    1 Introduction
    With the rapid development of deep learning techniques in recent years, it has drawn increasing attention to train complex and deep models on large amounts of data, in order to solve a wide range of text mining and natural language processing (NLP) tasks [4, 1, 8, 13, 19, 20].
  • Local Homology of Word Embeddings
    Local Homology of Word Embeddings
    Tadas Temčinas
    Abstract: Topological data analysis (TDA) has been widely used to make progress on a number of problems. However, it seems that TDA application in natural language processing (NLP) is in its infancy. In this paper we try to bridge the gap by arguing why TDA tools are a natural choice when it comes to analysing word embedding data. We describe a parallelisable unsupervised learning algorithm based on local homology of datapoints and show some experimental results on word embedding data. We see that local homology of datapoints in word embedding data contains some information that can potentially be used to solve the word sense disambiguation problem.
    Intuitively, stratification is a decomposition of a topological space into manifold-like pieces. When thinking about stratification learning and word embeddings, it seems intuitive that vectors of words corresponding to the same broad topic would constitute a structure, which we might hope to be a manifold. Hence, for example, by looking at the intersections between those manifolds or singularities on the manifolds (both of which can be recovered using local homology-based algorithms (Nanda, 2017)) one might hope to find vectors of homonyms like 'bank' (which can mean either a river bank, or a financial institution). This, in turn, has potential to help solve the word sense disambiguation (WSD) problem in NLP, which is pinning down a particular meaning of a word used in a sentence when the word has multiple meanings. In this work we present a clustering algorithm based on local homology, which is more relaxed than the stratification…
  • Word Embedding, Sense Embedding and Their Application to Word Sense Induction
    Word Embeddings, Sense Embeddings and their Application to Word Sense Induction
    Linfeng Song (The University of Rochester, Computer Science Department, Rochester, NY 14627). Area Paper, April 2016.
    Abstract: This paper investigates the cutting-edge techniques for word embedding, sense embedding, and our evaluation results on large-scale datasets. Word embedding refers to a kind of methods that learn a distributed dense vector for each word in a vocabulary. Traditional word embedding methods first obtain the co-occurrence matrix then perform dimension reduction with PCA. Recent methods use neural language models that directly learn word vectors by predicting the context words of the target word. Moving one step forward, sense embedding learns a distributed vector for each sense of a word. They either define a sense as a cluster of contexts where the target word appears or define a sense based on a sense inventory. To evaluate the performance of the state-of-the-art sense embedding methods, I first compare them on the dominant word similarity datasets, then compare them on my experimental settings. In addition, I show that sense embedding is applicable to the task of word sense induction (WSI). Actually we are the first to show that sense embedding methods are competitive on WSI by building sense-embedding-based systems that demonstrate highly competitive performances on the SemEval 2010 WSI shared task. Finally, I propose several possible future research directions on word embedding and sense embedding. The University of Rochester Computer Science Department supported this work.
    Contents: 1 Introduction; 2 Word Embedding; 2.1 Skip-gram Model …
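The "co-occurrence matrix followed by dimension reduction" recipe mentioned in this abstract can be sketched with truncated SVD standing in for PCA; the corpus, window size, and dimensionality are toy choices.

```python
import numpy as np

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Build a symmetric word-word co-occurrence matrix with a +/-2 word window.
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Dimension reduction via truncated SVD (PCA on the centered matrix is analogous).
U, S, _ = np.linalg.svd(C, full_matrices=False)
k = 2
embeddings = U[:, :k] * S[:k]                 # each row is a k-dimensional word vector
print({w: np.round(embeddings[idx[w]], 2) for w in vocab})
```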
  • Review on Parse Tree Generation in Natural Language Processing
    Review on Parse Tree Generation in Natural Language Processing
    International Journal of Application or Innovation in Engineering & Management (IJAIEM), ISSN 2319-4847. Special Issue for the National Conference on Recent Advances in Technology and Management for Integrated Growth 2013 (RATMIG 2013). Web site: www.ijaiem.org
    Manoj K. Vairalkar (Assistant Professor, Department of Information Technology, Gurunanak Institute of Engineering & Technology, Nagpur University, Maharashtra, India)
    Abstract: Natural Language Processing is a field of computer science concerned with the interactions between computers and human languages. Syntax and semantics play a very important role in Natural Language Processing. Processing natural language such as English has always been one of the central research issues of artificial intelligence. The concept of parsing is very important. In this, the sentence gets parsed into Noun Phrase and Verb Phrase modules. If further decomposition is possible, these modules are divided further. In this way, it helps to learn the meaning of a word.
    Keywords: Parse tree, Parser, syntax, semantics, etc.
    1. Introduction
    The ultimate objective of natural language processing is to allow people to communicate with computers in much the same way they communicate with each other. Natural language processing removes one of the key obstacles that keep some people from using computers. More specifically, natural language processing facilitates access to a database or a knowledge base, provides a friendly user interface, facilitates language translation and conversion, and increases user productivity by supporting English-like input. To parse a sentence, first the syntax and semantics of the sentence come into consideration.
  • Building a Treebank for French
    Building a Treebank for French
    Anne Abeillé, Lionel Clément, Alexandra Kinyon (TALaNa, Université Paris 7, 75251 Paris cedex 05, France; University of Pennsylvania, Philadelphia, USA)
    Abstract: Very few gold standard annotated corpora are currently available for French. We present an ongoing project to build a reference treebank for French starting with a tagged newspaper corpus of 1 million words (Abeillé et al., 1998), (Abeillé and Clément, 1999). Similarly to the Penn TreeBank (Marcus et al., 1993), we distinguish an automatic parsing phase followed by a second phase of systematic manual validation and correction. Similarly to the Prague treebank (Hajicova et al., 1998), we rely on several types of morphosyntactic and syntactic annotations for which we define extensive guidelines. Our goal is to provide a theory neutral, surface oriented, error free treebank for French. Similarly to the Negra project (Brants et al., 1999), we annotate both constituents and functional relations.
    1. The tagged corpus
    As reported in (Abeillé and Clément, 1999), we present the general methodology, the automatic tagging phase, the human validation phase and the final state of the tagged corpus.
    1.1. Methodology
    …pronoun (= him) or a weak clitic pronoun (= to him or to her), plus can either be a negative adverb (= not any more) or a simple adverb (= more). Inflectional morphology also has to be annotated since morphological endings are important for gathering constituents (based on agreement marks) and also because lots of forms in French are ambiguous with respect to mode, person, number or gender. For exam…
  • Learning Word Meta-Embeddings by Autoencoding
    Learning Word Meta-Embeddings by Autoencoding
    Cong Bao and Danushka Bollegala (Department of Computer Science, University of Liverpool)
    Abstract: Distributed word embeddings have shown superior performances in numerous Natural Language Processing (NLP) tasks. However, their performances vary significantly across different tasks, implying that the word embeddings learnt by those methods capture complementary aspects of lexical semantics. Therefore, we believe that it is important to combine the existing word embeddings to produce more accurate and complete meta-embeddings of words. We model the meta-embedding learning problem as an autoencoding problem, where we would like to learn a meta-embedding space that can accurately reconstruct all source embeddings simultaneously. Thereby, the meta-embedding space is enforced to capture complementary information in different source embeddings via a coherent common embedding space. We propose three flavours of autoencoded meta-embeddings motivated by different requirements that must be satisfied by a meta-embedding. Our experimental results on a series of benchmark evaluations show that the proposed autoencoded meta-embeddings outperform the existing state-of-the-art meta-embeddings in multiple tasks.
    1 Introduction
    Representing the meanings of words is a fundamental task in Natural Language Processing (NLP). A popular approach to represent the meaning of a word is to embed it in some fixed-dimensional
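A tiny linear autoencoder illustrates the formulation: two source embedding sets are concatenated, encoded into a shared meta-embedding space, and decoded so that both sources are reconstructed from it. The single linear layer, squared-error loss, and plain gradient descent are simplifications of the paper's three proposed variants, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d1, d2, d_meta = 100, 50, 30, 20

S1 = rng.normal(size=(n_words, d1))          # source embedding set 1
S2 = rng.normal(size=(n_words, d2))          # source embedding set 2
X = np.hstack([S1, S2])                      # concatenated sources, shape (n_words, d1+d2)

We = rng.normal(scale=0.1, size=(d1 + d2, d_meta))   # encoder weights
Wd = rng.normal(scale=0.1, size=(d_meta, d1 + d2))   # decoder weights
lr = 1e-3

for step in range(500):                      # gradient descent on the reconstruction error
    M = X @ We                               # meta-embeddings, shape (n_words, d_meta)
    err = M @ Wd - X                         # reconstruction residual
    grad_Wd = M.T @ err / n_words
    grad_We = X.T @ (err @ Wd.T) / n_words
    We -= lr * grad_We
    Wd -= lr * grad_Wd

meta_embeddings = X @ We                     # one shared space reconstructing both sources
print(meta_embeddings.shape)
```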
  • A Comparison of Word Embeddings and N-Gram Models for Dbpedia Type and Invalid Entity Detection †
    A Comparison of Word Embeddings and N-gram Models for DBpedia Type and Invalid Entity Detection
    Hanqing Zhou, Amal Zouaq and Diana Inkpen (School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada)
    This paper is an extended version of the conference paper: Hanqing Zhou, Amal Zouaq, and Diana Inkpen. DBpedia Entity Type Detection using Entity Embeddings and N-Gram Models. In Proceedings of the International Conference on Knowledge Engineering and Semantic Web (KESW 2017), Szczecin, Poland, 8–10 November 2017, pp. 309–322.
    Received: 6 November 2018; Accepted: 20 December 2018; Published: 25 December 2018
    Abstract: This article presents and evaluates a method for the detection of DBpedia types and entities that can be used for knowledge base completion and maintenance. This method compares entity embeddings with traditional N-gram models coupled with clustering and classification. We tackle two challenges: (a) the detection of entity types, which can be used to detect invalid DBpedia types and assign DBpedia types for type-less entities; and (b) the detection of invalid entities in the resource description of a DBpedia entity. Our results show that entity embeddings outperform n-gram models for type and entity detection and can contribute to the improvement of DBpedia's quality, maintenance, and evolution.
    Keywords: semantic web; DBpedia; entity embedding; n-grams; type identification; entity identification; data mining; machine learning
    1. Introduction
    The Semantic Web is defined by Berners-Lee et al.
  • Sentiment Analysis for Multilingual Corpora
    Sentiment Analysis for Multilingual Corpora
    Svitlana Galeshchuk, Julien Jourdan and Ju Qiu (Governance Analytics, PSL Research University / University Paris Dauphine, Place du Marechal de Lattre de Tassigny, 75016 Paris, France)
    Abstract: The paper presents a generic approach to the supervised sentiment analysis of social media content in foreign languages. The method proposes translating documents from the original language to English with Google's Neural Translation Model. The resulting texts are then converted to vectors by averaging the vectorial representation of words derived from a pre-trained Word2Vec English model. Testing the approach with several machine learning methods on Polish, Slovenian and Croatian Twitter corpora returns up to 86% classification accuracy on the out-of-sample data.
    …for different languages without polarity dictionaries using relatively small datasets. We address the challenges for sentiment analysis in Slavic languages by using the averaged vectors for each word in a document translated into English. The vectors derive from the Word2Vec model pre-trained on Google News. Researchers tend to use language-specific pre-trained Word2Vec models (e.g., a Word2Vec model pre-trained on a Wikipedia corpus in Greek, Swedish, etc.). On the contrary, we propose benefiting from Google's Neural Translation Model, translating the texts from other languages to English.
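The pipeline described in this entry (translate to English, average pre-trained word vectors, then classify) can be sketched as below, with a placeholder translation function and random stand-in vectors instead of the Google News Word2Vec model; scikit-learn's LogisticRegression stands in for the several classifiers the authors test.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 300

def translate_to_english(doc):
    """Hypothetical placeholder for a machine-translation call."""
    return doc

# Stand-in vectors; the described approach uses Word2Vec pre-trained on Google News.
word_vectors = {w: rng.normal(size=dim) for w in
                ["good", "great", "bad", "awful", "movie", "service"]}

def doc_vector(doc):
    """Average the vectors of known words in the (translated) document."""
    vecs = [word_vectors[w] for w in translate_to_english(doc).lower().split()
            if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

train_docs = ["good great movie", "great service", "bad awful movie", "awful service"]
train_labels = [1, 1, 0, 0]

clf = LogisticRegression().fit([doc_vector(d) for d in train_docs], train_labels)
print(clf.predict([doc_vector("good movie"), doc_vector("bad service")]))
```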