
ETNLP: a visual-aided systematic approach to select pre-trained embeddings for a downstream task

Xuan-Son Vu (1), Thanh Vu (2), Son N. Tran (3), Lili Jiang (1)
(1) Umeå University, Sweden; (2) The Australian E-Health Research Centre, CSIRO, Australia; (3) The University of Tasmania, Australia
{sonvx, lili.jiang}@cs.umu.se, [email protected], [email protected]

arXiv:1903.04433v2 [cs.CL] 3 Aug 2019

Abstract

Given the many recently proposed embedding models, selecting the pre-trained word embedding (a.k.a. word representation) models that best fit a specific downstream task is non-trivial. In this paper, we propose a systematic approach, called ETNLP, for extracting, evaluating, and visualizing multiple sets of pre-trained word embeddings to determine which embeddings should be used in a downstream task. We demonstrate the effectiveness of the proposed approach on our pre-trained word embedding models for Vietnamese by selecting which models are suitable for a named entity recognition (NER) task. Specifically, we create a large Vietnamese word analogy list to evaluate and select the pre-trained embedding models for the task. We then utilize the selected embeddings for the NER task and achieve new state-of-the-art results on the task's benchmark dataset. We also apply the approach to another downstream task, privacy-guaranteed embedding selection, and show that it helps users quickly select the most suitable embeddings. In addition, we release an open-source system implementing the proposed systematic approach to facilitate similar studies on other NLP tasks. The source code and data are available at https://github.com/vietnlp/etnlp.

1 Introduction

Word embedding, also known as word representation, represents a word as a vector capturing both syntactic and semantic information, so that words with similar meanings have similar vectors (Levy and Goldberg, 2014). Although classical embedding models such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017) have been shown to improve the performance of existing models in a variety of Natural Language Processing (NLP) tasks like parsing (Bansal et al., 2014), topic modeling (Nguyen et al., 2015), and document classification (Taddy, 2015; Vu et al., 2018b), they associate each word with a single vector, which makes it challenging to use that vector across linguistic contexts (Peters et al., 2018). To handle this problem, contextual embeddings (e.g., ELMO of Peters et al. (2018) and BERT of Devlin et al. (2018)) have recently been proposed and have helped existing models achieve new state-of-the-art results on many NLP tasks. Different from non-contextual embeddings, ELMO and BERT can capture different latent syntactic-semantic information of the same word based on its contextual uses. Therefore, for completeness, in this paper we incorporate both classical embeddings (i.e., Word2Vec, fastText) and contextual embeddings (i.e., ELMO, BERT) and evaluate their performance on downstream NLP tasks.

Given that there are many different types of word embedding models, we argue that having a systematic pipeline to evaluate, extract, and visualize word embeddings for a downstream NLP task is important but non-trivial. However, to our knowledge, there is no single comprehensive pipeline (or toolkit) which can perform all three tasks of evaluation, extraction, and visualization. For example, the recent framework flair (Akbik et al., 2018) supports training and stacking multiple embeddings but does not provide the whole pipeline of extraction, evaluation, and visualization.

In this paper, we propose ETNLP, a systematic pipeline to extract, evaluate, and visualize pre-trained embeddings on a specific downstream NLP task (hereafter the ETNLP pipeline). The ETNLP pipeline consists of three main components: extractor, evaluator, and visualizer. Based on the vocabulary set of a downstream task, the extractor extracts a subset of word embeddings for that set on which to run evaluation and visualization. The results from both the evaluator and the visualizer help researchers quickly select which embedding models should be used for the downstream NLP task. On the one hand, the evaluator gives a concrete comparison between multiple sets of word embeddings; on the other hand, the visualizer gives a sense of what type of information each set of embeddings preserves, given the constraint of the vocabulary size of the downstream task. We detail the three main components as follows.

[Figure 1: General process of the ETNLP pipeline, where S is the set of extracted embeddings used for evaluation and visualization of multiple embeddings on a downstream NLP task.]

• Extractor extracts a subset of pre-trained embeddings based on the vocabulary of a downstream task. Moreover, given multiple sets of pre-trained embeddings, how do we take advantage of a few or all of them? For instance, if people want to use character embeddings to handle the out-of-vocabulary (OOV) problem of a Word2Vec model, they have to implement their own extractor to combine the two different sets of embeddings. It is more complicated still when they want to evaluate the performance of each set of embeddings separately as well as the combination of the two sets. The extractor module provided in ETNLP fulfills those needs seamlessly in NLP applications (a minimal sketch follows this list).

• Evaluator evaluates the pre-trained embeddings for a downstream task. Specifically, given multiple sets of pre-trained embeddings, how do we choose the embeddings which will potentially work best for a specific downstream task (e.g., NER)? Mikolov et al. (2013) presented a large benchmark for embedding evaluation based on a series of analogies. However, the benchmark is only for English, and there is no publicly available large benchmark for low-resource languages like Vietnamese (Vu et al., 2014). Therefore, we propose a new evaluation metric for the word analogy task in Section 3 (the standard analogy scoring that such metrics build on is sketched after this list).

• Visualizer visualizes the embedding space of multiple sets of word embeddings. When given a new set of word embeddings, we need to get a sense of what kinds of information (e.g., syntactic or semantic) the model preserves. Specifically, we want to draw samples from the embedding set to see the semantic similarity between different words. To fulfill this requirement, we design two different visualization strategies to explore the embedding space: (1) side-by-side visualization and (2) interactive visualization.
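As a minimal illustration of the extractor's job, consider the sketch below. It is not ETNLP's actual API: the file names and the OOV fallback scheme are our assumptions. It pulls the task vocabulary out of a word2vec-format text file and falls back to a second embedding set, e.g. a subword-based one, for OOV words.

import numpy as np

def load_word2vec_txt(path):
    # Word2vec text format: a "count dim" header line, then "word v1 ... v_dim" lines.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the header line
        for line in f:
            parts = line.split()
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def extract_subset(vocab, primary, fallback=None):
    # Keep only the task vocabulary; back off to a second embedding set for OOV words.
    dim = len(next(iter(primary.values())))
    subset = {}
    for word in vocab:
        if word in primary:
            subset[word] = primary[word]
        elif fallback is not None and word in fallback:
            subset[word] = fallback[word]  # e.g., a subword-based vector for an OOV word
        else:
            subset[word] = np.zeros(dim, dtype=np.float32)  # last resort
    return subset

# Hypothetical file names; the vocabulary comes from the downstream task's data.
w2v = load_word2vec_txt("word2vec.vi.txt")
fasttext = load_word2vec_txt("fasttext.vi.txt")
embeddings = extract_subset({"hà_nội", "việt_nam", "vua"}, w2v, fallback=fasttext)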
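For background on analogy-based evaluation, the following sketch implements the standard 3CosAdd scoring of Mikolov et al. (2013), not the new metric of Section 3. The record format, 4-tuples of underscore-joined tokens such as nữ_hoàng, is our assumption about how multi-syllable words are tokenized.

import numpy as np

def analogy_accuracy(embeddings, records):
    # 3CosAdd: for each record (a, b, c, d), predict d as the word whose vector is
    # most cosine-similar to b - a + c, excluding the three question words.
    words = list(embeddings)
    index = {w: i for i, w in enumerate(words)}
    matrix = np.stack([embeddings[w] for w in words])
    matrix = matrix / np.maximum(np.linalg.norm(matrix, axis=1, keepdims=True), 1e-8)
    correct = total = 0
    for a, b, c, d in records:
        if not all(w in index for w in (a, b, c, d)):
            continue  # skip records containing out-of-vocabulary words
        total += 1
        query = matrix[index[b]] - matrix[index[a]] + matrix[index[c]]
        sims = matrix @ query
        sims[[index[a], index[b], index[c]]] = -np.inf  # exclude the question words
        if words[int(np.argmax(sims))] == d:
            correct += 1
    return correct / total if total else 0.0

# `embeddings` is a word -> vector dict, e.g. the subset from the extractor sketch above.
records = [("ông_nội", "bà_ngoại", "vua", "nữ_hoàng")]
accuracy = analogy_accuracy(embeddings, records)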
Al- the pre-trained embeddings on a specific down- though, the classical embedding models, such as stream NLP task (hereafter ETNLP pipeline). The Word2Vec (Mikolov et al., 2013), GloVe (Pen- ETNLP pipeline consists of three main compo- nington et al., 2014), fastText (Bojanowski et al., nents which are extractor, evaluator, and visual- 2017), have been shown to help improve the per- izer. Based on the vocabulary set within a down- formance of existing models in a variety of Nat- stream task, the extractor will extract a subset of ural Language Processing (NLP) tasks like pars- word embeddings for the set to run evaluation and visualization. The results from both evaluator The side-by-side visualization helps users com- and visualizer will help researchers quickly select pare the qualities of the word similarity list be- which embedding models should be used for the tween multiple embeddings (see figure5). It al- downstream NLP task. On the one hand, the eval- lows researchers to “zoom-out” and see at the uator gives a concrete comparison between multi- overview level what is the main difference be- ple sets of word embeddings. While, on the other tween multiple embeddings. Moreover, it can vi- hand, the visualizer will give the sense on what sualize large embeddings up to the memory size type of information each set of embeddings pre- of the running system. Regarding implementation, serves given the constraint of the vocabulary size we implemented this visualization from scratch of the downstream task. We detail the three main running on a lightweight webserver called Flask components as follows. (flask.pocoo.org). • Extractor extracts a subset of pre-trained For the interactive visualization, it helps re- embeddings based on the vocabulary size of a searchers “zoom-in” each embedding space to ex- downstream task. Moreover, given multiple sets of plore how each word is similar to the others. pre-trained embeddings, how do we get the ad- To do this, the well-known Embedding Projector vantage from a few or all of them? For instance, (projector.tensorflow.org) is employed if people want to use the character embedding to to explore the embedding space interactively. Un- handle the out-of-vocabulary (OOV) problem in like the side-by-side visualization, this interactive Word2Vec model, they have to implement their visualization can only visualize up to a certain own extractor to combine two different sets of amount of embedding vectors as long as the ten- embeddings. It is more complicated when they sor graph is less than 2GB. This is a big limitation want to evaluate the performance of either each of the interactive visualization approach, which set of embeddings separately or the combination we plan to improve in the near future. Finally, it of the two sets. The provided extractor module in is worth to mention that the visualization module ETNLP will fulfill those needs seamlessly to elab- is dynamic and it does not require to change any orate this process in NLP applications. codes when users want to visualize multiple pre- trained word embeddings. • Evaluator evaluates the pre-trained embed- To demonstrate the effectiveness of the ETNLP dings for a downstream task. Specifically, given pipeline, we employ it to a use case in Vietnamese. 
To demonstrate the effectiveness of the ETNLP pipeline, we apply it to a use case in Vietnamese. Evaluating pre-trained embeddings in Vietnamese is a challenge, as there is no publicly available large(1) lexical resource similar to the English word analogy list for evaluating the performance of pre-trained embeddings. Moreover, different from English, where every word in a word analogy record is a single syllable (e.g., grandfather | grandmother | king | queen), in Vietnamese there are many cases where only words formed from multiple syllables can represent a word analogy record (e.g., ông nội | bà ngoại | vua | nữ_hoàng). We propose a large word analogy list in Vietnamese which handles these problems. Having constructed that word analogy list, we train different embedding models, namely Word2Vec, fastText, ELMO, and BERT, on Vietnamese Wikipedia data to generate different sets of word embeddings. We then use the word analogy list to select suitable sets of embeddings for the named entity recognition (NER) task in Vietnamese. We achieve new state-of-the-art results on VLSP 2016, a Vietnamese benchmark dataset for the NER task.

(1) There are a couple of available datasets (Nguyen et al., 2018b), but they are small, containing only 400 words.

Our key contributions in this work are:

• We propose a systematic pipeline (ETNLP) to evaluate, extract, and visualize multiple sets of word embeddings on a downstream task.

• We release a large word analogy list in Vietnamese for evaluating multiple word embeddings.

• We train and release multiple sets of word embeddings for NLP tasks in Vietnamese, whose effectiveness is verified through new state-of-the-art results on a Vietnamese NER task.

... the word appears in the training corpus to generate embeddings for each of its occurrences. Then the final embedding vector is the average of all its occurrences.
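The averaging procedure this clipped passage describes, embedding every occurrence of a word in context and averaging the occurrence vectors, can be sketched as follows; the contextual encoder (e.g., ELMO) is left as a caller-supplied function and replaced here by a stub.

import numpy as np

def average_occurrence_vectors(sentences, word, embed_sentence):
    # embed_sentence(tokens) -> array of shape (len(tokens), dim), one contextual
    # vector per token; any callable with that contract works.
    occurrence_vectors = []
    for tokens in sentences:
        if word not in tokens:
            continue
        vectors = embed_sentence(tokens)
        occurrence_vectors.extend(vectors[i] for i, t in enumerate(tokens) if t == word)
    return np.mean(occurrence_vectors, axis=0) if occurrence_vectors else None

# Stub encoder standing in for a real contextual model (illustrative only).
stub = lambda tokens: np.ones((len(tokens), 4), dtype=np.float32)
vector = average_occurrence_vectors([["vua", "và", "nữ_hoàng"]], "vua", stub)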