Go Simple and Pre-Train on Domain-Specific Corpora: On the Role of Training Data for Text Classification

Aleksandra Edwards†, Jose Camacho-Collados†, Hélène de Ribaupierre†, Alun Preece‡
†School of Computer Science and Informatics, Cardiff University, United Kingdom
‡Crime and Security Research Institute, Cardiff University, United Kingdom
{edwardsai,camachocolladosj,deribaupierreh,[email protected]

Abstract

Pre-trained language models provide the foundations for state-of-the-art performance across a wide range of natural language processing tasks, including text classification. However, most classification datasets assume a large amount of labeled data, which is commonly not the case in practical settings. In particular, in this paper we compare the performance of a lightweight linear classifier based on word embeddings, i.e., fastText (Joulin et al., 2017), versus a pre-trained language model, i.e., BERT (Devlin et al., 2019), across a wide range of datasets and classification tasks. In general, results show the importance of domain-specific unlabeled data, both in the form of word embeddings and language models. As for the comparison, BERT outperforms all baselines in standard datasets with large training sets. However, in settings with small training datasets a simple method like fastText coupled with domain-specific word embeddings performs equally well or better than BERT, even when BERT is pre-trained on domain-specific data.

1 Introduction

Language models pre-trained on large amounts of text corpora form the foundation of today's NLP (Gururangan et al., 2020; Rogers et al., 2020). They have been shown to provide state-of-the-art performance on most standard NLP benchmarks (Wang et al., 2019a; Wang et al., 2019b). However, these models require large computational resources that are not always available and have important environmental implications (Strubell et al., 2019). Moreover, there is limited research on the applicability of pre-trained models to classification tasks with small amounts of labelled data. Some related studies (Lee et al., 2020; Nguyen et al., 2020; Huang et al., 2019; Alsentzer et al., 2019) investigate whether it is helpful to tailor a pre-trained model to the domain, while others (Sun et al., 2019; Chronopoulou et al., 2019; Radford et al., 2018) analyse methods for fine-tuning BERT to a given task. However, these studies perform evaluation on a limited range of datasets and classification models and do not consider scenarios with limited amounts of training data.

In particular, this paper aims to estimate the role of labeled and unlabeled data for supervised text classification. Our study is similar to Gururangan et al. (2020), who investigate whether it is still helpful to tailor a pre-trained model to the domain of a target task. In this paper, however, we focus our evaluation on text classification and compare different types of classifiers on different domains (social media, news and reviews). Unlike other tasks such as natural language inference or question answering that may require a subtle understanding, feature-based linear models are still considered to be competitive in text classification (Kowsari et al., 2019). However, to the best of our knowledge there has not been an extensive comparison between such methods and newer pre-trained language models.
To this end, we compare the light-weight linear classification model fastText (Joulin et al., 2017), coupled with generic and corpus-specific word embeddings, and the pre-trained language model BERT (Devlin et al., 2019), trained on generic data and domain-specific data. Specifically, we analyze the effect of training size on the performance of the classifiers in settings where such training data is limited, both in few-shot scenarios with a balanced set and when keeping the original label distributions. In both cases, our results show that a large pre-trained language model may not provide significant gains over a linear model that leverages word embeddings, especially when these belong to the given domain.

2 Supervised Text Classification

Given a sentence or a document, the task of text classification consists of associating it with a label from a pre-defined set. For example, in a simplified sentiment analysis setting the pre-defined labels could be positive, negative and neutral. In the following we describe standard linear methods and explain recent techniques based on neural models that we compare in our quantitative evaluation.

2.1 Supervised machine learning models

Linear models. Linear models such as SVMs or logistic regression coupled with frequency-based hand-crafted features have traditionally been used for text classification. Despite their simplicity, they are considered a strong baseline for many text classification tasks (Joachims, 1998; McCallum et al., 1998; Fan et al., 2008), even more recently on noisy corpora such as social media text (Çöltekin and Rama, 2018; Mohammad et al., 2018). In general, however, these methods tend to struggle with OOV (Out-Of-Vocabulary) words, fine-grained distinctions and unbalanced datasets. FastText (Joulin et al., 2017), which is the model evaluated in this paper, partially addresses these issues by integrating a linear model with a rank constraint, which allows sharing parameters among features and classes, and by integrating word embeddings that are averaged into a text representation.

Neural models. Neural models can learn non-linear and complex relationships, which makes them a preferable method for many NLP tasks such as sentiment analysis or question answering (Sun et al., 2019). In particular, LSTMs, sometimes combined with CNNs for text classification (Xiao and Cho, 2016; Pilehvar et al., 2017), capture long-range dependencies in a sequential manner where data is read from only one direction (referred to as the 'unidirectionality constraint'). Recent state-of-the-art language models, such as BERT (Devlin et al., 2019), overcome the unidirectionality constraint by using transformer-based masked language models to learn pre-trained deep bidirectional representations. These pre-trained models leverage generic knowledge learned from large unlabeled corpora and can then be fine-tuned on the specific task starting from the pre-trained parameters. BERT, which is the pre-trained language model tested in this paper, has been proved to provide state-of-the-art results in most standard NLP benchmarks (Wang et al., 2019b), including text classification.
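The fine-tuning setup described above can be illustrated with a minimal sketch using the Hugging Face transformers and datasets libraries. This is not the authors' exact pipeline; the checkpoint name, file paths, number of labels and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: fine-tuning a generic pre-trained BERT checkpoint for text
# classification. File names, label count and hyperparameters are placeholders.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "bert-base-uncased"   # generic pre-trained checkpoint
NUM_LABELS = 3                     # e.g., positive / negative / neutral

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS)

# Hypothetical CSV files with "text" and "label" columns.
data = load_dataset("csv", data_files={"train": "train.csv", "dev": "dev.csv"})

def tokenize(batch):
    # Truncate/pad sentences to a fixed length so examples can be batched.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-clf", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["dev"])
trainer.train()
```

Swapping the generic checkpoint for one pre-trained on in-domain text corresponds to the domain-specific BERT variant compared in the experiments.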
2.2 Pre-trained word embeddings and language models

Most state-of-the-art NLP models nowadays use unlabeled data in addition to labeled data to improve generalization (Goldberg, 2016). This comes in the form of word embeddings for fastText and a pre-trained language model for BERT.

Word embeddings. Word embeddings represent words in a vector space and are generally learned with shallow neural networks trained on text corpora, with Word2Vec (Mikolov et al., 2013) being one of the most popular and efficient approaches. A more recent model based on the Word2Vec architecture is fastText (Bojanowski et al., 2017), where words are additionally represented as the sum of character n-gram vectors. This allows building vectors for rare words, misspelt words or concatenations of words.

Language models. A limitation of the word embedding models described above is that they produce a single vector per word regardless of the context in which it appears. In contrast, contextualized embeddings such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019) produce word representations that are dynamically informed by the words around them. The main drawback of these models, however, is that they are computationally very demanding, as they are generally based on large transformer-based language models (Strubell et al., 2019).

3 Experimental Setting

Datasets. For our experiments we selected a suite of datasets of different domains and nature. These are: the SemEval 2016 task on sentiment analysis (Nakov et al., 2019), the SemEval 2018 task on emoji prediction (Barbieri et al., 2018), AG News (Zhang et al., 2015), 20 Newsgroups (Lang, 1995) and IMDB (Maas et al., 2011). The main features and statistics of each dataset are summarized in Table 1.

| Dataset | Task | Domain | Type | Avg tokens | Labels | # Train | # Dev | # Test |
|---|---|---|---|---|---|---|---|---|
| SemEval-16 (SA) | Sentiment analysis | Twitter | Sentence | 20 | 3 | 5,937 | 1,386 | 20,806 |
| SemEval-18 (EP) | Emoji prediction | Twitter | Sentence | 12 | 20 | 500,000 | 1,000 | 49,998 |
| AG News | Topic categorization | News | Sentence | 31 | 4 | 114,828 | 624 | 5,612 |
| 20 Newsgroups | Topic categorization | Newsgroups | Document | 285 | 20 | 11,231 | 748 | 6,728 |
| IMDB | Polarity detection | Movie reviews | Document | 231 | 2 | 28,000 | 2,560 | 23,041 |

Table 1: Overview of the classification datasets used in our experiments.

Comparison models. As mentioned in Section 2, our evaluation focuses on fastText (Joulin et al., 2017, FT) and BERT (Devlin et al., 2019). For completeness we include a simple baseline based on frequency-based features and a suite of classification algorithms available in the Scikit-Learn library (Pedregosa et al., 2011), namely Gaussian Naive Bayes (GNB), Logistic Regression and Support Vector Machines (SVM). Of the three, the best results were achieved with Logistic Regression, which is the model we include in this paper as a baseline for our experiments.

Training. As pre-trained word embeddings we downloaded 300-dimensional fastText embeddings trained on Common Crawl (Bojanowski et al., 2017). In order to learn domain-specific word embedding models we used the corresponding training sets for each dataset, except for the Twitter datasets, for which we leveraged an existing collection of unlabeled tweets from October 2015 to July 2018 to train 300-dimensional fastText embeddings (Camacho Collados et al., 2020). Word embeddings are then fed as input to a fastText classifier, for which we used default parameters and softmax as the loss function.
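A minimal sketch of this fastText pipeline is shown below: learn domain-specific embeddings from an unlabeled in-domain corpus, then feed them into a supervised fastText classifier with softmax loss. The file names, the export-to-.vec helper and the skipgram choice are assumptions for illustration; the authors' exact preprocessing and corpora are not reproduced here.

```python
# Sketch: domain-specific fastText embeddings + fastText supervised classifier.
# File names are illustrative placeholders.
import fasttext

# 1) Train 300-dimensional embeddings on raw, unlabeled in-domain text
#    (one document per line), matching the dimensionality of the generic
#    Common Crawl vectors used as the alternative.
emb = fasttext.train_unsupervised("unlabeled_domain_corpus.txt",
                                  model="skipgram", dim=300)

# fastText's supervised mode loads pre-trained vectors from a .vec text file,
# so export the learned embeddings in that format.
def save_vec(model, path):
    words = model.get_words()
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"{len(words)} {model.get_dimension()}\n")
        for w in words:
            vec = " ".join(f"{x:.4f}" for x in model.get_word_vector(w))
            f.write(f"{w} {vec}\n")

save_vec(emb, "domain.vec")

# 2) Supervised classifier with default parameters and softmax loss.
#    train.txt holds one example per line in fastText's labeled format,
#    e.g. "__label__positive great movie , really enjoyed it".
clf = fasttext.train_supervised(input="train.txt", dim=300, loss="softmax",
                                pretrainedVectors="domain.vec")

labels, probs = clf.predict("the plot was dull and predictable")
print(labels[0], probs[0])
```

Pointing `pretrainedVectors` at the downloaded Common Crawl .vec file instead of `domain.vec` yields the generic-embedding configuration compared in the experiments.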
