
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks

Suchin Gururangan† Ana Marasović†♢ Swabha Swayamdipta† Kyle Lo† Iz Beltagy† Doug Downey† Noah A. Smith†♢
†Allen Institute for Artificial Intelligence, Seattle, WA, USA
♢Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
{suching,anam,swabhas,kylel,beltagy,dougd,noahs}@allenai.org

arXiv:2004.10964v3 [cs.CL] 5 May 2020

Abstract

Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multi-phase adaptive pretraining offers large gains in task performance.

Figure 1: An illustration of data distributions. Task data is comprised of an observable task distribution, usually non-randomly sampled from a wider distribution (light grey ellipsis) within an even larger target domain, which is not necessarily one of the domains included in the original LM pretraining domain – though overlap is possible. We explore the benefits of continued pretraining on data from the task distribution and the domain distribution.

1 Introduction

Today's pretrained language models are trained on massive, heterogeneous corpora (Raffel et al., 2019; Yang et al., 2019). For instance, RoBERTa (Liu et al., 2019) was trained on over 160GB of uncompressed text, with sources ranging from English-language encyclopedic and news articles, to literary works and web content. Representations learned by such models achieve strong performance across many tasks with datasets of varying sizes drawn from a variety of sources (e.g., Wang et al., 2018, 2019). This leads us to ask whether a task's textual domain—a term typically used to denote a distribution over language characterizing a given topic or genre (such as "science" or "mystery novels")—is still relevant. Do the latest large pretrained models work universally, or is it still helpful to build separate pretrained models for specific domains?

While some studies have shown the benefit of continued pretraining on domain-specific unlabeled data (e.g., Lee et al., 2019), these studies only consider a single domain at a time and use a language model that is pretrained on a smaller and less diverse corpus than the most recent language models. Moreover, it is not known how the benefit of continued pretraining may vary with factors like the amount of available labeled task data, or the proximity of the target domain to the original pretraining corpus (see Figure 1).

We address this question for one such high-performing model, RoBERTa (Liu et al., 2019) (§2). We consider four domains (biomedical and computer science publications, news, and reviews; §3) and eight classification tasks (two in each domain).
For targets that are not already in-domain for RoBERTa, our experiments show that continued pretraining on the domain (which we refer to as domain-adaptive pretraining or DAPT) consistently improves performance on tasks from the target domain, in both high- and low-resource settings.

Above, we consider domains defined around genres and forums, but it is also possible to induce a domain from a given corpus used for a task, such as the one used in supervised training of a model. This raises the question of whether pretraining on a corpus more directly tied to the task can further improve performance. We study how domain-adaptive pretraining compares to task-adaptive pretraining, or TAPT, on a smaller but directly task-relevant corpus: the unlabeled task dataset (§4), drawn from the task distribution. Task-adaptive pretraining has been shown effective (Howard and Ruder, 2018), but is not typically used with the most recent models. We find that TAPT provides a large performance boost for RoBERTa, with or without domain-adaptive pretraining.

Finally, we show that the benefits from task-adaptive pretraining increase when we have additional unlabeled data from the task distribution that has been manually curated by task designers or annotators. Inspired by this success, we propose ways to automatically select additional task-relevant unlabeled text, and show how this improves performance in certain low-resource cases (§5). On all tasks, our results using adaptive pretraining techniques are competitive with the state of the art.

In summary, our contributions include:
• a thorough analysis of domain- and task-adaptive pretraining across four domains and eight tasks, spanning low- and high-resource settings;
• an investigation into the transferability of adapted LMs across domains and tasks; and
• a study highlighting the importance of pretraining on human-curated datasets, and a simple data selection strategy to automatically approach this performance.

Our code as well as pretrained models for multiple domains and tasks are publicly available at https://github.com/allenai/dont-stop-pretraining.

2 Background: Pretraining

Learning for most NLP research systems since 2018 consists of training in two stages. First, a neural language model (LM), often with millions of parameters, is trained on large unlabeled corpora. The word (or wordpiece; Wu et al. 2016) representations learned in the pretrained model are then reused in supervised training for a downstream task, with optional updates (fine-tuning) of the representations and network from the first stage.

One such pretrained LM is RoBERTa (Liu et al., 2019), which uses the same transformer-based architecture (Vaswani et al., 2017) as its predecessor, BERT (Devlin et al., 2019). It is trained with a masked language modeling objective (i.e., cross-entropy loss on predicting randomly masked tokens). The unlabeled pretraining corpus for RoBERTa contains over 160 GB of uncompressed raw text from different English-language corpora (see Appendix §A.1). RoBERTa attains better performance on an assortment of tasks than its predecessors, making it our baseline of choice.

Although RoBERTa's pretraining corpus is derived from multiple sources, it has not yet been established if these sources are diverse enough to generalize to most of the variation in the English language. In other words, we would like to understand what is out of RoBERTa's domain. Towards this end, we explore further adaptation by continued pretraining of this large LM on two categories of unlabeled data: (i) large corpora of domain-specific text (§3), and (ii) available unlabeled data associated with a given task (§4).
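To make the masked LM objective concrete, the following is a minimal sketch, not RoBERTa's actual implementation: it assumes a generic `encoder` that maps token ids to per-token vocabulary logits, and it simplifies the corruption scheme (in practice a fraction of the selected tokens are left unchanged or replaced with random tokens rather than the mask token).

```python
import torch
import torch.nn.functional as F

def masked_lm_loss(encoder, token_ids, mask_token_id, mask_prob=0.15):
    """Cross-entropy on predicting randomly masked tokens.

    `encoder` is assumed to map (batch, seq_len) token ids to
    (batch, seq_len, vocab_size) logits.
    """
    labels = token_ids.clone()
    # Choose ~15% of positions as prediction targets; ignore the rest in the loss.
    selected = torch.rand(token_ids.shape, device=token_ids.device) < mask_prob
    labels[~selected] = -100                 # -100 is ignored by cross_entropy
    corrupted = token_ids.clone()
    corrupted[selected] = mask_token_id      # replace selected tokens with <mask>
    logits = encoder(corrupted)              # (batch, seq_len, vocab_size)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           labels.reshape(-1), ignore_index=-100)
```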
3 Domain-Adaptive Pretraining

Our approach to domain-adaptive pretraining (DAPT) is straightforward—we continue pretraining RoBERTa on a large corpus of unlabeled domain-specific text. The four domains we focus on are biomedical (BIOMED) papers, computer science (CS) papers, news text from REALNEWS, and AMAZON reviews. We choose these domains because they have been popular in previous work, and datasets for text classification are available in each. Table 1 lists the specifics of the unlabeled datasets in all four domains, as well as RoBERTa's training corpus. (For BIOMED and CS, we used an internal version of S2ORC that contains papers that cannot be released due to copyright restrictions.)

Domain              Pretraining Corpus                                    # Tokens  Size   L_ROB.  L_DAPT
BIOMED              2.68M full-text papers from S2ORC (Lo et al., 2020)   7.55B     47GB   1.32    0.99
CS                  2.22M full-text papers from S2ORC (Lo et al., 2020)   8.10B     48GB   1.63    1.34
NEWS                11.90M articles from REALNEWS (Zellers et al., 2019)  6.66B     39GB   1.08    1.16
REVIEWS             24.75M AMAZON reviews (He and McAuley, 2016)          2.11B     11GB   2.10    1.93
RoBERTa (baseline)  see Appendix §A.1                                     N/A       160GB  ‡1.19   -

Table 1: List of the domain-specific unlabeled datasets. In columns 5 and 6, we report RoBERTa's masked LM loss on 50K randomly sampled held-out documents from each domain before (L_ROB.) and after (L_DAPT) DAPT (lower implies a better fit on the sample). ‡ indicates that the masked LM loss is estimated on data sampled from sources similar to RoBERTa's pretraining corpus.

3.1 Analyzing Domain Similarity

Before performing DAPT, we attempt to quantify the similarity of the target domain to RoBERTa's pretraining domain. We consider domain vocabularies containing the top 10K most frequent unigrams (excluding stopwords) in comparably sized samples from each domain; a rough sketch of this overlap computation appears at the end of this section.

Vocabulary overlap (%) between domains (PT denotes a sample from sources similar to RoBERTa's pretraining corpus):

            PT     News   Reviews  BioMed  CS
PT          100.0  54.1   34.5     27.3    19.2
News        54.1   100.0  40.0     24.9    17.3
Reviews     34.5   40.0   100.0    18.3    12.7
BioMed      27.3   24.9   18.3     100.0   21.4
CS          19.2   17.3   12.7     21.4    100.0

3.2 Experiments

Our LM adaptation follows the settings prescribed for training RoBERTa. We train RoBERTa on each domain for 12.5K steps, which amounts to a single pass over each domain dataset, on a v3-8 TPU; see other details in Appendix B. This second phase of pretraining results in four domain-adapted LMs, one for each domain. We present the masked LM loss of RoBERTa on each domain before and after DAPT in Table 1.
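In practical terms, DAPT is a second round of masked-LM training started from the released RoBERTa checkpoint. The sketch below shows one way to set this up with the Hugging Face transformers and datasets libraries; the corpus path, batch size, and learning rate are placeholders rather than the paper's settings (the authors follow RoBERTa's original pretraining hyperparameters and train for 12.5K steps on a v3-8 TPU).

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, RobertaForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Unlabeled domain corpus, one document per line (placeholder path).
corpus = load_dataset("text", data_files={"train": "biomed_corpus.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are selected for prediction in each batch.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="dapt-biomed",
    max_steps=12_500,                 # the paper trains each domain-adapted LM for 12.5K steps
    per_device_train_batch_size=8,    # placeholder, not the paper's batch size
    learning_rate=1e-4)               # placeholder

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```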
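The vocabulary comparison in §3.1 can be reproduced roughly as follows. This is a sketch under stated assumptions: a simple regex tokenizer, a toy stopword list, and overlap measured as the share of one top-10K vocabulary that appears in the other; the authors' exact tokenization, stopword list, and normalization may differ.

```python
from collections import Counter
import re

# Toy stopword list for illustration; the authors' list is likely larger.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "are", "for", "on", "with", "that", "this", "as", "it"}

def top_unigrams(documents, k=10_000):
    """Top-k most frequent unigrams (excluding stopwords) in a domain sample."""
    counts = Counter()
    for doc in documents:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return {word for word, _ in counts.most_common(k)}

def vocab_overlap_pct(vocab_a, vocab_b):
    """Share (%) of vocabulary A that also appears in vocabulary B."""
    return 100.0 * len(vocab_a & vocab_b) / len(vocab_a)

# Usage (placeholder document lists):
#   news_vocab = top_unigrams(news_docs)
#   biomed_vocab = top_unigrams(biomed_docs)
#   print(vocab_overlap_pct(news_vocab, biomed_vocab))
```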
