Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

Sascha Rothe, Shashi Narayan, Aliaksei Severyn
Google Research

Transactions of the Association for Computational Linguistics, vol. 8, pp. 264–280, 2020. https://doi.org/10.1162/tacl_a_00313. Action Editor: Wenjie (Maggie) Li. Submission batch: 9/2019; Revision batch: 12/2019; Published 6/2020. © 2020 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

Abstract

Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion.

1 Introduction

Unsupervised and self-supervised pre-training methods, such as ELMo (Peters et al., 2018), ULMFiT (Howard and Ruder, 2018), and more recently BERT (Devlin et al., 2019), GPT and GPT-2 (Radford et al., 2018, 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019), have established a qualitatively new level of baseline performance for many widely used Natural Language Understanding (NLU) benchmarks, including some of the most popular, like GLUE (Williams et al., 2018) and SQuAD (Rajpurkar et al., 2018).

The most appealing part of this massive shift towards using large architectures pre-trained on large collections of texts is that the pre-trained checkpoints, along with the inference code, are made freely available. This saves hundreds of TPU/GPU hours, as warm-starting a model from a pre-trained checkpoint typically requires orders of magnitude fewer fine-tuning steps while delivering significant performance boosts. More importantly, the ability to bootstrap from a state-of-the-art model such as BERT (Devlin et al., 2019) motivates the community to greatly speed up progress towards developing better and easily reusable NLU systems.

While we continue to observe an increasing number of papers building on top of BERT and/or GPT models and reporting encouraging improvements on GLUE, SQuAD, and other similar benchmarks, very little attention has been paid to using these pre-trained models to warm-start sequence-to-sequence (seq2seq) models. It has been argued that the pre-training objective used by BERT is not well-suited for tasks that require decoding text, for example, conditional text generation in machine translation and summarization (Yang et al., 2019). Nevertheless, it remains unclear to what extent such large models pre-trained on large collections of text can be beneficial for warm-starting seq2seq generation models.

In this paper, we report on a Transformer-based seq2seq model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints. We aim to provide an empirical answer to the following research question: what is the best way to leverage publicly available pre-trained checkpoints for warm-starting sequence generation models? For example, one could imagine using a BERT checkpoint to initialize the encoder for better input understanding and a GPT-2 model as the decoder for better text generation. One of the main contributions of this paper is that we rigorously experiment with a large number of different settings to combine BERT, GPT, and RoBERTa pre-trained checkpoints to initialize our Transformer-based model. We report results on three canonical conditional text generation tasks of increasing complexity: sentence-level fusion (DiscoFuse, Geva et al., 2019) and splitting (WikiSplit, Botha et al., 2018); WMT14 En↔De machine translation using the most common eval sets, newstest2014 and newstest2016; and abstractive summarization using three datasets: Gigaword (Napoles et al., 2012), CNN and DailyMail (Hermann et al., 2015), and BBC extreme (Narayan et al., 2018a).

Our models report significant improvements over randomly initialized models, demonstrating the benefit of leveraging unsupervised pre-trained models. More importantly, this simple strategy results in new state-of-the-art results on machine translation, text summarization, sentence splitting, and sentence fusion. Our results also demonstrate that a pre-trained encoder is an essential component for sequence generation tasks and that these tasks often benefit from sharing the weights between the encoder and the decoder. Overall, we have run over 300 experiments, spending thousands of TPU v3 hours, to better leverage the language modeling and understanding capabilities of these pre-trained models for text generation. We believe that NLP researchers and practitioners will derive actionable insights from our findings when tackling various seq2seq tasks.

The code to query our models and predictions on various benchmarks will be available at https://github.com/google-research/google-research/tree/master/bertseq2seq.

2 Models and Pre-trained Checkpoints

BERT was primarily developed for encoding text representations for NLU tasks (an encoder-only architecture), whereas GPT-2 (Radford et al., 2019) was primarily developed as a decoder-only architecture for language modeling. Our model uses a seq2seq architecture with encoder and decoder both composed of Transformer layers (Vaswani et al., 2017). For the encoder, we inherit the BERT Transformer layer implementation (Devlin et al., 2019), which differs slightly from the canonical Transformer layer (Vaswani et al., 2017): BERT uses a GELU activation (Hendrycks and Gimpel, 2016) rather than the standard RELU. If not stated otherwise, the implementation of the decoder layers is also identical to the BERT implementation, with two adjustments. First, the self-attention mechanism is masked to look only at the left context. Second, we add an encoder-decoder attention mechanism. Note that if the model was randomly initialized, we found no difference between a BERT-compatible decoder and a GPT-2-compatible decoder.
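To make the two decoder adjustments concrete, the following is a minimal sketch of such a decoder block in PyTorch. It is an illustration only, not the released TensorFlow implementation: the class name, the normalization placement, and the absence of dropout are simplifications assumed here, and the dimensions simply follow the base configuration described below.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """BERT-style Transformer block with the two decoder adjustments:
    (1) causally masked self-attention, (2) encoder-decoder attention.
    A simplified sketch; the real implementation also has dropout and
    BERT's exact residual/LayerNorm arrangement."""

    def __init__(self, hidden_size=768, num_heads=12, filter_size=3072):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        # BERT uses a GELU activation in the feed-forward sublayer, not RELU.
        self.ffn = nn.Sequential(
            nn.Linear(hidden_size, filter_size),
            nn.GELU(),
            nn.Linear(filter_size, hidden_size),
        )
        self.norm1 = nn.LayerNorm(hidden_size)
        self.norm2 = nn.LayerNorm(hidden_size)
        self.norm3 = nn.LayerNorm(hidden_size)

    def forward(self, x, encoder_out):
        # Adjustment 1: mask self-attention so position i only sees positions <= i.
        seq_len = x.size(1)
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1)
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + h)
        # Adjustment 2: attend over the encoder outputs (encoder-decoder attention).
        h, _ = self.cross_attn(x, encoder_out, encoder_out)
        x = self.norm2(x + h)
        return self.norm3(x + self.ffn(x))
```

Apart from the causal mask and the cross-attention sublayer, such a block mirrors a BERT encoder block, which is what makes checkpoint-compatible initialization of the decoder possible.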
Most of the models use the base checkpoint and therefore have 12 layers, a hidden size of 768, filter size of 3,072, and 12 attention heads. We chose the best-performing model and also collect numbers using larger pre-trained checkpoints. These models have 24 layers, a hidden size of 1,024, filter size of 4,096, and 16 attention heads.

All models were fine-tuned on the target task using Adam with a learning rate of 0.05. We used a linear learning rate warmup with 40k steps, normalization by the square root of the hidden size, and a square root decay. We did not perform any tuning of these hyperparameters (except for §5). The batch size and the number of training steps will be reported for each task individually.
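Taken together, these settings correspond to the standard Transformer learning-rate schedule: a linear warmup to the peak rate followed by square-root decay, with the whole curve scaled by the inverse square root of the hidden size. The exact formula is not spelled out in the text, so the sketch below is an assumption that combines the stated pieces in the usual way.

```python
import math

def transformer_lr(step, base_lr=0.05, hidden_size=768, warmup_steps=40_000):
    """Learning rate with linear warmup, square-root decay, and scaling by the
    inverse square root of the hidden size. How the paper combines these three
    pieces exactly is not stated, so this follows the standard Transformer
    ("Noam") schedule as an assumption."""
    step = max(step, 1)
    scale = base_lr / math.sqrt(hidden_size)
    return scale * min(step * warmup_steps ** -1.5, step ** -0.5)

# The peak rate is reached at step == warmup_steps:
# transformer_lr(40_000) == 0.05 / sqrt(768) / sqrt(40_000), roughly 9e-6.
```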
BERT Checkpoints. We tokenize our text using WordPiece (Wu et al., 2016) to match the BERT pre-trained vocabulary. Depending on the experiment, we use one of the following publicly available checkpoints: BERT-Base Cased, BERT-Base Uncased, or BERT-Base Multilingual Cased (Devlin et al., 2019).[1] The first two checkpoints have a vocabulary size of ∼30k wordpieces, whereas the multilingual checkpoint has a much larger vocabulary size of ∼110k. BERT also trains positional embeddings for up to 512 positions, which is the maximum input and output length in all experiments.

GPT-2 Checkpoints. We tokenize our text using SentencePiece (Kudo and Richardson, 2018) to match the GPT-2 pre-trained vocabulary.[2] Note that, although the available checkpoint is frequently called 117M, which suggests the same number of parameters, we count 125M parameters in the checkpoint. This is the smallest architecture they trained, and the number of layers, hidden size, and filter size are comparable to BERT-Base. The model was trained mainly on English data but does contain some foreign language. The vocabulary size is ∼50k. While GPT-2 has positional embeddings for up to 1,024 positions, we only use the first 512 to make the results comparable with BERT.

RoBERTa Checkpoints. RoBERTa (Liu et al., 2019) is trained using PyTorch, but we found that the learned parameters are fully compatible …

[1] BERT checkpoints are available at https://github.com/google-research/bert.
[2] GPT-2 checkpoints are available at https://github.com/openai/gpt-2.

Table 1 gives an overview of the parameter budget of each setup.

Model          total   embed.   init.   random
RND2RND        221M    23M      0       221M
BERT2RND       221M    23M      109M    112M
RND2BERT       221M    23M      109M    26M
BERT2BERT      221M    23M      195M    26M
BERTSHARE      136M    23M      109M    26M
ROBERTASHARE   152M    39M      125M    26M
GPT            125M    39M      125M    0
RND2GPT        238M    39M      125M    114M
BERT2GPT       260M    62M      234M    26M
ROBERTA2GPT    276M    78M      250M    26M

Table 1: The number of total trainable parameters, embedding parameters, and parameters initialized from a checkpoint vs. randomly.

The setups include, among others:

RND2BERT: A randomly initialized encoder paired with a BERT-initialized decoder. To perform autoregressive decoding, we mask the bidirectional self-attention mechanism of BERT to look only at the left context.

BERT2BERT: A BERT-initialized encoder paired with a BERT-initialized decoder. All weights are initialized from a public BERT checkpoint. The only variable that is initialized randomly is the encoder-decoder attention.

BERTSHARE: Like BERT2BERT, but the parameters between encoder and decoder are shared.
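The released code for this paper is TensorFlow-based (the bertseq2seq repository linked above). Purely as an illustration of how such warm-started combinations can be assembled, the sketch below uses the Hugging Face transformers library's EncoderDecoderModel; the checkpoint names ("bert-base-cased", "gpt2") and the tie_encoder_decoder flag belong to that library and are assumptions of this sketch, not part of the paper's setup.

```python
# Illustrative warm-starting with the Hugging Face `transformers` library,
# not the paper's released TensorFlow code.
from transformers import EncoderDecoderModel

# BERT2BERT-style: encoder and decoder both initialized from a BERT checkpoint;
# the encoder-decoder (cross-)attention has no counterpart in the checkpoint
# and is therefore initialized randomly.
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "bert-base-cased")

# BERTSHARE-style: additionally tie the encoder and decoder parameters.
bertshare = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "bert-base-cased", tie_encoder_decoder=True)

# BERT2GPT-style: a BERT encoder combined with a GPT-2 decoder
# (note that the two sides then use different vocabularies).
bert2gpt = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "gpt2")

# Rough analogue of the "total" column in Table 1.
print(sum(p.numel() for p in bertshare.parameters()))
```

Sharing the encoder and decoder parameters is what brings the total in Table 1 down from 221M for BERT2BERT to 136M for BERTSHARE.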
