Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding

16 pages · PDF · 1020 KB

ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding

Dongling Xiao, Yu-Kun Li, Han Zhang, Yu Sun, Hao Tian, Hua Wu and Haifeng Wang
Baidu Inc., China
{xiaodongling, liyukun01, zhanghan17, sunyu02, tianhao, wu hua, [email protected]}

Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021), pages 1702–1715, June 6–11, 2021. ©2021 Association for Computational Linguistics.

Abstract

Coarse-grained linguistic information, such as named entities or phrases, facilitates adequate representation learning in pre-training. Previous works mainly focus on extending the objective of BERT's Masked Language Modeling (MLM) from masking individual tokens to contiguous sequences of n tokens. We argue that such a contiguous masking method neglects to model the intra-dependencies and inter-relations of coarse-grained linguistic information. As an alternative, we propose ERNIE-Gram, an explicitly n-gram masking method to enhance the integration of coarse-grained information into pre-training. In ERNIE-Gram, n-grams are masked and predicted directly using explicit n-gram identities rather than contiguous sequences of n tokens. Furthermore, ERNIE-Gram employs a generator model to sample plausible n-gram identities as optional n-gram masks and predicts them in both coarse-grained and fine-grained manners to enable comprehensive n-gram prediction and relation modeling. We pre-train ERNIE-Gram on English and Chinese text corpora and fine-tune on 19 downstream tasks. Experimental results show that ERNIE-Gram outperforms previous pre-training models like XLNet and RoBERTa by a large margin, and achieves comparable results with state-of-the-art methods. The source codes and pre-trained models have been released at https://github.com/PaddlePaddle/ERNIE.

1 Introduction

Pre-trained on large-scale text corpora and fine-tuned on downstream tasks, self-supervised representation models (Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Lan et al., 2020; Clark et al., 2020) have achieved remarkable improvements in natural language understanding (NLU). As one of the most prominent pre-trained models, BERT (Devlin et al., 2019) employs masked language modeling (MLM) to learn representations by masking individual tokens and predicting them based on their bidirectional context. However, BERT's MLM focuses on the representations of fine-grained text units (e.g. words or subwords in English and characters in Chinese), rarely considering the coarse-grained linguistic information (e.g. named entities or phrases in English and words in Chinese), thus incurring inadequate representation learning.

Many efforts have been devoted to integrating coarse-grained semantic information by independently masking and predicting contiguous sequences of n tokens, namely n-grams, such as named entities, phrases (Sun et al., 2019b), whole words (Cui et al., 2019) and text spans (Joshi et al., 2020). We argue that such contiguous masking strategies are less effective and reliable, since the predictions of the tokens in a masked n-gram are independent of each other, which neglects the intra-dependencies of n-grams. Specifically, given a masked n-gram $w = \{x_1, \dots, x_n\}$, $x \in \mathcal{V}_F$, we maximize $p(w) = \prod_{i=1}^{n} p(x_i \mid c)$ for n-gram learning, where models learn to recover $w$ in a huge and sparse prediction space $\mathcal{F} \in \mathbb{R}^{|\mathcal{V}_F|^n}$. Note that $\mathcal{V}_F$ is the fine-grained vocabulary [1] and $c$ is the context.

We propose ERNIE-Gram, an explicitly n-gram masked language modeling method in which n-grams are masked with single [MASK] symbols and predicted directly using explicit n-gram identities rather than sequences of tokens, as depicted in Figure 1(b). The models learn to predict the n-gram $w$ in a small and dense prediction space $\mathcal{N} \in \mathbb{R}^{|\mathcal{V}_N|}$, where $\mathcal{V}_N$ indicates a prior n-gram lexicon [2] and normally $|\mathcal{V}_N| \ll |\mathcal{V}_F|^n$.

Figure 1: Illustrations of different MLM objectives: (a) Contiguously MLM, (b) Explicitly N-gram MLM, and (c) Comprehensive N-gram MLM, where $x_i$ and $y_i$ represent the identities of fine-grained tokens and explicit n-grams respectively. Note that the weights of the fine-grained classifier ($W_F \in \mathbb{R}^{h \times |\mathcal{V}_F|}$) and the N-gram classifier ($W_N \in \mathbb{R}^{h \times |\langle \mathcal{V}_F, \mathcal{V}_N \rangle|}$) are not used in the fine-tuning stage, where $h$ is the hidden size and $L$ is the number of layers.

[1] $\mathcal{V}_F$ contains 30K BPE codes in BERT (Devlin et al., 2019) and 50K subword units in RoBERTa (Liu et al., 2019).
[2] $\mathcal{V}_N$ contains 300K n-grams, where $n \in [2, 4)$ in this paper; n-grams are extracted at word level before tokenization.
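To make the gap between the two prediction spaces concrete, the toy calculation below (not from the paper; the vocabulary sizes, n, and probabilities are illustrative assumptions) contrasts the factorized token-level prediction of contiguously masked MLM with the single-softmax n-gram prediction of explicitly n-gram MLM.

```python
import math

# Illustrative sizes (assumptions, not the paper's exact settings):
# a 30K fine-grained vocabulary V_F and a 300K n-gram lexicon V_N.
V_F, V_N, n = 30_000, 300_000, 3

# Contiguous MLM: the masked n-gram w = (x1, ..., xn) is recovered token by
# token, so p(w) factorizes into n independent softmaxes over V_F and the
# effective prediction space has |V_F|**n outcomes.
token_probs = [0.20, 0.15, 0.10]          # toy p(x_i | c) values
p_w_contiguous = math.prod(token_probs)

# Explicitly n-gram MLM: w is a single entry of the n-gram lexicon V_N, so it
# is predicted with one softmax over |V_N| outcomes.
p_w_explicit = 0.25                       # toy p(w | c) from the n-gram classifier

print(f"contiguous MLM space : |V_F|^n = {V_F**n:.3e} outcomes, p(w) = {p_w_contiguous:.4f}")
print(f"explicit n-gram space: |V_N|   = {V_N:.3e} outcomes, p(w) = {p_w_explicit:.4f}")
```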
To learn the semantics of n-grams more adequately, we adopt a comprehensive n-gram prediction mechanism, simultaneously predicting masked n-grams in coarse-grained (explicit n-gram identities) and fine-grained (contained token identities) manners with well-designed attention mask matrices, as shown in Figure 1(c).

In addition, to model the semantic relationships between n-grams directly, we introduce an enhanced n-gram relation modeling mechanism, masking n-grams with plausible n-gram identities sampled from a generator model, and then recovering them to the original n-grams using the pairwise relation between the plausible and original n-grams. Inspired by ELECTRA (Clark et al., 2020), we incorporate the replaced token detection objective to distinguish original n-grams from plausible ones, which enhances the interactions between explicit n-grams and fine-grained contextual tokens.
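As a rough illustration of the comprehensive prediction targets described above, the sketch below (toy vocabularies; the token ids, n-gram ids, and helper names are assumptions, not ERNIE-Gram's preprocessing code) shows how one masked n-gram yields both a coarse-grained target over V_N and fine-grained targets over V_F.

```python
# A minimal sketch (toy vocabularies, not ERNIE-Gram's real preprocessing) of how
# one masked n-gram yields both coarse-grained and fine-grained prediction targets.
fine_vocab = {"new": 11, "york": 12, "times": 13}        # V_F: token -> id (assumed)
ngram_vocab = {("new", "york", "times"): 7}              # V_N: n-gram -> id (assumed)

def comprehensive_targets(ngram):
    """Return (coarse_target, fine_targets) for a masked n-gram.

    Coarse-grained: a single id from the n-gram lexicon V_N, predicted from one
    [MASK] slot. Fine-grained: one token id from V_F per contained token, each
    predicted from its own [MASK] slot that reuses the same positional ids.
    """
    coarse_target = ngram_vocab[tuple(ngram)]
    fine_targets = [fine_vocab[tok] for tok in ngram]
    return coarse_target, fine_targets

print(comprehensive_targets(["new", "york", "times"]))   # (7, [11, 12, 13])
```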
In this paper, we pre-train ERNIE-Gram on both base-scale and large-scale text corpora (16GB and 160GB respectively) under comparable pre-training settings. Then we fine-tune ERNIE-Gram on 13 English NLU tasks and 6 Chinese NLU tasks. Experimental results show that ERNIE-Gram consistently outperforms previous well-performing pre-training models on various benchmarks by a large margin.

2 Related Work

2.1 Self-Supervised Pre-Training for NLU

Self-supervised pre-training has been used to learn contextualized sentence representations through various training objectives. GPT (Radford et al., 2018) employs unidirectional language modeling (LM) to exploit large-scale corpora. BERT (Devlin et al., 2019) proposes masked language modeling (MLM) to learn bidirectional representations efficiently, which is a representative objective for pre-training and has numerous extensions such as RoBERTa (Liu et al., 2019), UNILM (Dong et al., 2019) and ALBERT (Lan et al., 2020). XLNet (Yang et al., 2019) adopts permutation language modeling (PLM) to model the dependencies among predicted tokens. ELECTRA (Clark et al., 2020) introduces the replaced token detection (RTD) objective to learn from all tokens for more compute-efficient pre-training.

2.2 Incorporating Coarse-grained Linguistic Information for Pre-Training

Coarse-grained linguistic information is indispensable for adequate representation learning. Many studies implicitly integrate coarse-grained information by extending BERT's MLM to masking and predicting contiguous sequences of tokens. For example, ERNIE (Sun et al., 2019b) masks named entities and phrases to enhance contextual representations, BERT-wwm (Cui et al., 2019) masks whole Chinese words to achieve better Chinese representations, and SpanBERT (Joshi et al., 2020) masks contiguous spans to improve the performance on span selection tasks.

A few studies attempt to inject coarse-grained n-gram representations into fine-grained contextualized representations explicitly, such as ZEN (Diao et al., 2020) and AMBERT (Zhang and Li, 2020), in which additional transformer encoders and computations for explicit n-gram representations are incorporated into both pre-training and fine-tuning. Li et al. (2019) demonstrate that explicit n-gram representations are not sufficiently reliable for NLP tasks because of n-gram data sparsity and the ubiquity of out-of-vocabulary n-grams. Differently, we only incorporate n-gram information by leveraging an auxiliary n-gram classifier and embedding weights in pre-training, which are completely removed during fine-tuning, so our method maintains the same parameters and computations as BERT.

3 Proposed Method

In this section, we present the detailed implementation of ERNIE-Gram, including n-gram lexicon $\mathcal{V}_N$ extraction in Section 3.5, the explicitly n-gram MLM pre-training objective in Section 3.2, and the comprehensive n-gram prediction and relation modeling mechanisms in Sections 3.3 and 3.4.

3.1 Background

To inject n-gram information into pre-training, many works (Sun et al., 2019b; Cui et al., 2019; Joshi et al., 2020) extend BERT's masked language modeling (MLM) from masking individual tokens to contiguous sequences of n tokens.

Figure 2: (a) Detailed structure of the Comprehensive N-gram MLM; (b) the self-attention mask that specifies which query positions are allowed to attend to which key positions.

Contiguously MLM. Given an input sequence $x = \{x_1, \dots, x_{|x|}\}$, $x \in \mathcal{V}_F$, and n-gram starting boundaries $b = \{b_1, \dots, b_{|b|}\}$, let $z = \{z_1, \dots, z_{|b|-1}\}$ be the sequence of n-grams, where $z_i = x_{[b_i:b_{i+1})}$. MLM samples 15% of the starting boundaries from $b$ to mask the corresponding n-grams (a minimal sketch of this sampling step follows below).
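The sketch below is a simplified illustration of that boundary-sampling step, under stated assumptions: the 15% rate comes from the text above, while the tokenization, the [MASK] symbol handling, and the helper names are mine.

```python
import random

MASK = "[MASK]"

def contiguously_mask(tokens, boundaries, mask_rate=0.15, seed=0):
    """Mask contiguous n-grams, as in the Contiguously MLM baseline described above.

    `boundaries` holds n-gram starting indices into `tokens` (plus len(tokens) as
    the final sentinel), so n-gram i spans tokens[boundaries[i]:boundaries[i+1]].
    A fraction `mask_rate` of the n-grams is selected, every token inside a
    selected n-gram is replaced by [MASK], and the original tokens are kept as
    prediction targets.
    """
    rng = random.Random(seed)
    num_ngrams = len(boundaries) - 1
    num_to_mask = max(1, int(round(mask_rate * num_ngrams)))
    picked = rng.sample(range(num_ngrams), num_to_mask)

    masked, targets = list(tokens), {}
    for i in picked:
        start, end = boundaries[i], boundaries[i + 1]
        for pos in range(start, end):
            targets[pos] = tokens[pos]
            masked[pos] = MASK
    return masked, targets

tokens = ["the", "new", "york", "times", "reported", "the", "story"]
boundaries = [0, 1, 4, 5, 6, 7]   # n-grams: [the] [new york times] [reported] [the] [story]
print(contiguously_mask(tokens, boundaries))
```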
Recommended publications
  • Malware Classification with BERT
    San Jose State University, SJSU ScholarWorks, Master's Projects, Master's Theses and Graduate Research, Spring 5-25-2021. Malware Classification with Word Embeddings Generated by BERT and Word2Vec, by Joel Lawrence Alvares. Presented to the Department of Computer Science, San José State University, in partial fulfillment of the requirements for the degree, May 2021. Designated project committee: Prof. Fabio Di Troia, Prof. William Andreopoulos, and Prof. Katerina Potika (Department of Computer Science). Follow this and additional works at: https://scholarworks.sjsu.edu/etd_projects. Part of the Artificial Intelligence and Robotics Commons, and the Information Security Commons.
    ABSTRACT: Malware classification is used to distinguish unique types of malware from each other. This project aims to carry out malware classification using word embeddings, which are used in Natural Language Processing (NLP) to identify and evaluate the relationship between words of a sentence. Word embeddings are generated by BERT and Word2Vec for malware samples to carry out multi-class classification. BERT is a transformer-based pre-trained natural language processing (NLP) model which can be used for a wide range of tasks such as question answering, paraphrase generation and next sentence prediction. However, the attention mechanism of a pre-trained BERT model can also be used in malware classification by capturing information about the relation between each opcode and every other opcode belonging to a malware family. (A rough sketch of this embedding-plus-classifier setup follows below.)
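    The following is a generic, hedged sketch of the kind of pipeline the abstract describes: opcode sequences are embedded with a pre-trained BERT model and a simple multi-class classifier is trained on the pooled vectors. The model name, opcode strings, and labels are placeholders, not the project's actual data or code.

```python
# A generic sketch (not the project's code) of multi-class malware classification on
# top of BERT embeddings: opcode sequences are treated as text, mean-pooled BERT
# hidden states serve as fixed features, and a simple classifier is trained on them.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

samples = ["mov push call add mov ret", "xor jmp cmp jne push pop"]   # opcode traces (toy)
labels = [0, 1]                                                       # malware family ids (toy)

def embed(text):
    # Mean-pool the final hidden states into one fixed-length vector per sample.
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    return hidden.mean(dim=1).squeeze(0).numpy()

features = [embed(s) for s in samples]
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict([embed("mov push call ret")]))
```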
  • BROS: A Pre-Trained Language Model for Understanding Texts in Document
    Under review as a conference paper at ICLR 2021. BROS: A Pre-trained Language Model for Understanding Texts in Document. Anonymous authors, paper under double-blind review.
    ABSTRACT: Understanding documents from their visual snapshots is an emerging and challenging problem that requires both advanced computer vision and NLP methods. Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts. To compensate for the difficulties, this paper introduces a pre-trained language model, BERT Relying On Spatiality (BROS), that represents and understands the semantics of spatially distributed texts. Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semi-structured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents. Also, to generate structured outputs in various document understanding tasks, BROS utilizes a powerful graph-based decoder that can capture the relation between text segments. BROS achieves state-of-the-art results on four benchmark tasks: FUNSD, SROIE*, CORD, and SciTSR. Our experimental settings and implementation codes will be publicly available.
    1 INTRODUCTION: Document intelligence (DI), which understands industrial documents from their visual appearance, is a critical application of AI in business. One of the important challenges of DI is a key information extraction task (KIE) (Huang et al., 2019; Jaume et al., 2019; Park et al., 2019) that extracts structured information from documents such as financial reports, invoices, business emails, insurance quotes, and many others.
  • Information Extraction Based on Named Entity for Tourism Corpus
    Information Extraction based on Named Entity for Tourism Corpus. Chantana Chantrapornchai and Aphisit Tunsakul, Dept. of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand. [email protected], [email protected].
    Abstract: Tourism information is scattered around nowadays. To search for the information, it is usually time consuming to browse through the results from a search engine, and to select and view the details of each accommodation. In this paper, we present a methodology to extract particular information from the full text returned by the search engine to facilitate the users. Then, the users can specifically look at the desired relevant information. The approach can be used for the same task in other domains. The main steps are 1) building training data and 2) building the recognition model. First, the tourism data is gathered and the vocabularies are built. The raw corpus is used to train for creating vocabulary embedding. Also, it is used for creating annotated data.
    The ontology is extracted based on HTML web structure, and the corpus is based on WordNet. For these approaches, the time-consuming process is the annotation, which is to annotate the type of named entity. In this paper, we target the tourism domain, and aim to extract particular information helping ontology data acquisition. We present the framework for the given named entity extraction. Starting from the web information scraping process, the data are selected based on the HTML tag for corpus building. The data is used for model creation for automatic …
  • NLP with BERT: Sentiment Analysis Using SAS® Deep Learning and DLPy (Doug Cairns and Xiangxiang Meng, SAS Institute Inc.)
    Paper SAS4429-2020 NLP with BERT: Sentiment Analysis Using SAS® Deep Learning and DLPy Doug Cairns and Xiangxiang Meng, SAS Institute Inc. ABSTRACT A revolution is taking place in natural language processing (NLP) as a result of two ideas. The first idea is that pretraining a deep neural network as a language model is a good starting point for a range of NLP tasks. These networks can be augmented (layers can be added or dropped) and then fine-tuned with transfer learning for specific NLP tasks. The second idea involves a paradigm shift away from traditional recurrent neural networks (RNNs) and toward deep neural networks based on Transformer building blocks. One architecture that embodies these ideas is Bidirectional Encoder Representations from Transformers (BERT). BERT and its variants have been at or near the top of the leaderboard for many traditional NLP tasks, such as the general language understanding evaluation (GLUE) benchmarks. This paper provides an overview of BERT and shows how you can create your own BERT model by using SAS® Deep Learning and the SAS DLPy Python package. It illustrates the effectiveness of BERT by performing sentiment analysis on unstructured product reviews submitted to Amazon. INTRODUCTION Providing a computer-based analog for the conceptual and syntactic processing that occurs in the human brain for spoken or written communication has proven extremely challenging. As a simple example, consider the abstract for this (or any) technical paper. If well written, it should be a concise summary of what you will learn from reading the paper. As a reader, you expect to see some or all of the following: • Technical context and/or problem • Key contribution(s) • Salient result(s) If you were tasked to create a computer-based tool for summarizing papers, how would you translate your expectations as a reader into an implementable algorithm? This is the type of problem that the field of natural language processing (NLP) addresses.
  • Unified Language Model Pre-Training for Natural Language Understanding and Generation
    Unified Language Model Pre-training for Natural Language Understanding and Generation. Li Dong*, Nan Yang*, Wenhui Wang*, Furu Wei*†, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. Microsoft Research. {lidong1,nanya,wenwan,fuwei}@microsoft.com, {xiaodl,yuwan,jfgao,mingzhou,hon}@microsoft.com.
    Abstract: This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
    1 Introduction: Language model (LM) pre-training has substantially advanced the state of the art across a variety of natural language processing tasks [8, 29, 19, 31, 9, 1]. (A toy sketch of the self-attention masks that unify these objectives follows below.)
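    The sketch below is my toy illustration (not UniLM's code) of the self-attention masks the abstract refers to: a boolean matrix whose entry (i, j) says whether position i may condition on position j, switching the same shared Transformer between bidirectional, unidirectional, and sequence-to-sequence behaviour.

```python
# A toy sketch (my illustration, not UniLM's code) of the self-attention masks that
# let one shared Transformer act as a bidirectional, unidirectional, or
# sequence-to-sequence LM: entry (i, j) is True when position i may attend to j.
import numpy as np

def bidirectional_mask(n):
    return np.ones((n, n), dtype=bool)            # every token sees every token

def unidirectional_mask(n):
    return np.tril(np.ones((n, n), dtype=bool))   # token i sees positions <= i

def seq2seq_mask(src_len, tgt_len):
    n = src_len + tgt_len
    mask = np.zeros((n, n), dtype=bool)
    mask[:, :src_len] = True                      # every position sees the full source segment
    tgt = np.tril(np.ones((tgt_len, tgt_len), dtype=bool))
    mask[src_len:, src_len:] = tgt                # target tokens see only earlier target tokens
    return mask                                   # source positions never attend to the target

print(seq2seq_mask(3, 2).astype(int))
```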
  • Question Answering by BERT
    Running head: QUESTION AND ANSWERING USING BERT. Question and Answering Using BERT. Suman Karanjit, Computer Science Major, Minnesota State University Moorhead. Link to GitHub: https://github.com/skaranjit/BertQA.
    Table of contents: Abstract; Introduction; SQuAD; BERT Explained; What is BERT?; Architecture; Input Processing; Getting Answer; Setting Up the Environment.
  • TinyBERT: Distilling BERT for Natural Language Understanding
    TinyBERT: Distilling BERT for Natural Language Understanding. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang and Qun Liu. Key Laboratory of Information Storage System, Huazhong University of Science and Technology, Wuhan National Laboratory for Optoelectronics; Huawei Noah's Ark Lab; Huawei Technologies Co., Ltd. {jiaoxiaoqi, [email protected]}, {yinyichun, shang.lifeng, [email protected]}, {chen.xiao2, lynn.lilinlin, [email protected]}.
    Abstract: Language model pre-training, such as BERT, has significantly improved the performances of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models. By leveraging this new KD method, the plenty of knowledge encoded in a large "teacher" BERT can be effectively transferred to a small "student" TinyBERT.
    … Pre-trained language models (PLMs), such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), T5 (Raffel et al., 2019) and ELECTRA (Clark et al., 2020), have achieved great success in many NLP tasks (e.g., the GLUE benchmark (Wang et al., 2018) and the challenging multi-hop reasoning task (Ding et al., 2019)). However, PLMs usually have a large number of parameters and take long inference time, which makes them difficult to deploy on edge devices such as mobile phones. Recent studies (Kovaleva et al., 2019; Michel et al., 2019; Voita et al., 2019) demonstrate that there is redundancy in PLMs. (A simplified sketch of Transformer-layer distillation follows below.)
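    The abstract's Transformer distillation idea can be pictured with the simplified sketch below: the student is trained to match the teacher's attention maps and (linearly projected) hidden states layer by layer. The layer mapping, dimensions, and equal loss weights are my assumptions, not TinyBERT's exact recipe.

```python
# A simplified sketch (assumptions: uniform layer mapping, one linear projection,
# equal loss weights) of Transformer-layer knowledge distillation in the spirit of
# TinyBERT: the student mimics the teacher's attention maps and hidden states.
import torch
import torch.nn.functional as F

def layer_distill_loss(student_attn, student_hidden, teacher_attn, teacher_hidden, proj):
    """student_attn/teacher_attn: (batch, heads, seq, seq) attention maps.
    student_hidden/teacher_hidden: (batch, seq, d_s) and (batch, seq, d_t) states.
    proj: nn.Linear(d_s, d_t) mapping the student's hidden size onto the teacher's."""
    attn_loss = F.mse_loss(student_attn, teacher_attn)
    hidden_loss = F.mse_loss(proj(student_hidden), teacher_hidden)
    return attn_loss + hidden_loss

# Toy shapes: a small student (hidden size 312) distilled from a BERT-base-sized
# teacher (hidden size 768), one mapped layer pair shown.
proj = torch.nn.Linear(312, 768)
s_attn, t_attn = torch.rand(2, 12, 16, 16), torch.rand(2, 12, 16, 16)
s_hid, t_hid = torch.rand(2, 16, 312), torch.rand(2, 16, 768)
print(layer_distill_loss(s_attn, s_hid, t_attn, t_hid, proj))
```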
  • Learning to Remove: Towards Isotropic Pre-trained BERT Embedding
    Learning to Remove: Towards Isotropic Pre-trained BERT Embedding. Yuxin Liang, Rui Cao, Jie Zheng, Jie Ren, and Ling Gao. Northwest University, Xi'an, China; Shaanxi Normal University, Xi'an, China. {liangyuxin, [email protected]}, {jzheng, [email protected]}, [email protected]. arXiv:2104.05274v2 [cs.CL], 27 Aug 2021.
    Abstract: Research in word representation shows that isotropic embeddings can significantly improve performance on downstream tasks. However, we measure and analyze the geometry of the pre-trained BERT embedding and find that it is far from isotropic. We find that the word vectors are not centered around the origin, and the average cosine similarity between two random words is much higher than zero, which indicates that the word vectors are distributed in a narrow cone and deteriorate the representation capacity of the word embedding. We propose a simple and yet effective method to fix this problem: remove several dominant directions of the BERT embedding with a set of learnable weights. We train the weights on word similarity tasks and show that the processed embedding is more isotropic. Our method is evaluated on three standardized tasks: word similarity, word analogy, and semantic textual similarity. In all tasks, the word embedding processed by our method consistently outperforms the original embedding (with average improvements of 13% on word analogy and 16% on semantic textual similarity) and two baseline methods. Our method is also proven to be more robust to changes of hyperparameters. Keywords: natural language processing, pre-trained embedding, word representation, anisotropy.
    1 Introduction: With the rise of Transformers [12], its derivative model BERT [1] stormed the NLP field due to its excellent performance in various tasks. (A rough sketch of the direction-removal idea follows below.)
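    The core idea, removing dominant directions with learnable per-direction weights, can be sketched as follows. This is my illustration under stated assumptions (random embeddings, eight directions, PCA via torch.pca_lowrank), not the paper's implementation.

```python
# A rough sketch (my illustration, not the paper's code) of the core idea: center the
# embedding matrix, find its dominant principal directions, and subtract each word's
# projection onto those directions scaled by a learnable weight per direction.
import torch

def remove_dominant_directions(emb, weights, k):
    """emb: (vocab, dim) embedding matrix; weights: learnable tensor of shape (k,)."""
    centered = emb - emb.mean(dim=0, keepdim=True)
    _, _, v = torch.pca_lowrank(centered, q=k)       # top-k principal directions
    directions = v[:, :k]                            # (dim, k)
    proj = centered @ directions                     # (vocab, k) projection coefficients
    return centered - (weights * proj) @ directions.T

emb = torch.randn(1000, 768)
weights = torch.nn.Parameter(torch.ones(8))          # trained on word-similarity tasks in the paper
processed = remove_dominant_directions(emb, weights, k=8)
print(processed.shape)
```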
  • Efficient Training of BERT by Progressively Stacking
    Efficient Training of BERT by Progressively Stacking. Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, Tie-Yan Liu.
    Abstract: Unsupervised pre-training is commonly used in natural language processing: a deep neural network trained with proper unsupervised prediction tasks is shown to be effective in many downstream tasks. Because it is easy to create a large monolingual dataset by collecting data from the Web, we can train high-capacity models. Therefore, training efficiency becomes a critical issue even when using high-performance hardware. In this paper, we explore an efficient training method for the state-of-the-art bidirectional Transformer (BERT) model. By visualizing the self-attention distributions of different layers at different positions in a well-trained BERT model, we find that in most layers, the self-attention distribution …
    … especially in domains that require particular expertise. In natural language processing, using unsupervised pre-trained models is one of the most effective ways to help train tasks in which labeled information is not rich enough. For example, word embeddings learned from a Wikipedia corpus (Mikolov et al., 2013; Pennington et al., 2014) can substantially improve the performance of sentence classification and textual similarity systems (Socher et al., 2011; Tai et al., 2015; Kalchbrenner et al., 2014). Recently, pre-trained contextual representation approaches (Devlin et al., 2018; Radford et al., 2018; Peters et al., 2018) have been developed and shown to be more effective than conventional word embedding. Different from word embedding that only extracts local semantic information of individual words, pre-trained contextual representations further learn sentence-level information by sentence-level encoders. (A toy sketch of progressive stacking follows below.)
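    The training trick named in the title can be pictured with the toy sketch below: a shallow encoder is trained first, its layer stack is duplicated to warm-start a model twice as deep, and training continues. Layer counts, model sizes, and the duplication order are my assumptions, not the paper's exact schedule.

```python
# A toy sketch (my illustration under stated assumptions, not the paper's code) of
# progressive stacking: train a shallow BERT-style encoder, then duplicate its layer
# stack to warm-start a model twice as deep and continue training.
import copy
import torch.nn as nn

def make_encoder(num_layers, d_model=256, nhead=4):
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)

def stack_double(encoder):
    """Return a new encoder whose 2L layers are copies of the trained L layers,
    repeated in the same order (layers 1..L, then 1..L again)."""
    doubled = copy.deepcopy(encoder)
    doubled.layers = nn.ModuleList(
        [copy.deepcopy(l) for l in encoder.layers] +
        [copy.deepcopy(l) for l in encoder.layers]
    )
    doubled.num_layers = len(doubled.layers)
    return doubled

shallow = make_encoder(3)        # pretend this has already been trained for a while
deep = stack_double(shallow)     # 6 layers, warm-started; training then continues
print(shallow.num_layers, deep.num_layers)
```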
  • Word Sense Disambiguation with Transformer Models
    Word Sense Disambiguation with Transformer Models. Pierre-Yves Vandenbussche (Elsevier Labs, Radarweg 29, Amsterdam 1043 NX, Netherlands, [email protected]), Tony Scerri (Elsevier Labs, 1 Appold Street, London EC2A 2UT, UK, [email protected]), Ron Daniel Jr. (Elsevier Labs, 230 Park Avenue, New York, NY 10169, USA, [email protected]).
    Abstract: In this paper, we tackle the task of Word Sense Disambiguation (WSD). We present our system submitted to the Word-in-Context Target Sense Verification challenge, part of the SemDeep workshop at IJCAI 2020 (Breit et al., 2020). That challenge asks participants to predict if a specific mention of a word in a text matches a pre-defined sense. Our approach uses pre-trained transformer models such as BERT that are fine-tuned on the task using different architecture strategies. Our model achieves the best accuracy and precision on Subtask 1 – make use of definitions for deciding whether the target word in context corresponds to the given sense or not.
    … given hypernym. In Subtask 3 the system can use both the sentence and the hypernym in making the decision. The dataset provided with the WiC-TSV challenge has relatively few sense-annotated examples (< 4,000) and a single target sense per word. This makes pre-trained Transformer models well suited for the task, since the small amount of data would limit the learning ability of a typical supervised model trained from scratch. Thanks to the recent advances made in language models such as BERT (Devlin et al., 2018) or XLNet (Yang et al., 2019) trained on large corpora, neural language models have established the state …
  • Improving the Prosody of RNN-Based English Text-To-Speech Synthesis by Incorporating a BERT Model
    INTERSPEECH 2020, October 25–29, 2020, Shanghai, China. Improving the Prosody of RNN-based English Text-To-Speech Synthesis by Incorporating a BERT Model. Tom Kenter, Manish Sharma, Rob Clark, Google UK. {tomkenter, skmanish, [email protected]}.
    Abstract: The prosody of currently available speech synthesis systems can be unnatural due to the systems only having access to the text, possibly enriched by linguistic information such as part-of-speech tags and parse trees. We show that incorporating a BERT model in an RNN-based speech synthesis model, where the BERT model is pretrained on large amounts of unlabeled data and fine-tuned to the speech domain, improves prosody. Additionally, we propose a way of handling arbitrarily long sequences with BERT. Our findings indicate that small BERT models work better than big ones, and that fine-tuning the BERT part of the model is pivotal for getting good results.
    … the pretrained model, whereas it is typically hard for standard parsing methods to resolve this ambiguity. Sentences that are challenging linguistically (e.g., because they are long or otherwise difficult to parse) can benefit from the new approach based on BERT representations rather than explicit features, as any errors in the parse information no longer impair performance. We show that human raters favour CHiVE-BERT over a current state-of-the-art model on three datasets designed to highlight different prosodic phenomena. Additionally, we propose a way of handling long sequences with BERT (which has fixed-length inputs and outputs), so it can deal with arbitrarily long input.
  • Breeding Fillmore's Chickens and Hatching the Eggs: Recombining Frames and Roles in Frame-Semantic Parsing
    Breeding Fillmore's Chickens and Hatching the Eggs: Recombining Frames and Roles in Frame-Semantic Parsing. Gosse Minnema and Malvina Nissim, Center for Language and Cognition, University of Groningen, The Netherlands. {g.f.minnema, [email protected]}.
    Abstract: Frame-semantic parsers traditionally predict predicates, frames, and semantic roles in a fixed order. This paper explores the 'chicken-or-egg' problem of interdependencies between these components theoretically and practically. We introduce a flexible BERT-based sequence labeling architecture that allows for predicting frames and roles independently from each other or combining them in several ways. Our results show that our setups can approximate more complex traditional models' performance, while allowing for a clearer view of the interdependencies between the pipeline's components, and of how frame and role prediction models make different use of BERT's layers.
    … (given a predicate-frame pair, find and label its arguments). Some recent systems, such as the LSTM-based Open-SESAME (Swayamdipta et al., 2017) or the classical-statistical SEMAFOR (Das et al., 2014), implement the full pipeline, but with a strong focus specifically on argID. Other models implement some subset of the components (Tan, 2007; Hartmann et al., 2017; Yang and Mitchell, 2017; Peng et al., 2018), while still implicitly adopting the pipeline's philosophy. However, little focus has been given to frame-semantic parsing as an end-to-end task, which entails not only implementing the separate components of the pipeline, but also looking at their interdependencies. We highlight such interdependencies from a theoretical perspective, and investigate them empirically.