BioBERT: a pre-trained biomedical language representation model for biomedical text mining

Bioinformatics, 2019, 1–7
doi: 10.1093/bioinformatics/btz682
Advance Access Publication Date: 10 September 2019
Original Paper, Data and text mining

Jinhyuk Lee 1,†, Wonjin Yoon 1,†, Sungdong Kim 2, Donghyeon Kim 1, Sunkyu Kim 1, Chan Ho So 3 and Jaewoo Kang 1,3,*

1 Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea
2 Clova AI Research, Naver Corp, Seongnam 13561, Korea
3 Interdisciplinary Graduate Program in Bioinformatics, Korea University, Seoul 02841, Korea

*To whom correspondence should be addressed.
†The authors wish it to be known that the first two authors contributed equally.
Associate Editor: Jonathan Wren
Received on May 16, 2019; revised on July 29, 2019; editorial decision on August 25, 2019; accepted on September 5, 2019

Abstract

Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.

Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.

Availability and implementation: We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.

Contact: [email protected]

1 Introduction

The volume of biomedical literature continues to rapidly increase. On average, more than 3000 new articles are published every day in peer-reviewed journals, excluding pre-prints and technical reports such as clinical trial reports in various archives. PubMed alone has a total of 29M articles as of January 2019. Reports containing valuable information about new discoveries and new insights are continuously added to the already overwhelming amount of literature. Consequently, there is increasingly more demand for accurate biomedical text mining tools for extracting information from the literature.

Recent progress of biomedical text mining models was made possible by the advancements of deep learning techniques used in natural language processing (NLP). For instance, Long Short-Term Memory (LSTM) and Conditional Random Field (CRF) have greatly improved performance in biomedical named entity recognition (NER) over the last few years (Giorgi and Bader, 2018; Habibi et al., 2017; Wang et al., 2018; Yoon et al., 2019). Other deep learning based models have made improvements in biomedical text mining tasks such as relation extraction (RE) (Bhasuran and Natarajan, 2018; Lim and Kang, 2018) and question answering (QA) (Wiese et al., 2017).
However, directly applying state-of-the-art NLP methodologies to biomedical text mining has limitations. First, as recent word representation models such as Word2Vec (Mikolov et al., 2013), ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) are trained and tested mainly on datasets containing general domain texts (e.g. Wikipedia), it is difficult to estimate their performance on datasets containing biomedical texts. Also, the word distributions of general and biomedical corpora are quite different, which can often be a problem for biomedical text mining models. As a result, recent models in biomedical text mining rely largely on adapted versions of word representations (Habibi et al., 2017; Pyysalo et al., 2013).

In this study, we hypothesize that current state-of-the-art word representation models such as BERT need to be trained on biomedical corpora to be effective in biomedical text mining tasks. Previously, Word2Vec, which is one of the most widely known context independent word representation models, was trained on biomedical corpora which contain terms and expressions that are usually not included in a general domain corpus (Pyysalo et al., 2013). While ELMo and BERT have proven the effectiveness of contextualized word representations, they cannot obtain high performance on biomedical corpora because they are pre-trained only on general domain corpora. As BERT achieves very strong results on various NLP tasks while using almost the same structure across the tasks, adapting BERT for the biomedical domain could potentially benefit numerous areas of biomedical NLP research.

2 Approach

In this article, we introduce BioBERT, which is a pre-trained language representation model for the biomedical domain. The overall process of pre-training and fine-tuning BioBERT is illustrated in Figure 1. First, we initialize BioBERT with weights from BERT, which was pre-trained on general domain corpora (English Wikipedia and BooksCorpus). Then, BioBERT is pre-trained on biomedical domain corpora (PubMed abstracts and PMC full-text articles). To show the effectiveness of our approach in biomedical text mining, BioBERT is fine-tuned and evaluated on three popular biomedical text mining tasks (NER, RE and QA). We test various pre-training strategies with different combinations and sizes of general domain corpora and biomedical corpora, and analyze the effect of each corpus on pre-training. We also provide in-depth analyses of BERT and BioBERT to show the necessity of our pre-training strategies.
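To make the fine-tuning step concrete, the short Python sketch below shows how a BioBERT-style checkpoint could be loaded and fine-tuned for token-level NER with the Hugging Face transformers library. This is an illustration only: the original BioBERT release is based on the TensorFlow BERT codebase, and the checkpoint identifier and three-label tag set used here are assumptions made for the example.

# Illustrative sketch only: fine-tuning a BioBERT-style checkpoint for NER
# with the Hugging Face transformers library. The checkpoint id and the
# B/I/O tag set are assumptions for this example; the original release
# provides TensorFlow weights and its own fine-tuning scripts.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = "dmis-lab/biobert-base-cased-v1.1"  # assumed checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=3)

# A single toy training example: every token labelled "O" (class index 0).
batch = tokenizer("Aspirin inhibits cyclooxygenase activity.", return_tensors="pt")
labels = torch.zeros_like(batch["input_ids"])

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(**batch, labels=labels)   # cross-entropy loss over token tags
outputs.loss.backward()                   # one gradient step of fine-tuning
optimizer.step()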
The contributions of our paper are as follows:

• BioBERT is the first domain-specific BERT based model pre-trained on biomedical corpora.
• Compared with most previous biomedical text mining models that are mainly focused on a single task such as NER or QA, our model BioBERT achieves state-of-the-art performance on various biomedical text mining tasks, while requiring only minimal architectural modifications.
• We make our pre-processed datasets, the pre-trained weights of BioBERT and the source code for fine-tuning BioBERT publicly available.

3 Materials and methods

BioBERT basically has the same structure as BERT. We briefly discuss the recently proposed BERT, and then we describe in detail the pre-training and fine-tuning process of BioBERT.

3.1 BERT: bidirectional encoder representations from transformers

Learning word representations from a large amount of unannotated text is a long-established method. While previous models (e.g. Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014)) focused on learning context independent word representations, recent works have focused on learning context dependent word representations. For instance, ELMo (Peters et al., 2018) uses a bidirectional language model, while CoVe (McCann et al., 2017) uses machine translation to embed context information into word representations.

BERT (Devlin et al., 2019) is a contextualized word representation model that is based on a masked language model and pre-trained using bidirectional transformers (Vaswani et al., 2017). Due to the nature of language modeling, where future words cannot be seen, previous language models were limited to a combination of two unidirectional language models (i.e. left-to-right and right-to-left). BERT uses a masked language model that predicts randomly masked words in a sequence, and hence can be used for learning bidirectional representations. Also, it obtains state-of-the-art performance on most NLP tasks, while requiring minimal task-specific architectural modification. According to the authors of BERT, incorporating information from bidirectional representations, rather than unidirectional representations, is crucial for representing words in natural language. We hypothesize that such bidirectional representations are also critical in biomedical text mining, as complex relationships between biomedical terms often exist in a biomedical corpus.
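As a small worked illustration of the masked language model objective described above, the following Python fragment corrupts an input sentence in the way BERT's pre-training examples are built. It is a simplified sketch: the full BERT recipe selects roughly 15% of tokens and, of those, replaces most with [MASK] while substituting a random word for some and leaving others unchanged.

# Simplified sketch of masked language model input corruption. The real BERT
# scheme also replaces some selected tokens with random words or keeps them
# unchanged; here every selected token simply becomes [MASK].
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Randomly hide tokens; the model is trained to recover them."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)   # position the model must predict
            targets.append(tok)         # ground-truth word at this position
        else:
            masked.append(tok)
            targets.append(None)        # no loss is computed here
    return masked, targets

sentence = "the adverse effect of aspirin on gastric mucosa".split()
masked, targets = mask_tokens(sentence)
print(masked)   # e.g. ['the', 'adverse', '[MASK]', 'of', 'aspirin', ...]
print(targets)  # e.g. [None, None, 'effect', None, None, ...]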
