
Exploring Transformer Text Generation for Medical Dataset Augmentation

Ali Amin-Nejad1, Julia Ive1, Sumithra Velupillai2
1 Imperial College London, 2 King's College London
{ali.amin-nejad18, j.ive}@imperial.ac.uk, [email protected]

Abstract
Natural Language Processing (NLP) can help unlock the vast troves of unstructured data in clinical text and thus improve healthcare research. However, a major barrier to developments in this field is data access: patient confidentiality prohibits the sharing of this data, resulting in small, fragmented and sequestered openly available datasets. Since NLP model development requires large quantities of data, we aim to help side-step this roadblock by exploring the use of Natural Language Generation in augmenting datasets such that they can be used for NLP model development on downstream, clinically relevant tasks. We propose a methodology guiding the generation with structured patient information in a sequence-to-sequence manner. We experiment with state-of-the-art Transformer models and demonstrate that our augmented dataset is capable of beating our baselines on a downstream classification task. Finally, we also create a user interface and release the scripts to train generation models to stimulate further research in this area.

Keywords: Natural Language Generation, Language Modelling, Ethics and Legal Issues

1. Introduction
Natural Language Processing (NLP) has enormous potential to advance many aspects of healthcare by facilitating the analysis of unstructured text (Esteva et al., 2019). However, a key obstacle to the development of more powerful NLP methods in the clinical domain is a lack of accessible data. This, coupled with the fact that state-of-the-art (SOTA) neural models are well known to require very large volumes of data in order to learn general and meaningful patterns, means that progress is hindered in this area. Data access is usually restricted due to the constraints on sharing personal medical information for confidentiality reasons, be they legal or ethical in nature (Chapman et al., 2011).

In the Machine Learning community, similar problems are typically solved by using artificially generated data to augment or perhaps even replace an original dataset (Bachman, 2016), e.g. in image processing. However, similar approaches to data augmentation are not easily applied to NLP. With language being inherently more complex than other domains, it is difficult to programmatically modify a sentence or document without altering its meaning and coherency. Natural Language Generation (NLG) can provide a more sophisticated approach to solving this problem and has already done so, e.g. in machine translation with the technique known as back-translation (Sennrich et al., 2016). With newer, more capable NLG models utilising the Transformer architecture (Vaswani et al., 2017), we posit that this general idea can now be extended beyond machine translation to longer passages of text.

Indeed, NLG is an active area of NLP research; however, there are still challenges to be addressed. The replacement or augmentation of genuine training data with artificial training data remains understudied, particularly in the medical domain. Attempting to achieve this manually, e.g. Suominen et al. (2015), is a costly and unscalable approach. Furthermore, the application of SOTA Transformer models for hierarchical generation beyond the sentence level also remains understudied. Since most research focuses on shorter, sentence-level texts, it is not clear whether these models can form sufficiently long-range dependencies to be useful as a substitute for genuine training data. We therefore believe that applying NLG approaches to medical text for augmentation purposes is a worthwhile research area, in order to ascertain its viability. In the long term, if successful, we also aim to share this synthetic data with healthcare providers and researchers to promote methodological research and advance the SOTA, helping realise the potential NLP has to offer in the medical domain.

We build on the approaches of Liu (2018) and Melamud and Shivade (2019) in generating complex, hierarchical passages of text using a Transformer-based approach (Vaswani et al., 2017). We do this in both high-resource and low-resource scenarios to ensure that we assess the utility of NLG data augmentation in a low-resource setting, when it is inherently most needed. We experiment with two Transformer architectures: the original vanilla architecture, which achieved SOTA machine translation results (Lakew et al., 2018), and the more recent GPT-2, composed of a stack of Transformer decoders, which has achieved SOTA question answering, language modelling and common-sense reasoning results (Radford et al., 2019). We use this artificial data in two clinically relevant downstream NLP tasks (unplanned readmission prediction and phenotype classification) to assess its utility both as a standalone dataset and as part of an augmented dataset alongside the original samples. Our ultimate aim is to ascertain whether SOTA Transformer models can generate new samples of text that are useful for data augmentation purposes, particularly in low-resource medical scenarios.

Our main contributions are as follows: (i) we introduce a methodology to generate medical text for data augmentation; (ii) we demonstrate that our method shows promise by achieving significant results over our baselines on the readmission prediction task. This result is obtained using a pretrained BioBERT model.

We hope that this will pave the way for healthcare professionals in the field to appropriate this technique for the benefit of healthcare, stimulate further research, and enable the creation of entirely synthetic, shareable clinical notes.

2. Related Work
Whilst NLG is an increasingly active area of NLP research, current SOTA approaches have not been extensively applied to the generation of medical text. Where they have been applied, it has often been to short excerpts of text as opposed to the longer passages usually found in Electronic Health Records (EHRs), e.g. the generation of imaging reports by Jing et al. (2018) or the generation of X-ray captions by Spinks and Moens (2018).

When it comes to the task of generating full EHRs, these EHRs often do not include the free text associated with the records (Choi et al., 2017), or the free text that is included is very short, such as the approach of Lee (2018), which generates chief complaints limited to 18 tokens or fewer. The closest published attempt to generate long passages of text in EHRs that we are aware of is that of Liu (2018), who train a generative model using the public, de-identified MIMIC-III dataset (Johnson et al., 2016) and achieve reasonably coherent results on multiple measures, but do not perform any extrinsic evaluation to assess the quality of this text on downstream tasks. Another similar work is that of Melamud and Shivade (2019), who also utilise the MIMIC-III dataset to generate long passages of text and go further to study the utility of the artificial text in a number of downstream NLP tasks. However, they do not use SOTA approaches for generation, opting for LSTMs over Transformers, and their downstream tasks are not clinically focused. Lastly, they do not study the utility of the synthetic text for augmentation purposes, only as a standalone dataset.

The closest overall approach to our own is that of Wang et al. (2019), who use the vanilla Transformer model to generate text and then evaluate it using a phenotype classification task and a temporal evaluation task. Their text generation, however, is done at the sentence level before the sentences are joined together to form a full EHR note. This is unlike the approaches of Liu (2018) and Melamud and Shivade (2019).

[…] and 7,875 neonates spanning June 2001 to October 2012. We are particularly concerned with the NOTEEVENTS table, which comprehensively provides all the textual notes written by doctors, nurses and other healthcare professionals during a patient's stay. We focus solely on the Discharge Summaries, which provide the richest content about the patient's stay in the ICU.

The MIMIC-III database contains data only for neonates and adult patients (defined as being >= 15 years of age). For the purposes of this research, we remove the neonates because we believe there would be considerable and significant differences between the care of these two patient groups, and that this would be reflected in the discharge summaries for these patients. After removing these patients, we are left with 55,404 discharge summaries for 37,400 unique patients.

3.1.1. Dataset Split
We split our full dataset of 55,404 discharge summaries into training, validation and test sets in the ratio 8:1:1. In the low-resource scenario, where we experiment with an artificially smaller dataset, we keep the same validation set as the larger dataset and instead shrink only the training and test sets.

In order to determine the size of our low-resource dataset, we took inspiration from the recently introduced WikiText-2 and WikiText-103 datasets (Merity et al., 2016). These datasets are collated from Wikipedia entries and are often used to benchmark general-domain language models. They are named to reflect the number of words in each dataset, with WikiText-2 containing ∼2m words and WikiText-103 containing ∼103m words. In accordance with this nomenclature, we name our low-resource and full-resource benchmarks MimicText-9 and MimicText-98 respectively, and henceforth refer to them as such. Breakdowns of these datasets can be seen in Table 1.
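To make the preprocessing above concrete, the following is a minimal sketch, not the code released with this paper, of how one might extract adult discharge summaries from the standard MIMIC-III v1.4 CSV exports. The file names, column names and the year-based age approximation are assumptions made for illustration.

```python
"""Sketch: filter MIMIC-III notes down to adult discharge summaries.

Assumes the standard MIMIC-III v1.4 CSV exports (NOTEEVENTS.csv,
PATIENTS.csv, ADMISSIONS.csv); these paths and the age heuristic are
illustrative assumptions, not the authors' released pipeline.
"""
import pandas as pd

notes = pd.read_csv("NOTEEVENTS.csv", low_memory=False)
patients = pd.read_csv("PATIENTS.csv", parse_dates=["DOB"])
admissions = pd.read_csv("ADMISSIONS.csv", parse_dates=["ADMITTIME"])

# Keep only discharge summaries, the richest account of an ICU stay.
notes = notes[notes["CATEGORY"] == "Discharge summary"]

# Approximate age at admission from calendar years (avoids the Timedelta
# overflow caused by the date-shifted DOBs of patients older than 89).
adm = admissions.merge(patients[["SUBJECT_ID", "DOB"]], on="SUBJECT_ID")
adm["AGE"] = adm["ADMITTIME"].dt.year - adm["DOB"].dt.year

# Remove neonates: keep notes from admissions of patients aged >= 15.
adult_hadm = set(adm.loc[adm["AGE"] >= 15, "HADM_ID"])
notes = notes[notes["HADM_ID"].isin(adult_hadm)]

print(len(notes), "discharge summaries for",
      notes["SUBJECT_ID"].nunique(), "unique patients")
```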
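The 8:1:1 split and the low-resource subsampling could likewise be sketched as below, continuing from the sketch above where notes holds the filtered discharge summaries. The fixed seed, the note-level splitting and the 10% low-resource fraction are placeholders rather than the exact protocol behind MimicText-9 and MimicText-98.

```python
"""Sketch: 8:1:1 train/validation/test split with a low-resource variant.

The seed, note-level splitting and 10% fraction are assumptions for
illustration only.
"""
import numpy as np

def split_811(ids, seed=0):
    """Shuffle note ids and split them into train/valid/test at 8:1:1."""
    rng = np.random.default_rng(seed)
    ids = np.array(sorted(ids))
    rng.shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_valid = int(0.1 * len(ids))
    return (ids[:n_train],
            ids[n_train:n_train + n_valid],
            ids[n_train + n_valid:])

train_ids, valid_ids, test_ids = split_811(notes["ROW_ID"])

# Low-resource variant: keep the same validation set and shrink only the
# training and test portions (the 10% fraction is a placeholder).
low_train_ids = train_ids[: int(0.1 * len(train_ids))]
low_test_ids = test_ids[: int(0.1 * len(test_ids))]
```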