Multi-task weak supervision enables automated abnormality localization in whole-body FDG-PET/CT

Sabri Eyuboglu∗1, Geoffrey Angus∗1, Bhavik N. Patel2, Anuj Pareek2, Guido Davidzon2, Jared Dunnmon∗∗1, Matthew P. Lungren∗∗2

1 Department of Computer Science, Stanford University, Stanford, CA 94305, USA
2 Department of Radiology, Stanford University, Stanford, CA 94305, USA
∗, ∗∗ Equal contribution

Computational decision support systems could provide clinical value in whole-body FDG-PET/CT workflows. However, the lack of labeled data combined with the sheer size of each scan makes it challenging to apply existing supervised machine learning systems. Leveraging recent advancements in natural language processing, we develop a weak supervision framework that extracts imperfect, yet highly granular regional abnormality labels from the radiologist reports associated with each scan. Our framework automatically labels each region in a custom ontology of anatomical regions, providing a structured profile of the pathologies in each scan. We design an attention-based, multi-task CNN architecture that can be trained using these generated labels to localize abnormalities in whole-body scans. We show that our multi-task representation is critical for strong performance on rare abnormalities for which we have little data and for accurately predicting mortality from imaging data alone (AUROC 80%), a critical challenge for palliative care teams.

1 Introduction

The availability of large, labeled datasets has fueled recent progress in machine learning for medical imaging. Breakthrough studies in chest radiograph diagnosis, skin and mammographic lesion classification, and diabetic retinopathy detection have relied on hundreds of thousands of labeled training examples [1–4]. However, for many diagnostic interpretation tasks, labeled datasets of this size are not readily available, either because (1) the task includes rare diagnoses for which training examples are hard to find, and/or (2) the diagnoses of interest may not be recorded in a structured way within electronic medical records, requiring physicians to manually reinterpret exams or extract labels from written reports. Even when datasets are manually annotated, common changes in the underlying data distribution (e.g. scanner type, imaging protocol, post-processing techniques, patient population, or clinical classification schema) can rapidly render the models they support obsolete. Further, medical datasets are commonly labeled using incomplete label ontologies, leading to undesirable variation in performance on unlabeled subsets of the data (e.g. rare disease types), a problem commonly referred to as hidden stratification [5].

These challenges are particularly apparent when working with whole-body fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT), a medical imaging modality with a critical role in the staging and treatment response assessment of cancer. FDG-PET/CT combines the high sensitivity of PET, which enables the detection of disease at picomolar concentrations, with the higher specificity of CT, and is thus more accurate than either technology alone. Indeed, FDG-PET/CT demonstrates equal sensitivity and better specificity even in the deep nodal regions of the abdomen and the mediastinum [6,7].
As a result of these improved capabilities and the clinical value they provide, over the last five years there has been a 16-fold increase in the number of FDG-PET/CT examinations within the United States. Unfortunately, this increase in imaging demand has coincided with a reduction in both imaging reimbursements and radiologist specialist availability [8].

While machine learning could provide clinical value within FDG-PET/CT workflows, this potential remains largely untapped due to a lack of properly annotated data. While reading an FDG-PET/CT scan, a radiologist will typically report the anatomical regions with abnormal metabolic activity. This task, which we call abnormality localization, is clinically important in FDG-PET/CT studies. However, manually labeling a dataset for abnormality localization is particularly painstaking because it requires either performing pixel-level annotations or resolving abnormalities into a hierarchy of anatomical regions. Existing approaches rely on clinical experts manually segmenting and annotating over ten thousand training examples [9]. In practice, radiologists report abnormalities in many regions, even those in which abnormalities are exceedingly rare. Collecting a sufficient number of training examples to train machine learning models that can detect abnormalities even in these low-prevalence regions can be extremely difficult.

To address these challenges, we present a machine learning framework for training abnormality localization models for large medical images without manual labeling or segmentation. In particular, (1) we introduce a weak-supervision framework for rapidly and systematically generating abnormality localization labels from radiology reports describing FDG-PET/CT findings, and (2) we use multi-task learning to achieve high levels of performance even on rare FDG-PET/CT diagnoses for which we have little data. Our framework enables anatomically-resolved abnormality detection in FDG-PET/CT scans and provides clinical value in four ways: (1) by automatically screening negatives and enabling the prioritization of exams, (2) by aiding radiologists in localizing abnormalities within the scan, which is important for quickly and correctly composing reports, (3) by allowing clinicians to quickly identify rare cases, and (4) by yielding a learned representation of each scan that can enable mortality prediction, which is important for palliative care planning.

We first curate a dataset comprising 6,530 FDG-PET/CT scans, their accompanying reports, and prospective normal-versus-abnormal clinical summary codes recorded by the radiologist at the time of interpretation, similar to those reported by [10]. We leverage a custom ontology of 96 anatomical regions (e.g. left lung, liver, inguinal lymph nodes) to train an expressive language model that extracts from each report the anatomical regions with abnormal FDG uptake. Using the language model predictions as weak labels in the style of [11], we then train a 3D, multi-task convolutional neural network (CNN) to detect abnormal FDG uptake in the PET/CT scan. The model localizes abnormalities by making binary abnormality predictions for a comprehensive set of 26 anatomical regions. We evaluate our approach on a held-out dataset of 814 FDG-PET/CT scans manually annotated by radiologists. While expensive, this fine-grained manual labeling of the test set helps mitigate hidden stratification effects in our analysis [5].
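To make this supervision setup concrete, the sketch below shows how region-level weak labels of this kind could drive training of a shared 3D encoder with one binary abnormality head per anatomical region. It is a minimal PyTorch illustration, not the paper's actual architecture: the class name, backbone, and layer sizes are placeholders, and only the two-channel PET/CT input and the 26 region-level binary tasks are taken from the description above.

```python
import torch
import torch.nn as nn


class MultiTaskAbnormalityNet(nn.Module):
    """Shared 3D encoder with one binary abnormality logit per anatomical region.

    Hypothetical sketch: the real model is a 3D multi-task CNN over whole-body
    PET/CT volumes; the tiny encoder here is only a stand-in.
    """

    def __init__(self, num_regions: int = 26, feat_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 32, kernel_size=3, padding=1),  # 2 channels: PET + CT
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, feat_dim),
            nn.ReLU(),
        )
        # One logit per anatomical region (the multi-task heads).
        self.heads = nn.Linear(feat_dim, num_regions)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, 2, D, H, W) stacked PET/CT
        return self.heads(self.encoder(volume))


def training_step(model, volume, weak_labels):
    """One optimization step against report-derived weak labels.

    weak_labels: (batch, 26) float tensor of 0/1 region-level abnormality labels
    produced by the language model from the radiology report.
    """
    logits = model(volume)
    return nn.functional.binary_cross_entropy_with_logits(logits, weak_labels)
```

Because every region head shares the same encoder, gradients from high-prevalence regions shape a representation that the low-prevalence heads can also exploit, which is the intuition behind the multi-task gains reported below.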
We find that models trained with our multi-task weak supervision framework can detect lymph node, lung, and liver abnormalities, three of the pathologies with the highest incidence in our dataset, with median areas under the ROC curve of 87%, 85%, and 92%, respectively. Worklist prioritization using these models would yield over a 25% increase in the sensitivity of the first 75% of exams in the worklist.

We further show that training our model in a multi-task fashion (i.e. training it to detect abnormalities in many parts of the body simultaneously) yields a powerful learned representation of a whole-body FDG-PET/CT scan that can facilitate training models for more difficult clinical tasks. For instance, this representation allows us to detect rare abnormalities for which we have very little data. When compared to pre-training on the out-of-domain Kinetics action classification dataset [12], pre-training on weakly supervised multi-task abnormality localization enables a 20-point improvement in detecting abnormalities in the adrenal gland (AUROC 78%), and a 22-point improvement in detecting abnormalities in the pancreas (AUROC 78%), two tasks for which we have fewer than two hundred positive training examples. Our multi-task representation also enables a model capable of predicting mortality within a 90-day period (AUROC 80%), a critical challenge for hospitals looking to identify patients that could benefit from palliative care. Unlike existing mortality prediction models that use heterogeneous EHR data, ours makes predictions based only on the raw data of a single scan [13,14].

We evaluate the benefits of our supervision and modeling choices via direct comparison to strong baselines. We find that our language-model labeling framework provides an average improvement of 28% in F1 score with respect to a rule-based approach similar to the one used by [15]. Further, we find that a multi-task, weakly supervised model fine-tuned to make aggregate normal-abnormal predictions is statistically superior to a normal-abnormal classifier trained on the provided clinical summary codes. These results suggest that granular, multi-task weak supervision is more performant than coarse, single-task full supervision. Additionally, we find that the computational cost of fine-tuning abnormality detection models is halved when in-domain multi-task pretraining is used instead of pretraining on an out-of-domain dataset like Kinetics [12]. Finally, we evaluate the utility of our proposed multi-task attention mechanism by comparing performance to a simple sum reduction. We find that our approach not only improves performance, but also produces attention distributions that align well with human anatomy. These distributions improve model interpretability.
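As a rough illustration of how a per-region attention reduction differs from a simple sum over spatial positions, the sketch below gives one plausible attention pooling layer in PyTorch. The names, dimensions, and scoring layer are assumptions made for illustration rather than the paper's exact mechanism; the core idea taken from the text is that each of the 26 region tasks learns its own attention distribution over the encoder's feature-map locations.

```python
import torch
import torch.nn as nn


class RegionAttentionPool(nn.Module):
    """Per-region attention over a 3D encoder feature map (hypothetical sketch).

    Each anatomical-region task gets its own spatial attention distribution, so
    its head can focus on the part of the body it is responsible for.
    """

    def __init__(self, num_regions: int = 26, channels: int = 512):
        super().__init__()
        # One attention scoring map per region task.
        self.attn = nn.Conv3d(channels, num_regions, kernel_size=1)
        self.heads = nn.Linear(channels, num_regions)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, D, H, W) from the shared 3D encoder
        scores = self.attn(feats).flatten(2)              # (b, regions, D*H*W)
        weights = torch.softmax(scores, dim=-1)           # attention per region
        flat = feats.flatten(2)                           # (b, channels, D*H*W)
        # Weighted spatial average: one pooled feature vector per region.
        pooled = torch.einsum("brn,bcn->brc", weights, flat)
        # Each region's head scores its own pooled features.
        logits = (pooled * self.heads.weight.unsqueeze(0)).sum(-1) + self.heads.bias
        return logits  # (b, regions)
```

A sum-reduction baseline would instead collapse the feature map with a single pooling operation shared by all heads; the comparison reported above is against that kind of shared reduction, and the learned per-region attention maps are what align with human anatomy.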