Exploiting Sentence Embedding for Medical Question Answering


The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

Yu Hao,1∗ Xien Liu,1∗ Ji Wu,1 Ping Lv2
1Department of Electronic Engineering, Tsinghua University, Beijing, China
[email protected], {xeliu, wuji ee} [email protected]
2Tsinghua-iFlytek Joint Laboratory, iFlytek Research, Beijing, China
luping [email protected]

∗These two authors contribute equally to this work.
Copyright (c) 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Despite the great success of word embedding, sentence embedding remains a not-well-solved problem. In this paper, we present a supervised learning framework to exploit sentence embedding for the medical question answering task. The learning framework consists of two main parts: 1) a sentence embedding producing module, and 2) a scoring module. The former is developed with contextual self-attention and multi-scale techniques to encode a sentence into an embedding tensor; this module is called Contextual self-Attention Multi-scale Sentence Embedding (CAMSE) for short. The latter employs two scoring strategies: Semantic Matching Scoring (SMS) and Semantic Association Scoring (SAS). SMS measures similarity while SAS captures association between sentence pairs: a medical question concatenated with a candidate choice, and a piece of corresponding supportive evidence. The proposed framework is examined on two Medical Question Answering (MedicalQA) datasets collected from real-world applications: a medical exam and clinical diagnosis based on electronic medical records (EMR). The comparison results show that our proposed framework achieves significant improvements over competitive baseline approaches. Additionally, a series of controlled experiments are conducted to illustrate that the multi-scale strategy and the contextual self-attention layer play important roles in producing effective sentence embeddings, and that the two scoring strategies are highly complementary to each other for question answering problems.

Introduction

Embedding learning at the word level has achieved much progress (Bengio et al. 2003; Mikolov et al. 2013b; Mikolov et al. 2013a; Pennington, Socher, and Manning 2014), and pre-trained word embeddings have become an almost standard input to deep learning frameworks for solving upstream applications, such as reading comprehension tasks (Raison et al. 2018; Wang et al. 2018; Zhang et al. 2018; Chen et al. 2017; Cheng, Dong, and Lapata 2016; Dhingra et al. 2016; Seo et al. 2016). However, learning embeddings at the sentence/document level is still a very difficult task, not well solved at present. The study of sentence embedding runs along two lines: 1) exploiting the semantic/linguistic properties obtained within sentence embeddings (Zhu, Li, and Melo 2018; Baroni et al. 2018), and 2) designing learning methods to produce effective sentence embeddings. The learning methods can generally be categorized into two groups: 1) obtaining universal sentence embeddings with an unsupervised learning framework (Hill, Cho, and Korhonen 2016; Kiros et al. 2015; Le and Mikolov 2014), and 2) producing task-dependent sentence embeddings with a supervised learning framework (Palangi et al. 2016; Tan et al. 2016; Cheng, Dong, and Lapata 2016; Lin et al. 2017).

Though plenty of successful deep learning models are built at the word level (word embedding), sentence embedding offers some distinct and valuable merits. For example, most reading comprehension models calculate pairwise similarity at the word level to extract keywords in the answer. However, these fine-grained models may be misled under certain circumstances, such as a long paragraph containing many noisy words that are similar to words appearing in the question but unrelated to answering it. Furthermore, models built on sentence embeddings can sometimes be more interpretable. For example, we can encode a sentence into several embeddings to capture different semantic aspects of the sentence. As is well known, interpretability can become crucial for certain real applications, such as tasks from the medical domain.

In this paper, we focus on developing a supervised sentence embedding learning framework for solving medical question answering problems. To maintain model interpretability, we adopt the self-attention structure proposed by Lin et al. (2017) to produce sentence embeddings; the only difference is that a contextual layer is used in conjunction with the self-attention. Under certain circumstances, the valuable information resides in a unit whose size lies between the word and the sentence. Take medical text for instance: a large number of medical terminologies are entities consisting of several sequential words, such as Acute Upper Respiratory Infection. Encoding such sequential words as a single unit requires a flexible scale between the word and sentence levels, assigning the words in the unit similar attention instead of treating them like a bag of unrelated words, which can easily be misled by noisy words in long paragraphs when computing pairwise word similarities. For example, sentences that include Acute Gastroenteritis or Acute Cholecystitis may be considered somewhat related to a question that describes Acute Upper Respiratory Infection because Acute appears in all of them, even though these sentences concentrate on totally different diseases. Therefore, we propose a contextual self-attention and multi-scale strategy to produce a sentence embedding tensor that captures multi-scale information from the sentence. The contextual attention detects meaningful word blocks (entities) and assigns words in the same block similar attention values. The multi-scale strategy allows the model to directly encode sequential words as an integral unit and to extract informative entities or phrases. The contextual attention is a soft assignment of attention values, while the multi-scale strategy is a hard binding of sequential words.

Even though producing a tensor preserves more information, Lin et al. simply calculate similarities between corresponding semantic subspaces and fail to capture the association between different subspaces. In an attempt to fully exploit the abundant information in the tensor, we propose two scoring strategies: Semantic Matching Scoring (SMS) and Semantic Association Scoring (SAS).

The proposed framework is examined on two MedicalQA tasks: the first task is from the National Medical Licensing Examination in China (NMLEC), and the second task is Clinical Diagnosis based on Electronic Medical Records (CD-EMR).

In the rest of this paper, we will first define the medical question answering task and introduce the two datasets. Then the supervised sentence embedding learning framework (consisting of the sentence embedding producing module CAMSE and the scoring module) is introduced, and a series of comparison results and some crucial analyses are presented.

MedicalQA Task Description

Here, we define the MedicalQA task with three components:
- Question: a short paragraph/document in text describing a medical problem.
- Candidate choices: multiple candidate choices are given for each question, and only one is the correct answer.
- Evidence documents: for each candidate choice, a collection of short documents/paragraphs is given as evidence to support the choice as the right answer.

The goal of MedicalQA is to determine the correct answer based on the corresponding evidence documents with an appropriate scoring scheme.

MedicalQA#1: NMLEC

Data source NMLEC is an annual certification exam which comprehensively evaluates doctors' medical knowledge and clinical skills. The General Written Test part of NMLEC consists of 600 multiple-choice questions. Each question is given 5 candidate choices (one example is presented in Fig. 1), and only one of them is the correct answer. The exam examines more than 20 medical subjects.

Question: A male patient, aged 20. He had diarrhea 3 weeks ago; 2 days later the symptoms improved and he did not mind it. 1 day ago, in the morning, he felt weakness in the limbs and pain in both legs, and the illness gradually turned more serious. His family found that his eyelids could not fully close; there was no dysphagia, and urine and stool were normal. Admission examination: clear consciousness, normal speech, bilateral peripheral facial paralysis, limb muscle strength II, low muscle tension, no obvious sensory disturbance.
This patient is most likely diagnosed as: ( )
(A) Guillain-Barre syndrome (B) Parkinson's disease (C) Purulent meningitis (D) Myasthenia gravis (E) Acute myelitis

Figure 1: An example question from the General Written Test part of NMLEC.

Training/test set We collected 10 suites of the exam, 6,000 questions in total, as the test set. To avoid test questions appearing in the training set with minor variance, we dropped training questions that are very similar to questions from the test set, resulting in a total of 250,000 medical questions as the training set. The similarity of two questions is measured by comparing the Levenshtein distance (Levenshtein 1966) with a threshold ratio of 0.8.

Evidence documents The General Written Test of NMLEC mainly examines medical knowledge from medical textbooks. Therefore, we first collected more than 30 publications (including textbooks, guidebooks, etc.) as the evidence source. Then we produced evidence documents from the evidence
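The structured self-attention that CAMSE builds on can be sketched as follows. This is a minimal NumPy illustration of the Lin et al. (2017) mechanism that the paper extends: attention rows project the contextual word representations into r semantic subspaces, yielding an embedding matrix rather than a single vector. The toy sizes, random weights, and the omission of the contextual layer and multi-scale branches are all simplifications of this sketch, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attentive_embedding(H, W1, W2):
    """Structured self-attention (Lin et al. 2017).

    H:  (n, d)  contextual word representations (e.g. BiLSTM outputs;
                CAMSE additionally applies a contextual layer before
                the attention -- omitted in this sketch).
    W1: (da, d), W2: (r, da)  learned projections.
    Returns M: (r, d), one embedding per semantic subspace.
    """
    A = softmax(W2 @ np.tanh(W1 @ H.T), axis=1)  # (r, n) attention weights
    return A @ H                                 # (r, d) embedding matrix

rng = np.random.default_rng(0)
n, d, da, r = 12, 8, 6, 4   # toy sizes: words, hidden dim, attn dim, subspaces
H = rng.normal(size=(n, d))
M = self_attentive_embedding(H, rng.normal(size=(da, d)), rng.normal(size=(r, da)))
print(M.shape)  # (4, 8)
```

Each of the r rows of M attends to a different part of the sentence, which is what makes this family of models inspectable: the attention matrix A can be visualized to see which word blocks each subspace focuses on.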
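The contrast between SMS and SAS can be illustrated with a small sketch. The paper's exact formulations are not given in this excerpt, so the functions below are stand-ins: `sms` compares only corresponding subspaces (as Lin et al. do), while `sas` lets every question subspace associate with its best-matching evidence subspace, including non-corresponding ones. The `rank_choices` helper and its mixing weight `alpha` are hypothetical names added for this illustration.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def sms(Mq, Me):
    """Semantic Matching Scoring (illustrative): compare subspace i of
    the question tensor only with subspace i of the evidence tensor."""
    return sum(cosine(q, e) for q, e in zip(Mq, Me)) / len(Mq)

def sas(Mq, Me):
    """Semantic Association Scoring (illustrative): allow cross-subspace
    associations by taking each question subspace's best evidence match."""
    sims = np.array([[cosine(q, e) for e in Me] for q in Mq])  # (r, r)
    return float(sims.max(axis=1).mean())

def rank_choices(question_emb, evidence_embs, alpha=0.5):
    """Score each candidate choice by its best supporting evidence piece,
    mixing the two strategies with a hypothetical weight alpha."""
    scores = [max(alpha * sms(question_emb, e) + (1 - alpha) * sas(question_emb, e)
                  for e in evids)
              for evids in evidence_embs]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
Mq = rng.normal(size=(4, 8))                      # question+choice embedding
distractor = [rng.normal(size=(4, 8))]            # unrelated evidence
supportive = [Mq + 0.01 * rng.normal(size=(4, 8))]  # near-duplicate evidence
print(rank_choices(Mq, [distractor, supportive]))   # -> 1
```

The complementarity the paper reports is plausible in this toy form as well: SMS rewards evidence whose subspaces line up one-to-one with the question's, while SAS can still credit evidence that expresses related information in a different subspace.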
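The Levenshtein-based deduplication of the training set can be sketched as below. The paper does not specify how the distance is normalized into a ratio, so `similarity_ratio` uses one common convention (1 minus distance divided by the longer length); the function names are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity_ratio(a: str, b: str) -> float:
    """One common normalization: 1 - distance / max length."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def deduplicate(train_questions, test_questions, threshold=0.8):
    """Drop training questions too similar to any test question."""
    return [q for q in train_questions
            if all(similarity_ratio(q, t) < threshold for t in test_questions)]

print(levenshtein("kitten", "sitting"))  # -> 3
```

With a threshold of 0.8, a training question differing from a test question by only a few characters (ratio above 0.8) would be dropped, while genuinely distinct questions are kept.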
