The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)

A Unified Model for Document-Based Question Answering Based on Human-Like Reading Strategy

Weikang Li, Wei Li, Yunfang Wu∗
Key Laboratory of Computational Linguistics, Peking University, MOE, China
{wavejkd,liweitj47,wuyf}@pku.edu.cn

Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Document-based Question Answering (DBQA) in Natural Language Processing (NLP) is important but difficult because of the long documents and complex questions involved. Most previous deep learning methods focus mainly on the similarity computation between two sentences. However, DBQA stems to some degree from reading comprehension, which is originally used to train and test people's ability to read and think logically. Inspired by the strategy people use in reading comprehension tests, we propose a unified model based on a human-like reading strategy. The unified model contains three major encoding layers that correspond to different steps of the reading strategy: the basic encoder, the combined encoder and the hierarchical encoder. We conduct extensive experiments on both the English WikiQA dataset and a Chinese dataset, and the experimental results show that our unified model is effective and yields state-of-the-art results on the WikiQA dataset.

Introduction

Document-based Question Answering (DBQA) is an important issue in natural language processing (NLP). Given a document and a question related to the document, the system is required to give an answer to the question. The answer could be a word, a text span or a sentence extracted from the document. Table 1 gives an example of DBQA. Recently, more and more research has focused on this challenging problem.

Much has been achieved with deep learning models, which obtain better performance than traditional machine learning methods. Inspired by the great success of deep learning in speech and image recognition, researchers have adopted various approaches to DBQA, including convolutional neural networks (CNN) (Feng et al. 2015), recurrent neural networks (RNN) (Tan et al. 2015), attention-based methods (Seo et al. 2016) and generative adversarial networks (GAN) (Wang et al. 2017). Many other methods have emerged to dig out more information for DBQA. Document summaries can also serve as effective information in many NLP tasks. Choi et al. (2017) and Miller et al. (2016) used the most related sentences from the document as a document summary for the answer selection task.

However, the simple transfer of deep learning to DBQA is not entirely logical. In our opinion, DBQA is similar to the reading comprehension test, which is designed to test people's comprehension of a document. In their school years, students do many reading comprehension tests. In this paper, we provide a solution that simulates people's reading strategy in such tests. Under this assumption, the detailed reading strategy is as follows:

1. Go over the document quickly to get a general understanding of it;
2. Read the question carefully, equipped with the general understanding of the document;
3. Go back to the document with the prior knowledge of the question and get the right answer.

Such a reading strategy can be implemented with neural network models.

As we know, the document in a reading comprehension test usually has a title, which has an important impact on how people do the test. Unfortunately, the title information is neglected by most research on DBQA. In this paper, we use the title information (a natural document summary) as the general understanding of a document. For documents without titles, we make several attempts to obtain this general understanding: using the first sentence, using the last sentence, and training an LDA or LSA model to get the topic of a document. In addition, we have tried many ways to understand questions well given the general understanding of the document. Finally, we propose a unified neural network model that follows the human-like reading strategy above.

Our contributions in this paper can be summarized as follows:

• We propose a human-like reading strategy for the DBQA task that mirrors the logic students follow when doing reading comprehension tests.
• Based on the reading strategy, we combine the general understanding of both the document and the question.
• We propose a unified neural network model that is suitable for our reading strategy and tackles the problem of DBQA.

We conduct experiments on the English WikiQA dataset (Yang, Yih, and Meek 2015) and the Chinese DBQA dataset (Duan 2016). On the WikiQA dataset, our model obtains a MAP of 0.754, which outperforms the best previous method by 1.1 MAP points. On the Chinese DBQA dataset, our model gets comparable results without using any features.

Document Title: Uncle Sam
Document: J. M. Flagg's 1917 poster, based on the original British Lord Kitchener poster of three years earlier, was used to recruit soldiers for both World War I and World War II. ...... Uncle Sam (initials U.S.) is a common national personification of the American government that, according to legend, came into use during the War of 1812 and was supposedly named for Samuel Wilson. It is not clear whether this reference is to Uncle Sam as a metaphor for the United States. ......
Question: what does uncle sam represent to the American people ?
Answer (Sentence): Uncle Sam (initials U.S.) is a common national personification of the American government that, according to legend, came into use during the War of 1812 and was supposedly named for Samuel Wilson.
Answer (Span): a common national personification of the American government.
Answer (Word): national personification.

Table 1: An Outline of DBQA

Related Work

Reading documents and answering related questions by machine is a useful and meaningful task; however, it remains an unsolved challenge. DBQA has several different answer types, as outlined in Table 1. Our work focuses on the form in which the answer is a whole sentence.

As we know, many NLP problems involve matching two or more sequences to make a decision. For DBQA, some studies also treat the problem as matching two sequences (question and candidate answer) to decide whether a sentence from the document can answer the question.

In the field of sentence pair matching, various deep neural network models have been proposed. Two levels of matching strategies are considered: the first converts the whole source and target sentences into embedding vectors in latent semantic spaces and then calculates a similarity score between them; the second calculates similarity scores among all possible local positions of the source and target sentences, and then summarizes the local scores into a final similarity score.

Works using the first strategy include bag-of-words based methods (Wang et al. 2011) and the CNN model Arc-I (Hu et al. 2014). Qiu and Huang (2015) applied a tensor transformation layer on CNN-based embeddings to capture the interactions between question and answer more effectively. Long short-term memory (LSTM) network models (Palangi et al. 2016) have also been explored for this problem. Works using the second strategy include DeepMatch (Lu and Li 2013), which incorporated latent topics to make the local matching structure sparse, and Arc-II (Hu et al. 2014). Pang et al. (2016) built hierarchical convolution layers on the word similarity matrix between sentences, and Yin and Schütze (2015) proposed MultiGranCNN to integrate multiple granularity levels of matching models. The multiple positional sentence representation (MPSR) model (Wan et al. 2016) employed LSTM and an interactive tensor to capture matching features with positional local context. For both levels of matching strategies, the ways of computing similarity between two sentences are similar; the most popular are cosine similarity (Tan et al. 2016), element-wise product (Seo et al. 2016) and tensor computation (Bowman et al. 2015).

As a task designed to train people's reading and understanding skills, DBQA is more complex, logical and skillful than a simple comparison of the similarity between two sentences. We imitate people's strategy for reading comprehension tests with a neural network.

Attempts have also been made to study how people read. Masson (1983) studied how people answer questions by first skimming the document, identifying relevant parts, and carefully reading these parts to obtain an answer. Inspired by this observation, Golub et al. (2017) proposed a coarse-to-fine model for question answering, which first selects relevant sentences and then generates an answer. Different from their method, our work focuses on people's reading strategy when doing reading comprehension tests.

Titles are naturally used to obtain a general understanding of documents in people's reading, and summarization offers another way to get the general meaning of a document in the absence of a title. Usually, there are two ways to automatically summarize a document: extractive summarization and abstractive summarization. Interest in document summarization has increased over the years.

Extractive summarization works by finding the salient sentences in a document. Research at the IBM laboratory (Luhn 1958) worked on the frequency of words in the text. Edmundson (1969) used title words, core phrases, key concepts and position, which are surface-level information. Gillick (2011) employed a classification
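The three-step reading strategy in the introduction is only named in this excerpt, not specified. The following is a deliberately toy sketch of how such an encoder stack could be wired, with mean pooling, a concatenate-and-project step and cosine scoring standing in for the paper's basic, combined and hierarchical encoders; every name, shape and operation below is my own stand-in, and the word vectors are random rather than trained:

```python
# Hypothetical wiring of the three-step reading strategy as three encoding
# stages. Not the authors' model: mean pooling, concatenate-and-project and
# cosine scoring are stand-ins for the basic / combined / hierarchical
# encoders, and embeddings are random instead of trained.
import numpy as np

rng = np.random.default_rng(1)
d = 16
W = rng.standard_normal((2 * d, d)) * 0.1  # projection used in step 2
emb = {}  # word -> random vector, filled lazily


def lookup(w):
    if w not in emb:
        emb[w] = rng.standard_normal(d)
    return emb[w]


def basic_encode(tokens):
    # Step 1: a quick pass over text for a gist vector (mean pooling as a
    # stand-in for the basic encoder).
    return np.mean([lookup(w) for w in tokens], axis=0)


def combined_encode(question, doc_gist):
    # Step 2: read the question "equipped with" the document gist
    # (concatenate and project, as a stand-in for the combined encoder).
    return np.tanh(np.concatenate([basic_encode(question), doc_gist]) @ W)


def score_sentence(sentence, q_aware):
    # Step 3: go back to the document and score each candidate sentence
    # against the question-aware representation (cosine score).
    s = basic_encode(sentence)
    return float(s @ q_aware / (np.linalg.norm(s) * np.linalg.norm(q_aware)))


title = "uncle sam".split()  # the title as a natural document summary
question = "what does uncle sam represent".split()
candidates = ["the poster was used to recruit soldiers".split(),
              "uncle sam is a national personification".split()]

gist = basic_encode(title)                                # step 1
q_aware = combined_encode(question, gist)                 # step 2
scores = [score_sentence(c, q_aware) for c in candidates] # step 3
best = candidates[int(np.argmax(scores))]
```

With random embeddings the winning sentence is arbitrary; the point is only the data flow from document gist, through a question representation conditioned on it, to per-sentence answer scores.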

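The similarity measures and the two matching strategies surveyed in the related work can be illustrated with a small NumPy sketch. This is not taken from any of the cited papers; the word vectors are random stand-ins for trained embeddings:

```python
# Sketch of the two sentence-matching strategies and the common similarity
# measures named in the related work (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
words = "what does uncle sam represent national personification of the".split()
vocab = {w: rng.standard_normal(8) for w in words}


def sent_vec(tokens):
    # First strategy: encode the whole sentence as one vector
    # (here, mean pooling of word vectors).
    return np.mean([vocab[w] for w in tokens], axis=0)


def cosine(u, v):
    # Cosine similarity (as in Tan et al. 2016).
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def elementwise_score(u, v):
    # Element-wise product (as in Seo et al. 2016); a learned scorer would
    # sit on top -- summing here reduces it to a dot product.
    return float(np.sum(u * v))


def local_match_matrix(q_tokens, a_tokens):
    # Second strategy: similarity at every local position pair, later
    # summarized e.g. by hierarchical convolutions (Pang et al. 2016).
    return np.array([[cosine(vocab[q], vocab[a]) for a in a_tokens]
                     for q in q_tokens])


q = "what does uncle sam represent".split()
a = "national personification of the".split()
pair_score = elementwise_score(sent_vec(q), sent_vec(a))  # first strategy
M = local_match_matrix(q, a)                              # second strategy
```

The matrix `M` has one row per question word and one column per answer word, which is exactly the structure the local-matching models summarize into a final score.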

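For untitled documents the paper mentions training an LDA or LSA model to get the topic of a document. One minimal LSA-flavoured illustration (my own toy example, not the authors' code) takes an SVD of the term-sentence count matrix and reads a pseudo-title off the dominant singular vector:

```python
# Toy LSA sketch: approximate a "general understanding" of an untitled
# document via the dominant left singular vector of its term-sentence
# count matrix. Illustrative only; the paper trains LDA/LSA models,
# not this exact code.
import numpy as np

doc = [
    "uncle sam is a national personification of the american government",
    "the poster was used to recruit soldiers",
    "uncle sam came into use during the war of 1812",
]
terms = sorted({w for s in doc for w in s.split()})
# Count matrix X: rows are terms, columns are sentences.
X = np.array([[s.split().count(t) for s in doc] for t in terms], dtype=float)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
topic = np.abs(U[:, 0])  # weight of each term on the top latent topic
top_terms = [terms[i] for i in np.argsort(-topic)[:3]]
```

The three highest-weight terms then play the role the title would otherwise play as a summary of the document.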

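The frequency idea attributed to Luhn (1958) in the extractive-summarization discussion can likewise be sketched in a few lines. This is a simplification of the original method, not a faithful reimplementation; the stopword list and the averaging are ad hoc choices of mine:

```python
# Minimal Luhn-style extractive summarizer: score each sentence by the
# average document-level frequency of its content words and keep the best
# ones. A simplification of the idea in Luhn (1958); the stopword list
# is an ad hoc stand-in for real preprocessing.
from collections import Counter

STOPWORDS = {"the", "a", "of", "is", "to", "was", "it", "into", "during"}


def summarize(sentences, k=1):
    freq = Counter(w for s in sentences
                   for w in s.lower().split() if w not in STOPWORDS)

    def score(s):
        content = [w for w in s.lower().split() if w not in STOPWORDS]
        return sum(freq[w] for w in content) / max(len(content), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]  # keep original order


doc = [
    "The poster was used to recruit American soldiers.",
    "Uncle Sam is a national personification of the American government.",
    "Uncle Sam came into use during the War of 1812.",
]
summary = summarize(doc)
```

The second sentence wins here because its content words ("uncle", "sam", "american") recur across the document, which is exactly the frequency signal Luhn exploited.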
