Answering Complex Open-Domain Questions Through Iterative Query Generation
Peng Qi†  Xiaowen Lin*†  Leo Mehr*†  Zijian Wang*‡  Christopher D. Manning†
† Computer Science Department  ‡ Symbolic Systems Program
Stanford University
{pengqi, veralin, leomehr, zijwang, manning}@cs.stanford.edu
* Equal contribution, order decided by a random number generator.

Abstract

It is challenging for current one-step retrieve-and-read question answering (QA) systems to answer questions like “Which novel by the author of ‘Armada’ will be adapted as a feature film by Steven Spielberg?” because the question seldom contains retrievable clues about the missing entity (here, the author). Answering such a question requires multi-hop reasoning, where one must gather information about the missing entity (or facts) to proceed with further reasoning. We present GOLDEN (Gold Entity) Retriever, which iterates between reading context and retrieving more supporting documents to answer open-domain multi-hop questions. Instead of using opaque and computationally expensive neural retrieval models, GOLDEN Retriever generates natural language search queries given the question and available context, and leverages off-the-shelf information retrieval systems to query for missing entities. This allows GOLDEN Retriever to scale up efficiently for open-domain multi-hop reasoning while maintaining interpretability. We evaluate GOLDEN Retriever on the recently proposed open-domain multi-hop QA dataset, HOTPOTQA, and demonstrate that it outperforms the best previously published model despite not using pretrained language models such as BERT.

[Figure 1 (image): the question “Which novel by the author of ‘Armada’ will be adapted as a feature film by Steven Spielberg?” (A: Ready Player One), shown alongside search results retrieved with queries derived from the original question.]
Figure 1: An example of an open-domain multi-hop question from the HOTPOTQA dev set, where “Ernest Cline” is the missing entity. Note from the search results that it cannot be easily retrieved based on merely the question. (Best viewed in color)

1 Introduction

Open-domain question answering (QA) is an important means for us to make use of knowledge in large text corpora, and enables diverse queries without requiring a knowledge schema ahead of time. Enabling such systems to perform multi-step inference can further expand our capability to explore the knowledge in these corpora (e.g., see Figure 1).

Fueled by the recently proposed large-scale QA datasets such as SQuAD (Rajpurkar et al., 2016, 2018) and TriviaQA (Joshi et al., 2017), much progress has been made in open-domain question answering. Chen et al. (2017) proposed a two-stage approach of retrieving relevant content with the question, then reading the paragraphs returned by the information retrieval (IR) component to arrive at the final answer. This “retrieve and read” approach has since been adopted and extended in various open-domain QA systems (Nishida et al., 2018; Kratzwald and Feuerriegel, 2018), but it is inherently limited to answering questions that do not require multi-hop/multi-step reasoning. This is because for many multi-hop questions, not all the relevant context can be obtained in a single retrieval step (e.g., “Ernest Cline” in Figure 1).
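To make the single-step pipeline concrete, the sketch below shows one way such a retrieve-and-read system can be assembled, using scikit-learn's TfidfVectorizer over a three-document toy corpus and a placeholder reader model. This is an illustrative assumption rather than Chen et al.'s (2017) implementation, which builds a hashed bigram TF-IDF index over all of English Wikipedia.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a Wikipedia index; the real system indexes millions of articles.
documents = {
    "Armada (novel)": "Armada is a science fiction novel by Ernest Cline.",
    "Ernest Cline": "Ernest Christy Cline co-wrote the screenplay for the film "
                    "adaptation of Ready Player One, directed by Steven Spielberg.",
    "Armada Centre": "The Armada Centre is a shopping centre in Plymouth.",
}
titles = list(documents)

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform([documents[t] for t in titles])

def retrieve(query, k=5):
    """Return the titles of the top-k documents by TF-IDF cosine similarity."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [titles[i] for i in scores.argsort()[::-1][:k]]

def one_step_qa(question, reader):
    # One-step retrieve-and-read: the question itself is the only query.
    context = [documents[t] for t in retrieve(question)]
    return reader(question, context)  # `reader` stands in for a neural QA model

For the question in Figure 1, a pipeline like this can surface articles that lexically overlap with “Armada” or “Steven Spielberg”, but it has no direct way to retrieve the Ernest Cline article, which only becomes relevant after reading about the novel.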
More recently, the emergence of multi-hop question answering datasets such as QAngaroo (Welbl et al., 2018) and HOTPOTQA (Yang et al., 2018) has sparked interest in multi-hop QA in the research community. Designed to be more challenging than SQuAD-like datasets, they feature questions that require context from more than one document to answer, testing QA systems’ abilities to infer the answer in the presence of multiple pieces of evidence and to efficiently find the evidence in a large pool of candidate documents. However, since these datasets are still relatively new, most of the existing research focuses on the few-document setting, where a relatively small set of context documents is given, which is guaranteed to contain the “gold” context documents, all those from which the answer comes (De Cao et al., 2019; Zhong et al., 2019).

In this paper, we present GOLDEN (Gold Entity) Retriever. Rather than relying purely on the original question to retrieve passages, the central innovation is that at each step the model also uses IR results from previous hops of reasoning to generate a new natural language query and retrieve new evidence to answer the original question. For the example in Figure 1, GOLDEN would first generate a query to retrieve Armada (novel) based on the question, then query for Ernest Cline based on the newly gained knowledge in that article. This allows GOLDEN to leverage off-the-shelf, general-purpose IR systems to scale open-domain multi-hop reasoning to millions of documents efficiently, and to do so in an interpretable manner. Combined with a QA module that extends BiDAF++ (Clark and Gardner, 2017), our final system outperforms the best previously published system on the open-domain (fullwiki) setting of HOTPOTQA without using powerful pretrained language models like BERT (Devlin et al., 2019).
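To make this iterative procedure concrete, here is a minimal sketch in Python; it is an outline under simplifying assumptions, not the paper's implementation. Here generate_query, retrieve, and read are placeholders for GOLDEN Retriever's query generation models, an off-the-shelf IR system, and the QA module, and the number of reasoning hops is assumed to be fixed at two, as in HOTPOTQA.

def golden_style_qa(question, generate_query, retrieve, read, num_hops=2, k=5):
    """Iterative retrieve-and-read: each hop issues a new natural language
    query conditioned on the question and the evidence gathered so far."""
    context = []
    for hop in range(num_hops):
        # e.g., hop 1 might yield "Armada novel", hop 2 "Ernest Cline novel"
        query = generate_query(question, context, hop)
        context.extend(retrieve(query, k=k))
    # The QA module answers from the evidence accumulated over all hops.
    return read(question, context)

Because every intermediate query is plain natural language, the chain of evidence gathering can be inspected directly, which is what underlies the interpretability claim above.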
The main contributions of this paper are: (a) a novel iterative retrieve-and-read framework capable of multi-hop reasoning in open-domain QA (code and pretrained models are available at https://github.com/qipeng/golden-retriever); (b) a natural language query generation approach that guarantees interpretability in the multi-hop evidence gathering process; (c) an efficient training procedure to enable query generation with minimal supervision signal that significantly boosts the recall of gold supporting documents in retrieval.

2 Related Work

Open-domain question answering (QA)  Inspired by the series of TREC QA competitions (http://trec.nist.gov/data/qamain.html), Chen et al. (2017) were among the first to adapt neural QA models to the open-domain setting. They built a simple inverted index lookup with TF-IDF on the English Wikipedia, and used the question as the query to retrieve the top 5 results for a reader model to produce answers from. Recent work on open-domain question answering largely follows this retrieve-and-read approach, and focuses on improving the information retrieval component with question answering performance in consideration (Nishida et al., 2018; Kratzwald and Feuerriegel, 2018; Nogueira et al., 2019). However, these one-step retrieve-and-read approaches are fundamentally ill-equipped to address questions that require multi-hop reasoning, especially when the necessary evidence is not readily retrievable with the question.

Multi-hop QA datasets  QAngaroo (Welbl et al., 2018) and HOTPOTQA (Yang et al., 2018) are among the largest-scale multi-hop QA datasets to date. While the former is constructed around a knowledge base and the knowledge schema therein, the latter adopts a free-form question generation process in crowdsourcing and span-based evaluation. Both datasets feature a few-document setting where the gold supporting facts are provided along with a small set of distractors to ease the computational burden. However, researchers have shown that this sometimes results in gameable contexts, and thus does not always test the model’s capability of multi-hop reasoning (Chen and Durrett, 2019; Min et al., 2019a). Therefore, in this work, we focus on the fullwiki setting of HOTPOTQA, which features a truly open-domain setting with more diverse questions.

Multi-hop QA systems  At a broader level, the need for multi-step searches, query task decomposition, and subtask extraction has been clearly recognized in the IR community (Hassan Awadallah et al., 2014; Mehrotra et al., 2016; Mehrotra and Yilmaz, 2017), but multi-hop QA has only recently been studied closely with the release of large-scale datasets. Much research has focused on enabling multi-hop reasoning in question answering models in the few-document setting, e.g., by modeling entity graphs (De Cao et al., 2019) or scoring answer candidates against the context (Zhong et al., 2019). These approaches, however, suffer from scalability issues when the number of supporting documents and/or answer candidates grows beyond a few dozen. Ding et al. (2019) apply entity graph modeling to HOTPOTQA, where they expand a small entity graph starting from the question to arrive at the context for the QA model. However, centered around entity names, this model risks missing purely descriptive clues in the question. Das et al. (2019) propose a neural retriever trained with distant supervision to bias towards paragraphs containing answers to the given questions, which is then used in a multi-step reader-reasoner framework. This does not fundamentally address the discoverability issue in open-domain multi-hop QA, however, because usually not all the evidence can be directly retrieved with the question.

[...] (or events), but these connections do not necessarily conform to any predefined knowledge schema. We contrast this to what we call the few-document setting of multi-hop QA, where the QA system is presented with a small set of documents $D_{\text{few-doc}} = \{d_1, \ldots, d_S, d'_1, \ldots, d'_D\}$, where $d_1, \ldots, d_S$ are the gold supporting documents and $d'_1, \ldots, d'_D$ comprise a small set of “distractor” documents that test whether the system is able to pick out the correct set of supporting documents in the presence of noise. This setting is suitable for testing QA systems’ ability to perform multi-hop reasoning given the gold supporting documents.
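As a toy illustration of the contrast between the two settings (not code from the paper; retriever is a hypothetical search function), the few-document input simply mixes the gold documents with distractors, whereas the open-domain (fullwiki) setting starts from the question alone and must find evidence in the entire corpus:

import random

def make_few_doc_input(gold_docs, distractor_docs, seed=0):
    """Few-document setting: the gold supporting documents d_1..d_S are
    guaranteed to be present among the distractors d'_1..d'_D."""
    docs = list(gold_docs) + list(distractor_docs)
    random.Random(seed).shuffle(docs)
    return docs

def make_fullwiki_input(question, retriever, k=10):
    """Open-domain (fullwiki) setting: nothing but the question is given;
    supporting documents must be retrieved from the full corpus."""
    return retriever(question, k=k)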