
Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs

Xiaolu Lu∗ (RMIT University, Australia, [email protected]), Soumajit Pramanik (MPI for Informatics, Germany, [email protected]), Rishiraj Saha Roy (MPI for Informatics, Germany, [email protected]), Abdalghani Abujabal (Amazon Alexa, Germany, [email protected]), Yafang Wang (Ant Financial Services Group, China, [email protected]), Gerhard Weikum (MPI for Informatics, Germany, [email protected])

∗Part of this work was done during the author's internship at the MPI for Informatics.

arXiv:1908.00469v2 [cs.IR] 8 Aug 2019

ABSTRACT

Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex questions directly from textual sources on-the-fly, by computing similarity joins over partial results from different documents. Our method is completely unsupervised, avoiding training-data bottlenecks and being able to cope with rapidly evolving ad hoc topics and formulation style in user questions. QUEST builds a noisy quasi KG with node and edge weights, consisting of dynamically retrieved entity names and relational phrases. It augments this graph with types and semantic alignments, and computes the best answers by an algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex questions, and show that it substantially outperforms state-of-the-art baselines.

ACM Reference Format:
Xiaolu Lu, Soumajit Pramanik, Rishiraj Saha Roy, Abdalghani Abujabal, Yafang Wang, and Gerhard Weikum. 2019. Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs. In Proceedings of ACM Conference (Conference'17). ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
Conference'17, July 2017, Washington, DC, USA
© 2019 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00. https://doi.org/10.1145/nnnnnnn.nnnnnnn

1 INTRODUCTION

1.1 Motivation

Question answering (QA) over the Web and derived sources of knowledge has been well researched [12, 24, 40]. Studies show that many Web search queries have the form of questions [64], and this fraction is increasing as voice search becomes ubiquitous [29]. The focus of this paper is on providing direct answers to fact-centric ad hoc questions, posed in natural language or in telegraphic form [35, 51]. Earlier approaches for such QA, up to when IBM Watson [25] won the Jeopardy! quiz show, mostly tapped into textual sources (including Wikipedia articles) using passage retrieval and other techniques [49, 62]. In the last few years, the paradigm of translating questions into formal queries over structured knowledge graphs (KGs), also known as knowledge bases (KBs), and databases (DBs), including Linked Open Data, has become prevalent [18, 59].

QA over structured data translates the terms in a question into the vocabulary of the underlying KG or DB: entity names, semantic types, and predicate names for attributes and relations. State-of-the-art systems [1, 6, 7, 24, 69] perform well for simple questions that involve a few predicates around a single target entity (or a qualifying entity list). However, for complex questions that refer to multiple entities and multiple relationships between them, the question-to-query translation is very challenging and becomes the make-or-break point. As an example, consider the question:

"footballers of African descent who played in the FIFA 2018 final and the Euro 2016 final?"

A high-quality, up-to-date KG would have answers like 'Samuel Umtiti', 'Paul Pogba' or 'Blaise Matuidi'. However, this works only with a perfect mapping of question terms onto KG predicates like bornIn, playedFor, inFinal, etc. This strong assumption is rarely satisfied for such complex questions. Moreover, if the KG misses some of the relevant pieces, for example, that a football team played in a final (without winning it), then the entire query will fail.

1.2 State of the Art and its Limitations

QA over KGs. State-of-the-art work on QA over KGs has several critical limitations: (i) The question-to-query translation is brittle and tends to be infeasible for complex questions. (ii) Computing good answers depends on the completeness and freshness of the underlying KG, but no KG covers everything and keeps up with the fast pace of real-world changes. (iii) The generation of good queries is tied to a specific language (usually English) and style (typically full interrogative sentences), and does not generalize to arbitrary languages and language registers. (iv) Likewise, the translation procedure is sensitive to the choice of the underlying KG/DB source, and cannot handle ad hoc choices of several sources that are seen only at QA-execution time.

QA over text. State-of-the-art methods in this realm face major obstacles: (i) To retrieve relevant passages for answer extraction, all significant question terms must be matched in the same document, ideally within short proximity. For example, if a QA system finds players in the 2018 FIFA World Cup Final and the UEFA Euro Cup 2016 Final in different news articles, and information on their birthplaces from Wikipedia, there is no single text passage for proper answering. (ii) The alternative of fragmenting complex questions into simpler sub-questions requires syntactic decomposition patterns that break with ungrammatical constructs, and also a way of stitching sub-results together. Other than for special cases like temporal modifiers in questions, this is beyond the scope of today's systems. (iii) Modern approaches that leverage deep learning critically rely on training data, which is not easily available for complex questions, and raise concerns about interpretability. (iv) Recent works on text-QA considered scenarios where a question is given together with a specific set of documents that contain the answer(s).

This paper overcomes these limitations by providing a novel unsupervised method for combining answer evidence from multiple documents retrieved dynamically, joined together via KG-style relationships automatically gathered from text sources.

1.3 Approach and Contribution

We present a method and system, called QUEST (for "QUEstion answering with Steiner Trees"), that taps into text sources for an…

…answers. This computation is carried out over a weighted graph, with weights based on matching scores and extraction confidences. Finally, answers are filtered and ranked by whether they are compatible with the question's lexical answer type and other criteria. In Fig. 1, all red nodes and edges, and all blue nodes and edges, constitute two GSTs, respectively yielding the answers 'Samuel Umtiti' and 'Blaise Matuidi' (underlined). Unlike most QA systems, where correct entity and relation linking are major bottlenecks for success, QUEST does not need any explicit disambiguation of the question concepts, and instead harnesses the effect that GSTs themselves establish a common context for ambiguous names. Thus, finding a GST serves as a joint disambiguation step for relevant entities, relations, and types, as different senses of polysemous concepts are unlikely to share several inter-connections. Notably, the Steiner tree provides explainable insights into how an answer is derived.

[Figure 1: A quasi KG for our running example question.]

It is in the nature of ad hoc Web questions that dynamically retrieved documents contain a substantial amount of uninformative and misleading content. Instead of attempting to perfectly clean this input upfront, our rationale is to cope with this noise in the answer computation rather than through tedious efforts on entity disambiguation and relation canonicalization in each step.

GST algorithms have been used for keyword search over relational graphs [8, 13, 71], but they work for a simple class of keyword queries with the sole condition of nodes being related. For QA, the problem is much more difficult, as the input is a full question that contains multiple conditions with different entities and predicates.

Contribution. The salient points of this work are:
• QUEST is a novel method that computes direct answers to complex questions by dynamically tapping arbitrary text sources and joining sub-results from multiple documents. In contrast to neural QA methods that rely on substantial amounts of training data, QUEST is unsupervised, avoiding training bottlenecks and the potential bias towards specific benchmarks.
• QUEST combines the versatility of text-based QA with the graph-structure awareness of KG-QA, overcoming the problems of incompleteness, staleness, and brittleness of QA over KGs alone.
• We devise advanced graph algorithms for computing answers
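To make the Group Steiner Tree idea above concrete, here is a minimal, self-contained sketch: given a weighted graph and one terminal group per question condition, it brute-forces the cheapest connected subtree that touches at least one node from every group; non-terminal nodes of that tree are answer candidates. The toy graph, node names, and edge weights are hypothetical illustrations, not the paper's actual quasi KG, and exhaustive subset enumeration is only feasible for tiny graphs (QUEST uses far more scalable GST algorithms).

```python
from itertools import combinations

# Hypothetical toy "quasi KG": undirected weighted edges,
# lower weight = stronger evidence. Illustrative data only.
EDGES = {
    ("Umtiti", "FIFA 2018 final"): 1.0,
    ("Umtiti", "Euro 2016 final"): 1.0,
    ("Umtiti", "African descent"): 1.0,
    ("Pogba", "FIFA 2018 final"): 1.0,
    ("Pogba", "African descent"): 2.0,
    ("Giroud", "FIFA 2018 final"): 1.0,
    ("Giroud", "Euro 2016 final"): 1.5,
}
NODES = sorted({n for edge in EDGES for n in edge})

def mst_weight(subset):
    """Kruskal's MST on the subgraph induced by `subset`.
    Returns its total weight, or None if the subgraph is disconnected."""
    parent = {n: n for n in subset}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    candidate = sorted((w, u, v) for (u, v), w in EDGES.items()
                       if u in parent and v in parent)
    total, used = 0.0, 0
    for w, u, v in candidate:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == len(parent) - 1 else None

def group_steiner_tree(groups):
    """Brute force: cheapest tree covering >= 1 terminal per group."""
    best_w, best_tree = float("inf"), None
    for k in range(1, len(NODES) + 1):
        for subset in combinations(NODES, k):
            if not all(any(t in subset for t in g) for g in groups):
                continue  # some question condition is not covered
            w = mst_weight(subset)
            if w is not None and w < best_w:
                best_w, best_tree = w, set(subset)
    return best_w, best_tree

# One terminal group per question condition (singletons here for brevity;
# in general a group holds all nodes matching that condition).
groups = [{"FIFA 2018 final"}, {"Euro 2016 final"}, {"African descent"}]
weight, tree = group_steiner_tree(groups)
answers = tree - set().union(*groups)  # non-terminals = answer candidates
print(weight, answers)  # → 3.0 {'Umtiti'}
```

On this toy graph, only 'Umtiti' connects all three conditions cheaply, so the GST of weight 3.0 yields it as the answer, which mirrors how a GST jointly enforces all question conditions at once.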
