
Question Answering on Freebase via Relation Extraction and Textual Evidence

Kun Xu (1), Siva Reddy (2), Yansong Feng (1,*), Songfang Huang (3) and Dongyan Zhao (1)
1 Institute of Computer Science & Technology, Peking University, Beijing, China
2 School of Informatics, University of Edinburgh, UK
3 IBM China Research Lab, Beijing, China
{xukun, fengyansong, zhaody}@pku.edu.cn
[email protected]
[email protected]

Abstract

Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F1 of 53.3%, a substantial improvement over the state-of-the-art.

1 Introduction

Since the advent of large structured knowledge bases (KBs) like Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007) and DBpedia (Auer et al., 2007), answering natural language questions using those structured KBs, also known as KB-based question answering (or KB-QA), is attracting increasing research efforts from both the natural language processing and information retrieval communities.

The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing (Berant et al., 2013; Kwiatkowski et al., 2013), which typically learns a grammar that can parse natural language into a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contain compositional structures, a practically impossible requirement for large KBs such as Freebase. Furthermore, mismatches between grammar-predicted structures and the KB structure are also a common problem (Kwiatkowski et al., 2013; Berant and Liang, 2014; Reddy et al., 2014).

On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from the KB using relation extraction (Yao and Van Durme, 2014; Yih et al., 2014; Yao, 2015; Bast and Haussmann, 2015) or distributed representations (Bordes et al., 2014; Dong et al., 2015). Designing large training datasets for these methods is relatively easy (Yao and Van Durme, 2014; Bordes et al., 2015; Serban et al., 2016). These methods are often good at producing an answer irrespective of its correctness. However, handling compositional questions that involve multiple entities and relations still remains a challenge. Consider the question what mountain is the highest in north america. Relation extraction methods typically answer with all the mountains in North America because they lack a sophisticated representation of the mathematical function highest. To select the correct answer, one has to retrieve the heights of all the mountains, sort them in descending order, and pick the first entry. We propose a method based on textual evidence which can answer such questions without explicitly solving the mathematical function.
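To make the missing selection step concrete, it amounts to nothing more than a sort over retrieved facts. The sketch below uses made-up (mountain, elevation) tuples rather than actual Freebase output:

```python
# Hypothetical (mountain, elevation in metres) pairs that a relation
# extractor might return for "what mountain is the highest in north america".
candidates = [
    ("Denali", 6190),
    ("Mount Logan", 5959),
    ("Pico de Orizaba", 5636),
]

# Answering the superlative requires sorting by elevation in descending
# order and keeping only the first entry.
candidates.sort(key=lambda pair: pair[1], reverse=True)
print(candidates[0][0])  # -> Denali
```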
Knowledge bases like Freebase capture real-world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says: Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence as external evidence to filter out wrong answers and pick the correct one.

Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent. Consider the question who was queen isabella's mother. Answering this question involves predicting two constraints hidden in the word mother. One constraint is that the answer should be the parent of Isabella, and the other is that the answer's gender is female. Such words with multiple latent constraints have been a pain-in-the-neck for both semantic parsing and relation extraction, and require larger training data (this phenomenon is coined sub-lexical compositionality by Wang et al. (2015)). Most systems are good at triggering the parent constraint, but fail on the other, i.e., that the answer entity should be female. In contrast, the textual evidence from Wikipedia, ... her mother was Isabella of Barcelos ..., can act as a further constraint to answer the question correctly.

We present a novel method for question answering which infers over both structured and unstructured resources. Our method consists of two main steps, as outlined in §2. In the first step we extract answers for the given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (§3). In the next step we validate these answers using an unstructured resource (here Wikipedia) to prune out the wrong answers and select the correct ones (§4). Our evaluation results on the benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models. Details of our experimental setup and results are presented in §5. Our code, data and results can be downloaded from https://github.com/syxu828/QuestionAnsweringOverFB.

[Figure 1: An illustration of our method to find answers for the given question who did shaq first play for. In the KB-QA step, entity linking and relation extraction produce candidate Freebase entities (e.g., m.012xdf) and relations (e.g., sports.pro_athlete.teams..sports.sports_team_roster.team); joint inference selects the best entity-relation pair, yielding candidate answers (Los Angeles Lakers, Boston Celtics, Orlando Magic, Miami Heat); the Answer Refinement step scores each candidate against sentences from the topic entity's Wikipedia page and keeps Orlando Magic.]

2 Our Method

Figure 1 gives an overview of our method for the question who did shaq first play for. We have two main steps: (1) inference on Freebase (the KB-QA box); and (2) further inference on Wikipedia (the Answer Refinement box). Let us take a closer look at step 1. Here we perform entity linking to identify a topic entity in the question and its possible Freebase entities. We employ a relation extractor to predict the potential Freebase relations that could exist between the entities in the question and the answer entities. We then perform a joint inference step over the entity linking and relation extraction results to find the best entity-relation configuration, which produces a list of candidate answer entities. In step 2, we refine these candidate answers by applying an answer refinement model which takes the Wikipedia page of the topic entity into consideration to filter out the wrong answers and pick the correct ones.

While the overview in Figure 1 works for questions containing a single Freebase relation, it also works for questions involving multiple Freebase relations. Consider the question who plays anakin skywalker in star wars 1. The actors who answer this question should satisfy the following constraints: (1) the actor played anakin skywalker; and (2) the actor played in star wars 1. Inspired by Bao et al. (2014), we design a dependency tree-based method to handle such multi-relational questions. We first decompose the original question into a set of sub-questions using syntactic patterns, which are listed in the Appendix. The final answer set of the original question is obtained by intersecting the answer sets of all its sub-questions. For the example question, the sub-questions are who plays anakin skywalker and who plays in star wars 1. These sub-questions are answered separately over Freebase and Wikipedia, and the intersection of their answer sets is treated as the final answer.
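A minimal sketch of this decompose-and-intersect strategy follows. Here `decompose` and `answer` are stand-ins for the syntactic-pattern decomposition and the per-sub-question Freebase/Wikipedia answering steps; the names and the toy data are assumptions for illustration, not the actual implementation.

```python
from typing import Callable, List, Set

def answer_multi_relation_question(
    question: str,
    decompose: Callable[[str], List[str]],
    answer: Callable[[str], Set[str]],
) -> Set[str]:
    """Decompose a multi-relational question into sub-questions and
    intersect their answer sets, as described above."""
    sub_questions = decompose(question)
    if not sub_questions:          # single-relation question: answer it directly
        return answer(question)
    answer_sets = [answer(q) for q in sub_questions]
    return set.intersection(*answer_sets)

# Toy illustration with hard-coded behaviour (not real system output):
toy_decompose = lambda q: ["who plays anakin skywalker", "who plays in star wars 1"]
toy_answers = {
    "who plays anakin skywalker": {"Hayden Christensen", "Jake Lloyd"},
    "who plays in star wars 1": {"Jake Lloyd", "Liam Neeson", "Natalie Portman"},
}
print(answer_multi_relation_question(
    "who plays anakin skywalker in star wars 1",
    toy_decompose,
    lambda q: toy_answers[q]))     # -> {'Jake Lloyd'}
```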
3 Inference on Freebase

Given a sub-question, we assume that the question word which represents the answer has a distinct KB relation r with an entity e found in the question, and predict a single KB triple (e, r, ?) for each sub-question (here ? stands for the answer entities). The QA problem is thus formulated as an information extraction problem that involves two sub-tasks, i.e., entity linking and relation extraction. We first introduce these two components, and then present a joint inference procedure which further boosts the overall performance.

[Figure 2: Overview of the multi-channel convolutional neural network for relation extraction. We is the word embedding matrix, W1 is the convolution matrix, W2 is the activation matrix and W3 is the classification matrix. The network proceeds from feature extraction over the dependency-parsed question, through word representation, convolution and max pooling, to a feature vector and a softmax output over KB relations.]

3.1 Entity Linking

For each question, we use hand-built sequences of part-of-speech categories to identify all possible entity mentions.
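As an illustration of what such hand-built part-of-speech patterns could look like, the sketch below matches a few assumed tag sequences against a POS-tagged question. The pattern inventory and the tagger output shown are illustrative, not the ones used in the paper.

```python
# Illustrative POS-tag sequences that could signal an entity mention
# (assumed patterns, not the paper's actual inventory).
ENTITY_PATTERNS = [
    ("NNP", "NNP"),   # e.g. consecutive proper nouns
    ("NNP",),
    ("NN",),          # e.g. "shaq" tagged as a common noun
]

def candidate_mentions(tagged_question):
    """Return all token spans whose POS sequence matches a pattern.

    `tagged_question` is a list of (token, pos_tag) pairs, e.g. the
    output of any off-the-shelf POS tagger.
    """
    mentions = []
    tags = [tag for _, tag in tagged_question]
    for pattern in ENTITY_PATTERNS:
        n = len(pattern)
        for i in range(len(tags) - n + 1):
            if tuple(tags[i:i + n]) == pattern:
                mentions.append(" ".join(tok for tok, _ in tagged_question[i:i + n]))
    return mentions

# Toy tagging of the running example question.
tagged = [("who", "WP"), ("did", "VBD"), ("shaq", "NN"),
          ("first", "RB"), ("play", "VB"), ("for", "IN")]
print(candidate_mentions(tagged))   # -> ['shaq']
```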