Knowledge-Based Question Answering as Machine Translation

Junwei Bao†∗, Nan Duan‡, Ming Zhou‡, Tiejun Zhao†
†Harbin Institute of Technology   ‡Microsoft Research

∗ This work was finished while the author was visiting Microsoft Research Asia.

Abstract

A typical knowledge-based question answering (KB-QA) system faces two challenges: one is to transform natural language questions into their meaning representations (MRs); the other is to retrieve answers from knowledge bases (KBs) using generated MRs. Unlike previous methods, which treat them in a cascaded manner, we present a translation-based approach to solve these two tasks in one unified framework. We translate questions to answers based on CYK parsing. Answers, as translations of the span covered by each CYK cell, are obtained by a question translation method, which first generates formal triple queries as MRs for the span based on question patterns and relation expressions, and then retrieves answers from a given KB based on the generated triple queries. A linear model is defined over derivations, and minimum error rate training is used to tune feature weights based on a set of question-answer pairs. Compared to a KB-QA system using a state-of-the-art semantic parser, our method achieves better results.

1 Introduction

Knowledge-based question answering (KB-QA) computes answers to natural language (NL) questions based on existing knowledge bases (KBs). Most previous systems tackle this task in a cascaded manner: first, the input question is transformed into its meaning representation (MR) by an independent semantic parser (Zettlemoyer and Collins, 2005; Mooney, 2007; Artzi and Zettlemoyer, 2011; Liang et al., 2011; Cai and Yates, 2013; Poon, 2013; Artzi et al., 2013; Kwiatkowski et al., 2013; Berant et al., 2013); then, the answers are retrieved from existing KBs using the generated MRs as queries.

Unlike existing KB-QA systems, which treat semantic parsing and answer retrieval as two cascaded tasks, this paper presents a unified framework that integrates semantic parsing into the question answering procedure directly. Borrowing ideas from machine translation (MT), we treat the QA task as a translation procedure. Like MT, CYK parsing is used to parse each input question, and the answers of the span covered by each CYK cell are considered the translations of that cell; unlike MT, which uses offline-generated translation tables to translate source phrases into target translations, a semantic parsing-based question translation method is used to translate each span into its answers on-the-fly, based on question patterns and relation expressions. The final answers can be obtained from the root cell. Derivations generated during such a translation procedure are modeled by a linear model, and minimum error rate training (MERT) (Och, 2003) is used to tune feature weights based on a set of question-answer pairs.

Figure 1 shows an example: the question director of movie starred by Tom Hanks is translated to one of its answers, Robert Zemeckis, by three main steps: (i) translate director of to director of; (ii) translate movie starred by Tom Hanks to one of its answers, Forrest Gump; (iii) translate director of Forrest Gump to a final answer, Robert Zemeckis. Note that the updated question covered by Cell[0, 6] is obtained by combining the answers to the question spans covered by Cell[0, 1] and Cell[2, 6].
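To make the combination step concrete, the following is a minimal Python sketch of the Figure 1 derivation. The Triple structure and the cell variables are hypothetical illustrations, not the authors' data structures; the sketch only shows how the answers of Cell[0, 1] and Cell[2, 6] form the updated question that is translated in step (iii).

```python
# A minimal sketch of the Figure 1 derivation, using a hypothetical Triple
# structure (not the authors' implementation).

from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    predicate: str   # "Null" marks a span that translates to itself
    obj: str

# Step (i): Cell[0, 1] translates "director of" to itself.
cell_0_1 = ("director of", Triple("director of", "Null", "director of"))

# Step (ii): Cell[2, 6] translates "movie starred by Tom Hanks" to one answer.
cell_2_6 = ("movie starred by Tom Hanks",
            Triple("Tom Hanks", "Film.Actor.Film", "Forrest Gump"))

# Cell[0, 6]: the updated question is the concatenation of the two answers.
updated_question = cell_0_1[1].obj + " " + cell_2_6[1].obj
assert updated_question == "director of Forrest Gump"

# Step (iii): translating the updated question yields the final answer.
step_iii = Triple("Forrest Gump", "Film.Film.Director", "Robert Zemeckis")
print(step_iii.obj)  # Robert Zemeckis
```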
The contributions of this work are two-fold: (1) We propose a translation-based KB-QA method that integrates semantic parsing and QA in one unified framework. The benefit of our method is that we do not need to explicitly generate complete semantic structures for input questions. Besides this, answers generated during the translation procedure help significantly with search space pruning. (2) We propose a robust method to transform single-relation questions into formal triple queries as their MRs, which trades off between transformation accuracy and recall by using question patterns and relation expressions respectively.

[Figure 1: Translation-based KB-QA example. The question director of movie starred by Tom Hanks is translated in three steps: (i) director of ⟹ director of (Cell[0, 1]); (ii) movie starred by Tom Hanks ⟹ Forrest Gump (Cell[2, 6]); (iii) director of Forrest Gump ⟹ Robert Zemeckis (Cell[0, 6]).]

2 Translation-Based KB-QA

2.1 Overview

Formally, given a knowledge base KB and an NL question Q, our KB-QA method generates a set of formal-triple/answer pairs {⟨D, A⟩} as derivations, which are scored and ranked by the distribution P(⟨D, A⟩ | KB, Q) defined as follows:

\[
P(\langle D, A \rangle \mid KB, Q) = \frac{\exp\{\sum_{i=1}^{M} \lambda_i \cdot h_i(\langle D, A \rangle, KB, Q)\}}{\sum_{\langle D', A' \rangle \in H(Q)} \exp\{\sum_{i=1}^{M} \lambda_i \cdot h_i(\langle D', A' \rangle, KB, Q)\}}
\]

• KB denotes a knowledge base¹ that stores a set of assertions. Each assertion t ∈ KB is in the form of {e_sbj^ID, p, e_obj^ID}, where p denotes a predicate, and e_sbj^ID and e_obj^ID denote the subject and object entities of t, with unique IDs².

• H(Q) denotes the search space {⟨D, A⟩}. D is composed of a set of ordered formal triples {t_1, ..., t_n}. Each triple t = {e_sbj, p, e_obj}_i^j ∈ D denotes an assertion in KB, where i and j denote the beginning and end indexes of the question span from which t is transformed. The order of triples in D denotes the order of translation steps from Q to A. E.g., ⟨director of, Null, director of⟩_0^1, ⟨Tom Hanks, Film.Actor.Film, Forrest Gump⟩_2^6 and ⟨Forrest Gump, Film.Film.Director, Robert Zemeckis⟩_0^6 are three ordered formal triples corresponding to the three translation steps in Figure 1. We define the task of transforming question spans into formal triples as question translation. A denotes one final answer of Q.

• h_i(·) denotes the i-th feature function.

• λ_i denotes the feature weight of h_i(·).

¹ We use a large-scale knowledge base in this paper, which contains 2.3B entities, 5.5K predicates, and 18B assertions. A 16-machine cluster is used to host and serve the whole data.
² Each KB entity has a unique ID. For the sake of convenience, we omit the ID information in the rest of the paper.

According to the above description, our KB-QA method can be decomposed into four tasks: (1) search space generation for H(Q); (2) question translation for transforming question spans into their corresponding formal triples; (3) feature design for h_i(·); and (4) feature weight tuning for {λ_i}. We present the details of these four tasks in the following subsections one by one.

2.2 Search Space Generation

We first present our translation-based KB-QA method in Algorithm 1, which is used to generate H(Q) for each input NL question Q.

Algorithm 1: Translation-based KB-QA
 1  for l = 1 to |Q| do
 2      for all i, j s.t. j − i = l do
 3          H(Q_i^j) = ∅;
 4          T = QTrans(Q_i^j, KB);
 5          foreach formal triple t ∈ T do
 6              create a new derivation d;
 7              d.A = t.e_obj;
 8              d.D = {t};
 9              update the model score of d;
10              insert d to H(Q_i^j);
11          end
12      end
13  end
14  for l = 1 to |Q| do
15      for all i, j s.t. j − i = l do
16          for all m s.t. i ≤ m < j do
17              for d_l ∈ H(Q_i^m) and d_r ∈ H(Q_{m+1}^j) do
18                  Q_update = d_l.A + d_r.A;
19                  T = QTrans(Q_update, KB);
20                  foreach formal triple t ∈ T do
21                      create a new derivation d;
22                      d.A = t.e_obj;
23                      d.D = d_l.D ∪ d_r.D ∪ {t};
24                      update the model score of d;
25                      insert d to H(Q_i^j);
26                  end
27              end
28          end
29      end
30  end
31  return H(Q).
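To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of the CYK-style search. The functions qtrans and model_score are hypothetical stand-ins for the question translation method and the linear model score; this is an illustration under those assumptions, not the authors' implementation.

```python
# A minimal sketch of the CYK-style search space generation, assuming a
# hypothetical qtrans(question, kb) that returns formal triples as
# (subject, predicate, object) tuples and a model_score(derivation) stub.

from itertools import product

def generate_search_space(tokens, kb, qtrans, model_score):
    n = len(tokens)
    H = {}  # H[(i, j)] holds derivations for the span tokens[i..j]

    # First pass: translate every span directly with question translation.
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            H[(i, j)] = []
            span_text = " ".join(tokens[i:j + 1])
            for sbj, pred, obj in qtrans(span_text, kb):
                d = {"A": obj, "D": [(sbj, pred, obj)]}
                d["score"] = model_score(d)
                H[(i, j)].append(d)

    # Second pass: combine answers of adjacent sub-spans into an updated
    # question and translate it again.
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            for m in range(i, j):
                for dl, dr in product(H[(i, m)], H[(m + 1, j)]):
                    updated = dl["A"] + " " + dr["A"]
                    for sbj, pred, obj in qtrans(updated, kb):
                        d = {"A": obj,
                             "D": dl["D"] + dr["D"] + [(sbj, pred, obj)]}
                        d["score"] = model_score(d)
                        H[(i, j)].append(d)

    return H  # the root cell H[(0, n - 1)] holds the final derivations
```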
The first half (from Line 1 to Line 13) generates a formal triple set T for each unary span Q_i^j ∈ Q, using the question translation method QTrans(Q_i^j, KB) (Line 4), which takes Q_i^j as the input. Each triple t ∈ T returned is in the form of {e_sbj, p, e_obj}, where e_sbj's mention occurs in Q_i^j, p is a predicate that denotes the meaning expressed by the context of e_sbj in Q_i^j, and e_obj is an answer of Q_i^j based on e_sbj, p and KB. We describe the implementation details of QTrans(·) in Section 2.3. The second half (from Line 14 to Line 31) first updates the content of each bigger span Q_i^j by concatenating the answers to any two consecutive smaller spans covered by Q_i^j (Line 18).

Algorithm 2: QP-based Question Translation
 1  T = ∅;
 2  foreach entity mention e_Q ∈ Q do
 3      Q_pattern = replace e_Q in Q with [Slot];
 4      foreach question pattern QP do
 5          if Q_pattern == QP_pattern then
 6              E = Disambiguate(e_Q, QP_predicate);
 7              foreach e ∈ E do
 8                  create a new triple query q;
 9                  q = {e, QP_predicate, ?};
10                  {A_i} = AnswerRetrieve(q, KB);
11                  foreach A ∈ {A_i} do
12                      create a new formal triple t;
13                      t = {q.e_sbj, q.p, A};
14                      t.score = 1.0;
15                      insert t to T;
16                  end
17              end
18          end
19      end
20  end
21  return T.
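The following minimal Python sketch mirrors the structure of the question-pattern (QP) based translation in Algorithm 2. The helpers detect_entity_mentions, disambiguate and answer_retrieve, and the (pattern, predicate) store, are hypothetical assumptions used only to illustrate the pattern-matching and retrieval loop; this is not the authors' implementation.

```python
# A minimal sketch of QP-based question translation, assuming hypothetical
# helpers: detect_entity_mentions, disambiguate and answer_retrieve.
# Patterns are plain strings with a [Slot] placeholder.

def qp_translate(question, patterns, kb,
                 detect_entity_mentions, disambiguate, answer_retrieve):
    """Return formal triples (subject, predicate, answer, score) for question."""
    triples = []
    for mention in detect_entity_mentions(question):
        # Replace the entity mention with [Slot] to form the question pattern.
        q_pattern = question.replace(mention, "[Slot]")
        for qp_pattern, qp_predicate in patterns:
            if q_pattern == qp_pattern:
                # Map the mention to KB entities compatible with the predicate.
                for entity in disambiguate(mention, qp_predicate, kb):
                    query = (entity, qp_predicate, None)  # triple query {e, p, ?}
                    for answer in answer_retrieve(query, kb):
                        triples.append((entity, qp_predicate, answer, 1.0))
    return triples
```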