Semantic Parsing for Single-Relation Question Answering
Wen-tau Yih    Xiaodong He    Christopher Meek
Microsoft Research, Redmond, WA 98052, USA
{scottyih,xiaohe,meek}@microsoft.com

Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 643–648, Baltimore, Maryland, USA, June 23–25, 2014. © 2014 Association for Computational Linguistics

Abstract

We develop a semantic parsing framework based on semantic similarity for open-domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns with relations in the KB. We score relational triples in the KB using these measures and select the top-scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.

1 Introduction

Open-domain question answering (QA) is an important and yet challenging problem that remains largely unsolved. In this paper, we focus on answering single-relation factual questions, which are the most common type of question observed in various community QA sites (Fader et al., 2013), as well as in search query logs. We assume such questions are answerable by issuing a single-relation query, consisting of the relation and an argument entity, against a knowledge base (KB). Example questions of this type include "Who is the CEO of Tesla?" and "Who founded Paypal?"

While single-relation questions are easier to handle than questions with more complex and multiple relations, such as "When was the child of the former Secretary of State in Obama's administration born?", single-relation questions are still far from completely solved. Even in this restricted domain there are a large number of paraphrases of the same question, so the problem of mapping from a question to a particular relation and entity in the KB is non-trivial.

In this paper, we propose a semantic parsing framework tailored to single-relation questions. At the core of our approach is a novel semantic similarity model using convolutional neural networks. Leveraging the question paraphrase data mined from the WikiAnswers corpus by Fader et al. (2013), we train two semantic similarity models: one links a mention from the question to an entity in the KB, and the other maps a relation pattern to a relation. The answer to the question can thus be derived by finding the relation–entity triple r(e1, e2) in the KB and returning the entity not mentioned in the question. By using a general semantic similarity model to match patterns and relations, as well as mentions and entities, our system outperforms the existing rule-learning system, PARALEX (Fader et al., 2013), with higher precision at all recall points when answering the questions in the same test set. The highest achievable F1 score of our system is 0.61, versus 0.54 for PARALEX.

The rest of the paper is structured as follows. We first survey related work in Sec. 2, followed by the problem definition and the high-level description of our approach in Sec. 3. Sec. 4 details our semantic models and Sec. 5 shows the experimental results. Finally, Sec. 6 concludes the paper.

2 Related Work

Semantic parsing of questions, which maps natural language questions to database queries, is a critical component for KB-supported QA. An early example of this research is the semantic parser for answering geography-related questions, learned using inductive logic programming (Zelle and Mooney, 1996). Research in this line originally used small, domain-specific databases, such as GeoQuery (Tang and Mooney, 2001; Liang et al., 2013). Very recently, researchers have started developing semantic parsers for large, general-domain knowledge bases like Freebase and DBpedia (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013). Despite significant progress, the problem remains challenging: most methods have not yet been scaled to large KBs that can support general open-domain QA. In contrast, Fader et al. (2013) proposed the PARALEX system, which targets answering single-relation questions using an automatically created knowledge base, ReVerb (Fader et al., 2011). By applying simple seed templates to the KB and leveraging community-authored paraphrases of questions from WikiAnswers, they successfully demonstrated that a high-quality lexicon of pattern-matching rules can be learned for this restricted form of semantic parsing.

The other line of work related to our approach is continuous representations for semantic similarity, which has a long history and is still an active research topic. In information retrieval, TF-IDF vectors (Salton and McGill, 1983), latent semantic analysis (Deerwester et al., 1990) and topic models (Blei et al., 2003) take the bag-of-words approach, which captures the contextual information of documents well but is often too coarse-grained to be effective for sentences. In a separate line of research, deep learning based techniques have been proposed for semantic understanding (Mesnil et al., 2013; Huang et al., 2013; Shen et al., 2014b; Salakhutdinov and Hinton, 2009; Tur et al., 2012). We adapt the work of Huang et al. (2013) and Shen et al. (2014b) for measuring the semantic distance between a question and relational triples in the KB as the core component of our semantic parsing approach.

3 Problem Definition & Approach

In this paper, we focus on using a knowledge base to answer single-relation questions. A single-relation question is defined as a question composed of an entity mention and a binary relation description, where the answer to this question would be an entity that has the relation with the given entity. An example of a single-relation question is "When were DVD players invented?" The entity is dvd-player and the relation is be-invent-in. The answer can thus be described as the following lambda expression:

    λx. be-invent-in(dvd-player, x)

A knowledge base in this work can simply be viewed as a collection of binary relation instances in the form of r(e1, e2), where r is the relation and e1 and e2 are the first and second entity arguments.

Single-relation questions are perhaps the easiest form of questions that can directly be answered by a knowledge base. If the mapping of the relation and entity in the question can be correctly resolved, then the answer can be derived by a simple table lookup, assuming that the fact exists in the KB. However, due to the large number of paraphrases of the same question, identifying the mapping accurately remains a difficult problem.

Our approach in this work can be viewed as a simple semantic parser tailored to single-relation questions, powered by advanced semantic similarity models to handle the paraphrase issue. Given a question, we first separate it into two disjoint parts: the entity mention and the relation pattern. The entity mention is a subsequence of consecutive words in the question, whereas the relation pattern is the question with the mention substituted by a special symbol. The mapping between the pattern and the relation in the KB, as well as the mapping between the mention and the entity, is determined by the corresponding semantic similarity models. The high-level approach can be viewed as a very simple context-free grammar, shown in Figure 1.

    Q  → RP ∧ M                           (1)
    RP → when were X invented             (2)
    M  → dvd players                      (3)
    when were X invented → be-invent-in   (4)
    dvd players → dvd-player              (5)

Figure 1: A potential semantic parse of the question "When were DVD players invented?"

The probability of the rule in (1) is 1, since we assume the input is a single-relation question. For the exact decomposition of the question (e.g., (2), (3)), we simply enumerate all combinations and assign equal probabilities to them. The performance of this approach depends mainly on whether the relation pattern and entity mention can be resolved correctly (e.g., (4), (5)). To determine …

    Word sequence x_t:        <s> w1 w2 … wT <s>
    Word hashing layer f_t:   15K-dim letter-trigram count vectors (word hashing matrix Wf)
    Convolutional layer h_t:  500-dim (convolution matrix Wc)
    Max pooling layer v:      500-dim (max pooling operation over word positions)
    Semantic layer y:         500-dim (semantic projection matrix Ws)

Figure 2: The CNNSM maps a variable-length word sequence to a low-dimensional vector in a latent semantic space.

In our model, we leverage the word hashing technique proposed in Huang et al. (2013), where we first represent a word by a letter-trigram count vector. For example, given a word (e.g., cat), after adding word boundary symbols (e.g., #cat#), the word is segmented into a sequence of letter-n-grams (e.g., letter-trigrams: #-c-a, c-a-t, a-t-#). Then, the word is represented as a count vector of letter-trigrams. For example, the letter-trigram representation of "cat" is a vector with ones in the dimensions of #-c-a, c-a-t and a-t-#, and zeros elsewhere.
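The word hashing step just described can be sketched in a few lines; the tiny trigram vocabulary below is an illustrative stand-in for the roughly 15K-entry letter-trigram vocabulary of Figure 2, and the function names are ours, not the paper's:

```python
from collections import Counter

def letter_trigrams(word):
    """Segment a word into letter-trigrams after adding boundary symbols,
    e.g. 'cat' -> '#cat#' -> ['#ca', 'cat', 'at#'] (the paper's #-c-a, c-a-t, a-t-#)."""
    w = f"#{word}#"
    return [w[i:i + 3] for i in range(len(w) - 2)]

def hashing_vector(word, vocab):
    """Word hashing layer: represent a word as a letter-trigram count vector.
    `vocab` fixes one dimension per trigram and requires no learning."""
    counts = Counter(letter_trigrams(word))
    return [counts.get(t, 0) for t in vocab]

# Toy vocabulary built from two words, for illustration only.
vocab = sorted(set(letter_trigrams("cat") + letter_trigrams("cats")))

print(letter_trigrams("cat"))        # ['#ca', 'cat', 'at#']
print(hashing_vector("cat", vocab))  # [1, 1, 0, 1, 0]
```

Because the vector is indexed by letter-trigrams rather than whole words, morphological variants such as "cat" and "cats" share most of their active dimensions, which is what makes the representation robust to unseen words.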
A word contextual window size (i.e., the receptive field) of three is used in the illustration. In Figure 2, the word hashing matrix Wf denotes the transformation from a word to its letter-trigram count vector, which requires no learning.
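A minimal sketch of the forward pass in Figure 2, under stated assumptions: the dimensions are shrunk from the figure's 15K and 500 for brevity, the random matrices Wc and Ws stand in for learned parameters, Python's built-in hash replaces a fixed trigram-to-index table for Wf, and tanh is used as a generic nonlinearity:

```python
import numpy as np

VOCAB, CONV, SEM, WIN = 1000, 64, 64, 3  # Figure 2 uses ~15K and 500; shrunk here

rng = np.random.default_rng(0)
Wc = rng.standard_normal((CONV, WIN * VOCAB)) * 0.01  # convolution matrix Wc (placeholder)
Ws = rng.standard_normal((SEM, CONV)) * 0.01          # semantic projection Ws (placeholder)

def word_hash(word):
    """Word hashing layer f_t: letter-trigram count vector, no learning.
    Hashing a trigram to an index stands in for a fixed trigram-to-index table."""
    w = f"#{word}#"
    v = np.zeros(VOCAB)
    for i in range(len(w) - 2):
        v[hash(w[i:i + 3]) % VOCAB] += 1
    return v

def cnnsm(words):
    """Map a variable-length word sequence to a low-dimensional semantic vector."""
    f = [word_hash(w) for w in ["<s>"] + words + ["<s>"]]  # <s> padding as in Fig. 2
    # Convolutional layer h_t: one window of three neighboring words per position.
    h = [np.tanh(Wc @ np.concatenate(f[t - 1:t + 2])) for t in range(1, len(f) - 1)]
    v = np.max(np.stack(h), axis=0)  # max pooling layer v: element-wise max over positions
    return np.tanh(Ws @ v)           # semantic layer y

y = cnnsm("when were X invented".split())
print(y.shape)  # (64,)
```

Max pooling over positions is what lets sequences of any length map to a fixed-size vector, so a relation pattern and a KB relation can then be compared with a simple cosine similarity.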