Convolutional Neural Tensor Network Architecture for Community-Based Question Answering

Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)

Xipeng Qiu and Xuanjing Huang
Shanghai Key Laboratory of Data Science, Fudan University
School of Computer Science, Fudan University
825 Zhangheng Road, Shanghai, China
[email protected], [email protected]

Abstract

Retrieving similar questions is very important in community-based question answering. A major challenge is the lexical gap in sentence matching. In this paper, we propose a convolutional neural tensor network architecture to encode the sentences in a semantic space and model their interactions with a tensor layer. Our model integrates sentence modeling and semantic matching into a single model, which can not only capture the useful information with convolutional and pooling layers, but also learn the matching metric between the question and its answer. Moreover, our model is a general architecture, with no need for other knowledge such as lexical or syntactic analysis. The experimental results show that our method outperforms the other methods on two matching tasks.

1 Introduction

Community-based (or collaborative) question answering (CQA) services such as Yahoo! Answers (http://answers.yahoo.com/) and Baidu Zhidao (http://zhidao.baidu.com/) have become popular online services in recent years. Unlike traditional question answering (QA), information seekers can post their questions on a CQA website, where they are later answered by other users. However, as the CQA archives grow, they accumulate massive numbers of duplicated questions. One of the primary reasons is that information seekers cannot retrieve the answers they need and consequently post new questions. Therefore, it becomes more and more important to find semantically similar questions.

The major challenge for CQA retrieval is the problem of the lexical gap (or lexical chasm) among questions [Jeon et al., 2005; Xue et al., 2008]. Since question-answer (QA) pairs are relatively short, the word mismatch problem is especially severe, as shown in Table 1.

Table 1: An example of question retrieval.

  Query:         Q:  Why is my laptop screen blinking?
  Expected:      Q1: How to troubleshoot a flashing screen on an LCD monitor?
  Not Expected:  Q2: How to make text blink on screen with PowerPoint?

The state-of-the-art studies [Blooma and Kurian, 2011] mainly focus on finding textual clues to improve the similarity function, such as translation-based [Xue et al., 2008; Zhou et al., 2011] or syntax-based approaches [Wang et al., 2009; Carmel et al., 2014]. However, the improvements of these approaches are limited, because it is difficult to design a good similarity function over discrete representations of words.
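As a minimal illustration of this difficulty (ours, not part of the paper's evaluation), the script below computes the bag-of-words cosine similarity for the Table 1 example: the relevant Q1 shares only the word "screen" with the query, so a purely lexical similarity cannot rank it above the irrelevant Q2.

```python
# Minimal illustration (not from the paper) of the lexical gap on the
# Table 1 example: surface word overlap cannot separate the relevant
# candidate Q1 from the irrelevant candidate Q2.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

query = "why is my laptop screen blinking"
q1 = "how to troubleshoot a flashing screen on an lcd monitor"  # expected
q2 = "how to make text blink on screen with powerpoint"         # not expected

print(bow_cosine(query, q1))  # ~0.13: only "screen" overlaps
print(bow_cosine(query, q2))  # ~0.14: the irrelevant Q2 even scores slightly higher
```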
Recently, various methods have been proposed to learn distributed representations of words (word embeddings) in a low-dimensional vector space. Distributed representations help learning algorithms achieve better performance by grouping similar words, and have been extensively applied to many natural language processing (NLP) tasks [Turian et al., 2010; Mikolov et al., 2010; Collobert et al., 2011].

In this paper, we propose a novel unified model for CQA, the convolutional neural tensor network (CNTN), which integrates sentence modeling and semantic matching into a single model. Specifically, we first transform all the word tokens into vectors with a lookup layer, then encode the questions and answers into fixed-length vectors with convolutional and pooling layers, and finally model their interactions with a tensor layer. Thus, our model can group similar questions and answers in a semantic vector space and avoid the problem of the lexical gap. The topmost tensor layer can be regarded as a kind of metric learning method [Xing et al., 2002] that measures the relevance of two texts, and it learns a better metric than traditional similarity metrics such as the inner product or Euclidean distance.

The contributions of this paper can be summarized as follows.

1. Our proposed CNTN architecture integrates sentence modeling and semantic matching into a unified model, which can not only capture the useful semantic and structural information in the convolutional and pooling layers, but also learn the matching metric between texts in the topmost tensor layer.

2. CNTN is a general architecture and does not need complicated NLP pre-processing (such as syntactic analysis) or prior knowledge (such as WordNet).

3. We perform extensive empirical studies on two matching tasks and demonstrate that CNTN is more effective than the other models.

2 Related Works

2.1 Question Retrieval

In CQA, various techniques have been studied to solve the lexical gap problem in question retrieval. The early works can be traced back to finding similar questions in Frequently Asked Questions (FAQ) archives, such as the FAQ finder [Burke et al., 1997], which usually used statistical and semantic similarity measures to rank FAQs.

Jeon et al. [2005] showed that the translation model outperforms the others. In subsequent works, translation-based methods [Xue et al., 2008; Zhou et al., 2011] were proposed to combine the translation model and the language model for question retrieval in more sophisticated ways. Although these methods have yielded state-of-the-art performance for question retrieval, they model the word translation probabilities without taking into account the structure of whole sentences.

Another kind of method [Wang et al., 2009; Carmel et al., 2014] utilizes question structures to improve the similarity in question matching. However, these methods depend on an external parser to obtain the grammar tree of a sentence.

2.2 Neural Sentence Model

With the recent development of deep learning, most methods [Bengio et al., 2003; Mikolov et al., 2010; Collobert et al., 2011] primarily focus on learning distributed word representations (also called word embeddings). Beyond words, there are other methods to model sentences, called neural sentence models. The primary role of a neural sentence model is to represent a variable-length sentence as a fixed-length vector. These models generally consist of a projection layer that maps words, sub-word units or n-grams to high-dimensional embeddings (often trained beforehand with unsupervised methods); the latter are then combined with different neural network architectures, such as the Neural Bag-of-Words (NBOW) model, the recurrent neural network (RNN), the recursive neural network (RecNN), the convolutional neural network (CNN) and so on.

A simple and intuitive method is the Neural Bag-of-Words (NBOW) model. However, a main drawback of NBOW is that word order is lost. Although NBOW is effective for general document classification, it is not suitable for short sentences. A sentence model based on a recurrent neural network is sensitive to word order, but it has a bias towards the latest words that it takes as input [Mikolov et al., 2010]. This gives the RNN excellent performance at language modelling, but it is suboptimal for modeling a whole sentence. Le and Mikolov [2014] proposed the Paragraph Vector (PV) to learn continuous distributed vector representations for pieces of text, which can be regarded as a long-term memory of the sentence, as opposed to the short memory of the RNN.

The recursive neural network (RecNN) adopts a more general structure to encode sentences [Pollack, 1990; Socher et al., 2013b]. At every node in the tree, the contexts at the left and right children of the node are combined by a classical layer. The weights of the layer are shared across all nodes in the tree. The layer computed at the top node gives a representation for the sentence. However, RecNN depends on constituency parse trees provided by an external parser.

The convolutional neural network (CNN) is also used to model sentences [Kalchbrenner et al., 2014; Hu et al., 2014]. As illustrated in Figure 1, it takes as input the embeddings of the words in the sentence aligned sequentially, and summarizes the meaning of the sentence through layers of convolution and pooling, until reaching a fixed-length vectorial representation in the final layer. The CNN has some advantages: (1) it maintains the word order information, which is crucial for short sentences; (2) the nonlinear activations in the network can learn more abstract features.

Figure 1: Sentence modelling with a convolutional neural network (sentence → convolution → pooling → more convolution and pooling → fixed-length vector).

3 Modeling Questions and Answers with a Convolutional Neural Network

In this paper, we use a CNN to encode the sentences. The original CNN can learn sequence embeddings in a supervised way. In our model, the parameters of the CNN are learnt jointly with our final objective function instead of being trained separately. Given an input sentence s, we take the embedding w_i ∈ R^{n_w} of each word w in s to obtain the first layer of the CNN.

Convolution. The embeddings of all words in the sentence s construct the input matrix s ∈ R^{n_w × l_s}, where l_s denotes the length of s. A convolutional layer is obtained by convolving a matrix of weights (filter) m ∈ R^{n_w × m} with the matrix of activations at the layer below, where m is the filter width. For example, the first convolutional layer is obtained by applying a filter to the sentence matrix s in the input layer. The embedding dimension n_w and the filter width m are hyper-parameters of the network.
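The following is a minimal NumPy sketch of this convolution under one plausible reading of the description above, namely the row-wise narrow one-dimensional convolution of Kalchbrenner et al. [2014]; the tanh nonlinearity and the random parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of row-wise narrow convolution
# over a sentence matrix, following Kalchbrenner et al. [2014].
import numpy as np

rng = np.random.default_rng(0)

n_w, l_s, m = 4, 7, 3                    # embedding dim, sentence length, filter width
s = rng.standard_normal((n_w, l_s))      # sentence matrix: one column per word
filt = rng.standard_normal((n_w, m))     # filter matrix m in R^{n_w x m}

# Narrow convolution: each row of the filter slides along the matching
# row of the sentence matrix, yielding l_s - m + 1 positions per row.
out = np.empty((n_w, l_s - m + 1))
for i in range(l_s - m + 1):
    out[:, i] = np.sum(filt * s[:, i:i + m], axis=1)

activations = np.tanh(out)               # assumed nonlinear activation
print(activations.shape)                 # (4, 5)
```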
Figure 2: Architecture of the neural tensor network, which combines a question vector and an answer vector to produce a matching score.

k-Max Pooling. Given a value k and a row vector p ∈ R^p, k-max pooling selects the subsequence p_max^k of the k highest values of p, preserving their original order.
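A minimal sketch of k-max pooling as defined above; the order-preserving selection follows the "subsequence" reading, as in Kalchbrenner et al. [2014].

```python
# Minimal sketch (not the authors' code) of k-max pooling: keep the k
# largest values of a vector while preserving their original order.
import numpy as np

def k_max_pooling(p: np.ndarray, k: int) -> np.ndarray:
    """Return the order-preserving subsequence of the k highest values of p."""
    if k >= p.size:
        return p.copy()
    top_idx = np.argpartition(p, -k)[-k:]  # indices of the k largest values
    top_idx.sort()                         # restore the original ordering
    return p[top_idx]

p = np.array([0.2, 1.5, -0.3, 0.9, 2.1, 0.0])
print(k_max_pooling(p, 3))                 # [1.5 0.9 2.1]
```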

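To illustrate the tensor layer of Figure 2, the following is a minimal sketch that assumes the standard neural tensor network scoring form of Socher et al. [2013], s(q, a) = u^T f(q^T M^{[1:r]} a + V [q; a] + b) with f = tanh; the vectors q and a stand for the fixed-length question and answer encodings produced by the CNN, all parameter values are illustrative, and the exact CNTN parameterization may differ.

```python
# Minimal sketch (assumed standard NTN form, not the paper's exact code)
# of the tensor-layer matching score between two sentence encodings.
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3                          # sentence-vector dimension, tensor slices

# Illustrative random parameters; in CNTN these would be learned jointly
# with the convolutional encoder.
M = rng.standard_normal((r, n, n))   # tensor M^[1:r]
V = rng.standard_normal((r, 2 * n))  # standard-layer weights
b = rng.standard_normal(r)           # bias
u = rng.standard_normal(r)           # scoring vector

def ntn_score(q: np.ndarray, a: np.ndarray) -> float:
    """s(q, a) = u^T tanh(q^T M^[1:r] a + V [q; a] + b)."""
    bilinear = np.einsum('i,rij,j->r', q, M, a)  # one value per tensor slice
    linear = V @ np.concatenate([q, a])
    return float(u @ np.tanh(bilinear + linear + b))

q_vec = rng.standard_normal(n)       # fixed-length question encoding (from the CNN)
a_vec = rng.standard_normal(n)       # fixed-length answer encoding (from the CNN)
print(ntn_score(q_vec, a_vec))
```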