A Deep Architecture for Semantic Parsing

Edward Grefenstette, Phil Blunsom, Nando de Freitas and Karl Moritz Hermann
Department of Computer Science, University of Oxford, UK
{edwgre, pblunsom, nando, karher}@cs.ox.ac.uk

Abstract

Many successful approaches to semantic parsing build on top of the syntactic analysis of text, and make use of distributional representations or statistical models to match parses to ontology-specific queries. This paper presents a novel deep learning architecture which provides a semantic parsing system through the union of two neural models of language semantics. It allows for the generation of ontology-specific queries from natural language statements and questions without the need for parsing, which makes it especially suitable to grammatically malformed or syntactically atypical text, such as tweets, as well as permitting the development of semantic parsers for resource-poor languages.

1 Introduction

The ubiquity of always-online computers in the form of smartphones, tablets, and notebooks has boosted the demand for effective question answering systems. This is exemplified by the growing popularity of products like Apple's Siri or Google's Google Now services. In turn, this creates the need for increasingly sophisticated methods for semantic parsing. Recent work (Artzi and Zettlemoyer, 2013; Kwiatkowski et al., 2013; Matuszek et al., 2012; Liang et al., 2011, inter alia) has answered this call by progressively moving away from strictly rule-based semantic parsing, towards the use of distributed representations in conjunction with traditional grammatically-motivated re-write rules. This paper seeks to extend this line of thinking to its logical conclusion, by providing the first (to our knowledge) entirely distributed neural semantic generative parsing model. It does so by adapting deep learning methods from related work in sentiment analysis (Socher et al., 2012; Hermann and Blunsom, 2013), document classification (Yih et al., 2011; Lauly et al., 2014; Hermann and Blunsom, 2014a), frame-semantic parsing (Hermann et al., 2014), and machine translation (Mikolov et al., 2010; Kalchbrenner and Blunsom, 2013a), inter alia, combining two empirically successful deep learning models to form a new architecture for semantic parsing.

The structure of this short paper is as follows. We first provide a brief overview of the background literature this model builds on in §2. In §3, we begin by introducing two deep learning models with different aims, namely the joint learning of embeddings in parallel corpora, and the generation of strings of a language conditioned on a latent variable, respectively. We then discuss how both models can be combined and jointly trained to form a deep learning model supporting the generation of knowledgebase queries from natural language questions. Finally, in §4 we conclude by discussing planned experiments and the data requirements to effectively train this model.

2 Background

Semantic parsing describes a task within the larger field of natural language understanding. Within computational linguistics, semantic parsing is typically understood to be the task of mapping natural language sentences to formal representations of their underlying meaning. This semantic representation varies significantly depending on the task context. For instance, semantic parsing has been applied to interpreting movement instructions (Artzi and Zettlemoyer, 2013) or robot control (Matuszek et al., 2012), where the underlying representation would consist of actions.
Within the context of question answering—the focus of this paper—semantic parsing typically aims to map natural language to database queries that would answer a given question. Kwiatkowski et al. (2013) approach this problem using a multi-step model. First, they use a CCG-like parser to convert natural language into an underspecified logical form (ULF). Second, the ULF is converted into a specified form (here a FreeBase query), which can be used to look up the answer to the given natural language question.

3 Model Description

We describe a semantic-parsing model that learns to derive quasi-logical database queries from natural language. The model follows the structure of Kwiatkowski et al. (2013), but relies on a series of neural networks and distributed representations in lieu of the CCG and λ-Calculus based representations used in that paper.

The model described here borrows heavily from two approaches in the deep learning literature. First, a noise-contrastive neural network similar to that of Hermann and Blunsom (2014a, 2014b) is used to learn a joint latent representation for natural language and database queries (§3.1). Second, we employ a structured conditional neural language model in §3.2 to generate queries given such latent representations. Below we provide the necessary background on these two components, before introducing the combined model and describing its learning setup.

3.1 Bilingual Compositional Sentence Models

The bilingual compositional sentence model (BiCVM) of Hermann and Blunsom (2014a) provides a state-of-the-art method for learning semantically informative distributed representations for sentences of language pairs from parallel corpora. Through the joint production of a shared latent representation for semantically aligned sentence pairs, it optimises sentence embeddings so that the respective representations of dissimilar cross-lingual sentence pairs will be weakly aligned, while those of similar sentence pairs will be strongly aligned. Both the ability to jointly learn sentence embeddings, and to produce latent representations, will be relevant to our semantic parsing pipeline.

[Figure 1: Diagrammatic representation of a BiCVM.]

The BiCVM model shown in Fig. 1 assumes vector composition functions $g$ and $h$, which map an ordered set of vectors (here, word embeddings from $\mathcal{D}_A, \mathcal{D}_B$) onto a single vector in $\mathbb{R}^n$. As stated above, for semantically equivalent sentences $a, b$ across languages $\mathcal{L}_A, \mathcal{L}_B$, the model aims to minimise the distance between these composed representations:

$$E_{bi}(a, b) = \lVert g(a) - h(b) \rVert^2$$

In order to avoid strong alignment between dissimilar cross-lingual sentence pairs, this error is combined with a noise-contrastive hinge loss, where $n \in \mathcal{L}_B$ is a randomly sampled sentence, dissimilar to the parallel pair $\{a, b\}$, and $m$ denotes some margin:

$$E_{hl}(a, b, n) = \left[ m + E_{bi}(a, b) - E_{bi}(a, n) \right]_+ ,$$

where $[x]_+ = \max(0, x)$. The resulting objective function is as follows:

$$J(\theta) = \sum_{(a,b) \in \mathcal{C}} \left( \sum_{i=1}^{k} E_{hl}(a, b, n_i) + \frac{\lambda}{2} \lVert \theta \rVert^2 \right),$$

with $\frac{\lambda}{2}\lVert\theta\rVert^2$ as the $L_2$ regularization term and $\theta = \{g, h, \mathcal{D}_A, \mathcal{D}_B\}$ as the set of model variables.
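To make the training signal concrete, the following is a minimal NumPy sketch of the noise-contrastive hinge loss for a single triple $(a, b, n)$. It assumes additive composition for $g$ and $h$ purely for illustration (the model is agnostic to the choice of composition function), and the embedding matrices, margin, and word indices below are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

def compose(embeddings, word_ids):
    """Additive composition: sum the word vectors of a sentence.

    Stands in for the composition functions g and h; the BiCVM itself is
    agnostic to the particular composition function used.
    """
    return embeddings[word_ids].sum(axis=0)

def e_bi(D_A, D_B, a_ids, b_ids):
    """E_bi(a, b): squared distance between the composed representations."""
    diff = compose(D_A, a_ids) - compose(D_B, b_ids)
    return float(diff @ diff)

def e_hinge(D_A, D_B, a_ids, b_ids, n_ids, margin=1.0):
    """Noise-contrastive hinge loss [m + E_bi(a, b) - E_bi(a, n)]_+."""
    return max(0.0, margin + e_bi(D_A, D_B, a_ids, b_ids)
                           - e_bi(D_A, D_B, a_ids, n_ids))

# Toy example with hypothetical vocabulary sizes and embedding dimension.
rng = np.random.default_rng(0)
D_A = rng.normal(size=(1000, 128))  # word embeddings for language L_A
D_B = rng.normal(size=(1000, 128))  # word embeddings for language L_B
a = [3, 17, 42]                     # sentence in L_A (word ids)
b = [5, 8, 99]                      # its aligned counterpart in L_B
n = [401, 77, 250]                  # randomly sampled noise sentence from L_B
print(e_hinge(D_A, D_B, a, b, n))
```

In practice the gradients of this loss with respect to the word embeddings and any composition parameters would be obtained by backpropagation and the objective $J(\theta)$ minimised with a gradient-based optimiser; the sketch above only illustrates the forward computation.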
While Hermann and Blunsom (2014a) applied this model only to parallel corpora of sentences, it is important to note that the model is agnostic concerning the inputs of functions $g$ and $h$. In this paper we will discuss how this model can be applied to non-sentential inputs.

3.2 Conditional Neural Language Models

Neural language models (Bengio et al., 2006) provide a distributed alternative to n-gram language models, permitting the joint learning of a prediction function for the next word in a sequence given the distributed representations of a subset of the last $n-1$ words, alongside the representations themselves. Recent work in dialogue act labelling (Kalchbrenner and Blunsom, 2013b) and in machine translation (Kalchbrenner and Blunsom, 2013a) has demonstrated that a particular kind of neural language model based on recurrent neural networks (Mikolov et al., 2010; Sutskever et al., 2011) could be extended so that the next word in a sequence is jointly generated by the word history and the distributed representation of a conditioning element, such as the dialogue class of a previous sentence, or the vector representation of a source sentence. In this section, we briefly describe a general formulation of conditional neural language models, based on the log-bilinear models of Mnih and Hinton (2007) due to their relative simplicity.

[Figure 2: Diagrammatic representation of a Conditional Neural Language Model.]

A log-bilinear language model is a neural network modelling a probability distribution over the next word in a sequence given the previous $n-1$ words, i.e. $p(w_n \mid w_{1:n-1})$. Let $|V|$ be the size of our vocabulary, and $R$ be a $|V| \times d$ vocabulary matrix where the row $R_{w_i}$ denotes the word embedding in $\mathbb{R}^d$ of a word $w_i$, with $d$ being a hyper-parameter indicating embedding size. Let $C_i$ be the context transform matrix in $\mathbb{R}^{d \times d}$ which modifies the representation of the $i$th word in the word history.

As illustrated in Fig. 2, let us suppose that we wish to jointly condition the next word on its history and some variable $\beta$, for which an embedding $r_\beta$ has been obtained through a previous step, in order to compute $p(w_n \mid w_{1:n-1}, \beta)$. The simplest way to do this is additively, which allows us to treat the contribution of the embedding for $\beta$ as similar to that of an extra word in the history. We define a new energy function:

$$E(w_n; w_{1:n-1}, \beta) = -\left( \sum_{i=1}^{n-1} R_{w_i}^{\top} C_i + r_{\beta}^{\top} C_{\beta} \right) R_{w_n} - b_R^{\top} R_{w_n} - b_{w_n} ,$$

where $C_\beta$ acts as the context transform for $\beta$ and $b_R$, $b_{w_n}$ are bias terms. This energy is normalised over the vocabulary to obtain the probability

$$p(w_n \mid w_{1:n-1}, \beta) = \frac{e^{-E(w_n;\, w_{1:n-1}, \beta)}}{\sum_{w \in V} e^{-E(w;\, w_{1:n-1}, \beta)}} .$$
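As a companion to the energy function above, here is a short NumPy sketch that scores every vocabulary item under this additive conditioning scheme and normalises with a softmax. The function name, dimensions, and parameter values are illustrative assumptions rather than details from the paper; the conditioning vector r_beta could, for instance, be a latent representation such as the one produced by the BiCVM of §3.1.

```python
import numpy as np

def conditional_lbl_probs(R, C, C_beta, r_beta, b_R, b_w, history_ids):
    """p(w_n | w_{1:n-1}, beta) for an additively conditioned log-bilinear model.

    The predicted representation is the sum of the transformed history
    embeddings plus the transformed conditioning embedding; the negated
    energies of all vocabulary items are normalised with a softmax.
    """
    # pred = sum_i R[w_i]^T C_i  +  r_beta^T C_beta   (a vector in R^d)
    pred = sum(R[w] @ C_i for w, C_i in zip(history_ids, C)) + r_beta @ C_beta
    neg_energy = R @ pred + R @ b_R + b_w  # -E(w; history, beta) for every word w
    neg_energy -= neg_energy.max()         # shift for numerical stability
    exp_scores = np.exp(neg_energy)
    return exp_scores / exp_scores.sum()

# Toy example: all sizes and parameter values are hypothetical.
rng = np.random.default_rng(1)
V, d = 5000, 64
R = 0.1 * rng.normal(size=(V, d))                      # vocabulary matrix, one row per word
C = [0.1 * rng.normal(size=(d, d)) for _ in range(3)]  # per-position context transforms
C_beta = 0.1 * rng.normal(size=(d, d))                 # transform for the conditioning variable
r_beta = rng.normal(size=d)                            # e.g. a latent sentence embedding
b_R, b_w = 0.1 * rng.normal(size=d), np.zeros(V)       # bias terms
p = conditional_lbl_probs(R, C, C_beta, r_beta, b_R, b_w, history_ids=[12, 7, 430])
assert np.isclose(p.sum(), 1.0)
```

The additive design keeps the conditioning variable formally interchangeable with an extra history word: it contributes one more transformed embedding to the prediction vector, so the rest of the log-bilinear machinery is unchanged.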
