Transfer Learning for Neural Semantic Parsing


Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer
Amazon.com
{fanxing, mathias, mddreyer}@amazon.com, [email protected]

Abstract

The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing, with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% on our in-house data set, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.

1 Introduction

Conversational agents, such as Alexa, Siri and Cortana, solve complex tasks by interacting and mediating between the end-user and multiple backend software applications and services. Natural language is a simple interface used for communication between these agents. However, to make natural language machine-readable we need to map it to a representation that describes the semantics of the task expressed in the language. Semantic parsing is the process of mapping a natural-language sentence into a formal machine-readable representation of its meaning. This poses a challenge in a multi-tenant system that has to interact with multiple backend knowledge sources, each with its own semantic formalism and custom schema for accessing information, where each formalism has a varying amount of annotated training data.

Recent work has shown sequence-to-sequence models to be an effective architecture (Jia and Liang, 2016; Dong and Lapata, 2016) for semantic parsing. However, because of the limited amount of annotated data, the ability of neural networks to capture complex data representations using deep structure (Johnson et al., 2016) has not been fully explored. Acquiring data is expensive and sometimes infeasible for task-oriented systems, the main reasons being multiple formalisms (e.g., SPARQL for WikiData (Vrandečić and Krötzsch, 2014), MQL for Freebase (Flanagan, 2008)) and multiple tasks (question answering, navigation interactions, transactional interactions). We propose to exploit these multiple representations in a multi-task framework so we can minimize the need for large labeled corpora across these formalisms. By suitably modifying the learning process, we capture the common structures that are implicit across these formalisms and the tasks they are targeted for.

In this work, we focus on sequence-to-sequence based transfer learning for semantic parsing. In order to tackle the challenge of multiple formalisms, we apply three multi-task frameworks with different levels of parameter sharing. Our hypothesis is that the encoder-decoder paradigm learns a canonicalized representation across all tasks. Over a strong single-task sequence-to-sequence baseline, our proposed approach shows accuracy improvements across the target formalism. In addition, we show that even when the auxiliary task is syntactic parsing we can achieve good gains in semantic parsing that are comparable to the published state-of-the-art.
2 Related Work

There is a large body of work on semantic parsing. These approaches fall into three broad categories: completely supervised learning based on fully annotated logical forms associated with each sentence (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2012), using question-answer pairs and conversation logs as supervision (Artzi and Zettlemoyer, 2011; Liang et al., 2011; Berant et al., 2013), and distant supervision (Cai and Yates, 2013; Reddy et al., 2014). All these approaches make assumptions about the task, features and the target semantic formalism.

On the other hand, neural network based approaches, in particular the use of recurrent neural networks (RNNs) and encoder-decoder paradigms (Sutskever et al., 2014), have made fast progress on achieving state-of-the-art performance on various NLP tasks (Vinyals et al., 2015; Dyer et al., 2015; Bahdanau et al., 2014). A key advantage of RNNs in the encoder-decoder paradigm is that very few assumptions are made about the domain, language and the semantic formalism. This implies they can generalize faster with little feature engineering.

Full semantic graphs can be expensive to annotate, and efforts to date have been fragmented across different formalisms, leading to a limited amount of annotated data in any single formalism. Using neural networks to train semantic parsers on limited data is quite challenging. Multi-task learning aims at improving the generalization performance of a task using related tasks (Caruana, 1998; Ando and Zhang, 2005; Smith and Smith, 2004). This opens the opportunity to utilize large amounts of data for a related task to improve the performance across all tasks. There has been recent work in NLP demonstrating improved performance for machine translation (Dong et al., 2015) and syntactic parsing (Luong et al., 2015).

In this work, we attempt to merge various strands of research using sequence-to-sequence modeling for semantic parsing, focusing on improving semantic formalisms with a small amount of training data using a multi-task model architecture. The closest work is Herzig and Berant (2017). Similar to this work, the authors use a neural semantic parsing model in a multi-task framework to jointly learn over multiple knowledge bases. Our work differs from theirs in that we focus our attention on transfer learning, where we have access to a large labeled resource in one task and want another semantic formalism with access to limited training data to benefit from a multi-task learning setup. Furthermore, we also demonstrate that we can improve semantic parsing tasks by using large data sources from an auxiliary task such as syntactic parsing, thereby opening up the opportunity for leveraging much larger datasets. Finally, we carefully compare multiple multi-task architectures in our setup and show that increased sharing of both the encoder and decoder along with shared attention results in the best performance.

3 Problem Formulation

3.1 Sequence-to-Sequence Formulation

Our semantic parser extends the basic encoder-decoder approach in Jia and Liang (2016). Given a sequence of inputs x = x_1, ..., x_m, the sequence-to-sequence model generates an output sequence y = y_1, ..., y_n. We encode the input tokens x = x_1, ..., x_m into a sequence of embeddings h = h_1, ..., h_m:

    h_i = f_{\text{encoder}}(E_x(x_i), h_{i-1})    (1)

First, an input embedding layer E_x maps each word x_i to a fixed-dimensional vector, which is then fed as input to the network f to obtain the hidden state representation h_i. The embedding layer E_x could contain a single word embedding lookup table or a combination of word and gazetteer embeddings, where we concatenate the output from each table. For the encoder and decoder, we use stacked Gated Recurrent Units (GRU) (Cho et al., 2014); in order to speed up training, we use a right-to-left GRU instead of a bidirectional GRU. The hidden states are then converted to one fixed-length context vector per output index, c_j = φ_j(h_1, ..., h_m), where φ_j summarizes all input hidden states to form the context for a given output index j. (In a vanilla decoder, φ_j(h_1, ..., h_m) = h_m, i.e., the hidden representation from the last state of the encoder is used as context for every output time step j.)

The decoder then uses these fixed-length vectors c_j to create the target sequence through the following model. At each time step j in the output sequence, a state s_j is calculated as

    s_j = f_{\text{decoder}}(E_y(y_{j-1}), s_{j-1}, c_j)    (2)

Here, E_y maps any output symbol to a fixed-dimensional vector.
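To make the encoder and decoder state updates of Equations 1 and 2 concrete, the sketch below shows one possible wiring. It is a minimal illustration only, assuming PyTorch; the class name, vocabulary sizes, and dimensions are hypothetical, this is not the authors' implementation, and the real model additionally supports gazetteer embeddings plus the attention and copy components described next.

```python
import torch
import torch.nn as nn

# Minimal sketch of Eqs. 1-2 (hypothetical sizes and names; not the authors' code).
class Seq2SeqSketch(nn.Module):
    def __init__(self, src_vocab=5000, tgt_vocab=1000, emb_dim=128, hid_dim=256, layers=2):
        super().__init__()
        self.E_x = nn.Embedding(src_vocab, emb_dim)   # input embedding layer E_x
        self.E_y = nn.Embedding(tgt_vocab, emb_dim)   # output embedding layer E_y
        # Stacked GRU encoder; the paper runs it right-to-left rather than bidirectionally.
        self.encoder = nn.GRU(emb_dim, hid_dim, num_layers=layers, batch_first=True)
        # Decoder cell consumes [E_y(y_{j-1}); c_j] and the previous state s_{j-1} (Eq. 2).
        self.decoder_cell = nn.GRUCell(emb_dim + hid_dim, hid_dim)

    def encode(self, x_ids):
        # Eq. 1: h_i = f_encoder(E_x(x_i), h_{i-1}); reverse the input for a right-to-left pass.
        emb = self.E_x(torch.flip(x_ids, dims=[1]))
        h, _ = self.encoder(emb)                      # h: (batch, m, hid_dim)
        return torch.flip(h, dims=[1])                # restore original token order

    def decoder_step(self, y_prev_id, s_prev, c_j):
        # Eq. 2: s_j = f_decoder(E_y(y_{j-1}), s_{j-1}, c_j)
        inp = torch.cat([self.E_y(y_prev_id), c_j], dim=-1)
        return self.decoder_cell(inp, s_prev)
```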
Finally, we compute the probability of the output symbol y_j given the history y_{<j} using Equation 3:

    p(y_j \mid y_{<j}, x) \propto \exp(O[s_j; c_j])    (3)

where the matrix O projects the concatenation of s_j and c_j, denoted [s_j; c_j], to the final output space. The matrix O is part of the trainable model parameters. We use an attention mechanism (Bahdanau et al., 2014) to summarize the context vector c_j:

    c_j = \phi_j(h_1, \ldots, h_m) = \sum_{i=1}^{m} \alpha_{ji} h_i    (4)

where j \in [1, \ldots, n] is the step index for the decoder output and \alpha_{ji} is the attention weight, calculated using a softmax:

    \alpha_{ji} = \frac{\exp(e_{ji})}{\sum_{i'=1}^{m} \exp(e_{ji'})}    (5)

where e_{ji} is the relevance score of each context vector c_j, modeled as

    e_{ji} = g(h_i, s_j)    (6)

In this paper, the function g is defined as

    g(h_i, s_j) = \upsilon^{\top} \tanh(W_1 h_i + W_2 s_j)    (7)

where \upsilon, W_1 and W_2 are trainable parameters.

In order to deal with the large vocabularies in the output layer introduced by the long tail of entities in typical semantic parsing tasks, we use a copy mechanism (Jia and Liang, 2016). At each time step j, the decoder chooses either to copy a token from the encoder's input stream or to write a token from the decoder's fixed output vocabulary. We define two actions:

1. WRITE[y] for some y \in V_decoder, where V_decoder is the output vocabulary of the decoder.

2. COPY[i] for some i \in 1, \ldots, m, which copies one symbol from the m input tokens.

We formulate a single softmax to select the action to take, rewriting Equation 3 as follows:

    p(a_j = \text{WRITE}[y] \mid y_{<j}, x) \propto \exp(O[s_j; c_j])    (8)
    p(a_j = \text{COPY}[i] \mid y_{<j}, x) \propto \exp(e_{ji})    (9)

The decoder is now a softmax over the actions a_j; Figure 1 shows how the decoder's output y_3 at the third time step is generated.

[Figure 1: An example of how the decoder output y_3 is generated.]
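For readers who prefer code, here is one way the additive attention of Equations 4-7 could be realized, continuing the hypothetical PyTorch sketch above. The module name, parameter shapes, and the choice to return the raw scores e_j (which the COPY action of Equation 9 reuses) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of the additive attention in Eqs. 4-7 (hypothetical module; illustrative only).
class AdditiveAttention(nn.Module):
    def __init__(self, hid_dim=256):
        super().__init__()
        self.W1 = nn.Linear(hid_dim, hid_dim, bias=False)  # applied to encoder states h_i
        self.W2 = nn.Linear(hid_dim, hid_dim, bias=False)  # applied to decoder state s_j
        self.v = nn.Linear(hid_dim, 1, bias=False)         # plays the role of the vector υ in Eq. 7

    def forward(self, h, s_j):
        # h: (batch, m, hid_dim) encoder states; s_j: (batch, hid_dim) decoder state.
        # Eq. 7: e_ji = v^T tanh(W1 h_i + W2 s_j), computed for all i at once.
        e_j = self.v(torch.tanh(self.W1(h) + self.W2(s_j).unsqueeze(1))).squeeze(-1)  # (batch, m)
        # Eq. 5: attention weights via softmax over the m input positions.
        alpha_j = torch.softmax(e_j, dim=-1)
        # Eq. 4: context vector c_j as the attention-weighted sum of encoder states.
        c_j = torch.bmm(alpha_j.unsqueeze(1), h).squeeze(1)  # (batch, hid_dim)
        return c_j, e_j
```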

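Likewise, a sketch of the single action softmax of Equations 8 and 9, under the same hypothetical PyTorch setup: WRITE scores come from projecting [s_j; c_j] with O, COPY scores reuse the attention relevance scores e_ji, and one softmax is taken over the concatenation of both action sets.

```python
import torch
import torch.nn as nn

# Sketch of the WRITE/COPY action softmax in Eqs. 8-9 (hypothetical; illustrative only).
class CopyOrWriteOutput(nn.Module):
    def __init__(self, hid_dim=256, tgt_vocab=1000):
        super().__init__()
        # O projects [s_j; c_j] onto the decoder's write vocabulary (Eqs. 3 and 8).
        self.O = nn.Linear(2 * hid_dim, tgt_vocab, bias=False)

    def forward(self, s_j, c_j, e_j):
        # s_j, c_j: (batch, hid_dim); e_j: (batch, m) relevance scores from Eq. 6.
        write_logits = self.O(torch.cat([s_j, c_j], dim=-1))  # scores for WRITE[y], Eq. 8
        copy_logits = e_j                                      # scores for COPY[i], Eq. 9
        # A single softmax over the concatenated actions selects WRITE[y] or COPY[i].
        return torch.softmax(torch.cat([write_logits, copy_logits], dim=-1), dim=-1)
```

At training time, the gold action at each step (write a vocabulary symbol or copy an input token) would typically be supervised against this joint distribution; which of the surrounding encoder, decoder, and attention parameters are shared across formalisms is what distinguishes the three multi-task architectures the paper compares.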