Systematicity in a Recurrent Neural Network by Factorizing Syntax and Semantics

Jacob Russin ([email protected]), Department of Psychology, UC Davis
Jason Jo, MILA, Université de Montréal
Randall C. O'Reilly, Department of Psychology and Computer Science, Center for Neuroscience, UC Davis
Yoshua Bengio, MILA, Université de Montréal; CIFAR Senior Fellow

Abstract

Standard methods in deep learning fail to capture compositional or systematic structure in their training data, as shown by their inability to generalize outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. The inductive biases that might underlie this powerful cognitive capacity remain unclear. Inspired by work in cognitive science suggesting a functional distinction between systems for syntactic and semantic processing, we implement a modification to an existing deep learning architecture, imposing an analogous separation. The resulting architecture substantially outperforms standard recurrent networks on the SCAN dataset, a compositional generalization task, without any additional supervision. Our work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure, and highlights the potential of using cognitive principles to inform inductive biases in deep learning.

Keywords: compositional generalization; systematicity; deep learning; inductive bias; SCAN dataset

Introduction

A crucial property underlying the power of human cognition is its systematicity (Lake, Ullman, Tenenbaum, & Gershman, 2017; Fodor & Pylyshyn, 1988): known concepts can be combined in novel ways according to systematic rules, allowing the number of expressible combinations to grow exponentially in the number of concepts that are learned. Recent work has shown that standard algorithms in deep learning fail to capture this important property: when tested on unseen combinations of known elements, standard models fail to generalize (Lake & Baroni, 2018; Loula, Baroni, & Lake, 2018; Bastings, Baroni, Weston, Cho, & Kiela, 2018). It has been suggested that this failure represents a major deficiency of current deep learning models, especially when they are compared to human learners (Marcus, 2018; Lake et al., 2017; Lake, Linzen, & Baroni, 2019).

A recently published dataset called SCAN (Lake & Baroni, 2018) tests compositional generalization in a sequence-to-sequence (seq2seq) setting by systematically holding out of the training set all inputs containing a basic primitive verb ("jump"), and testing on sequences containing that verb. Success on this difficult problem requires models to generalize knowledge gained about the other primitive verbs ("walk", "run" and "look") to the novel verb "jump," without having seen "jump" in any but the most basic context ("jump" → JUMP). It is trivial for human learners to generalize in this way (e.g., if I tell you that "dax" is a verb, you can generalize its usage to all kinds of constructions, like "dax twice and then dax again", without even knowing what the word means) (Lake & Baroni, 2018; Lake et al., 2019). However, powerful recurrent seq2seq models perform surprisingly poorly on this task (Lake & Baroni, 2018; Bastings et al., 2018).

From a statistical-learning perspective, this failure is quite natural. The neural networks trained on the SCAN task fail to generalize because they have memorized biases that do indeed exist in the training set. Because "jump" has never been seen with any adverb, it would not be irrational for a learner to assume that "jump twice" is an invalid sentence in this language. The SCAN task requires networks to make an inferential leap about the entire structure of part of the distribution that they have not seen — that is, it requires them to make an out-of-domain (o.o.d.) extrapolation (Marcus, 2018), rather than merely interpolate according to the assumption that train and test data are independent and identically distributed (i.i.d.) (see left part of Figure 3). Seen another way, the SCAN task and its analogues in human learning (e.g., "dax") require models not to learn some of the correlations that are actually present in the training data (Kriete, Noelle, Cohen, & O'Reilly, 2013). To the extent that humans can perform well on certain kinds of o.o.d. tests, they must be utilizing inductive biases that are lacking in current deep learning models (Battaglia et al., 2018).
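To make the setup concrete, the "add jump" split described above can be illustrated with a handful of command–action pairs. The following is a minimal sketch in Python; the action names follow the paper's simplified notation, and the specific examples are illustrative rather than drawn verbatim from the dataset.

    # Illustrative SCAN-style command -> action pairs (simplified notation).
    # During training, "jump" only ever appears in isolation; compositional
    # uses of "jump" are held out for the test set.
    train_examples = {
        "jump": "JUMP",                # the novel primitive, seen only in its basic form
        "run": "RUN",
        "run twice": "RUN RUN",
        "walk and run": "WALK RUN",
        "look twice": "LOOK LOOK",
    }
    test_examples = {
        "jump twice": "JUMP JUMP",     # "jump" has never been seen with "twice" in training
        "walk and jump": "WALK JUMP",  # ...or in any conjunction
    }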
It has long been suggested that the human capacity for systematic generalization is linked to mechanisms for processing syntax, and their functional separation from the meanings of individual words (Chomsky, 1957; Fodor & Pylyshyn, 1988). Furthermore, recent work in cognitive and computational neuroscience suggests that human learners may factorize knowledge about structure and content, and that this may be important for their ability to generalize to novel combinations (Behrens et al., 2018; Ranganath & Ritchey, 2012). In this work, we take inspiration from these ideas and explore operationalizing a separation between structure and content as an inductive bias within a deep-learning attention mechanism (Bahdanau, Cho, & Bengio, 2015). The resulting architecture, which we call Syntactic Attention, separates structural learning about the alignment of input words to target actions (which can be seen as a rough analogue of syntax in the seq2seq setting) from learning about the meanings of individual words (in terms of their corresponding actions). The modified attention mechanism achieves substantially improved compositional generalization performance over standard recurrent networks on the SCAN task.

An important contribution of this work is in showing how changes in the connectivity of a network can shape its learning in order to develop a separation between structure and content, without any direct manual imposition of this separation per se. These changes act as an inductive bias or soft constraint that only manifests itself through learning. Furthermore, our model shows that attentional modulation can provide a mechanism for structural representations to control processing in the content pathway, similar to how spatial attention in the dorsal visual pathway can generically modulate object-recognition processing in the ventral visual stream (O'Reilly, Wyatte, & Rohrlich, 2017). Thus, attentional modulation may be critical for enabling structure-sensitive processing — a natural property of symbolic models — to be realized in neural hardware. This provides a more purely neural framework for achieving systematicity, compared to hybrid approaches.

Factorizing Syntax and Semantics in Seq2seq

Figure 1: Syntactic Attention architecture. Syntactic and semantic information are maintained in separate streams. The semantic stream is used to directly produce actions, and processes words with a simple linear transformation, so that sequential information is not maintained. The syntactic stream processes inputs with a recurrent neural network, allowing it to capture temporal dependencies between words. This stream determines the attention over semantic representations at each time step during decoding.

In the seq2seq setting, models must learn a mapping from arbitrary-length sequences of inputs x = {x_1, x_2, ..., x_{T_x}} to arbitrary-length sequences of outputs y = {y_1, y_2, ..., y_{T_y}}: p(y | x). In the SCAN task, the inputs are a sequence of instructions, and the outputs are a sequence of actions. The attention mechanism of Bahdanau et al. (2015) models the conditional probability of each target action given the input sequence and previous targets: p(y_i | y_1, y_2, ..., y_{i-1}, x). This is accomplished by processing the instructions with a recurrent neural network (RNN) in an encoder. The outputs of this RNN are used both for encoding individual words for subsequent translation, and for determining their alignment to actions during decoding.
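For reference, the standard formulation of Bahdanau et al. (2015) computes this conditional probability from a decoder state and a context vector built by attending over the encoder outputs; schematically (the symbols g, a, s_i, c_i, e_ij and h_j follow that paper and are not defined in the excerpt above):

    p(y_i \mid y_1, \ldots, y_{i-1}, x) = g(y_{i-1}, s_i, c_i), \qquad
    c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j, \qquad
    \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}, \qquad
    e_{ij} = a(s_{i-1}, h_j),

where the h_j are the encoder RNN outputs, s_{i-1} is the previous decoder state, and the alignment function a and readout g are learned jointly with the encoder. The Syntactic Attention architecture described next keeps this alignment machinery but changes which representations the resulting attention weights are applied to.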
The underlying assumption made by the Syntactic Attention architecture is that the dependence of target actions on the input sequence can be separated into two independent factors. One factor, p(y_i | x_j), which we refer to as "semantics," models the conditional distribution from individual words in the input to individual actions in the target. Note that, unlike in the model of Bahdanau et al. (2015), these x_j do not contain any information about the other words in the input sequence because they are not processed with an RNN. They are "semantic" in the sense that they contain the information relevant to translating the instruction words into corresponding actions. The other factor, p(j → i | x, y_{1:i-1}), which we refer to as "syntax," models the conditional probability that word j in the input is relevant to word i in the action sequence, given the entire set of instructions. This is the alignment of words in the instructions to particular steps in the action sequence, and is accomplished by computing the attention weights over the instruction words at each step in the action sequence using encodings from an RNN. This factor is "syntactic" in the sense that it must capture all of the temporal dependencies in the instructions that are relevant to determining the serial order of outputs (e.g., what should be done "twice", etc.). The crucial architectural assumption, then, is that any temporal ...
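Putting the two factors together, a minimal sketch of one decoding step in this style is given below. It is written in PyTorch; the class name, layer sizes, the simple linear alignment scorer, and the way the decoder state is updated are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SyntacticAttentionSketch(nn.Module):
        """Two-stream factorization sketch: semantics = per-word embedding (no
        recurrence); syntax = recurrent encoder/decoder used only for attention."""

        def __init__(self, vocab_size, n_actions, d_sem=64, d_syn=64):
            super().__init__()
            self.sem_embed = nn.Embedding(vocab_size, d_sem)   # word -> action-relevant code
            self.syn_embed = nn.Embedding(vocab_size, d_syn)
            self.syn_encoder = nn.LSTM(d_syn, d_syn, batch_first=True)
            self.decoder = nn.LSTMCell(d_syn, d_syn)           # driven by syntactic info only
            self.score = nn.Linear(2 * d_syn, 1)               # simplified alignment a(s_{i-1}, h_j)
            self.out = nn.Linear(d_sem, n_actions)             # actions read out from semantics only

        def forward(self, instructions, n_steps):
            # instructions: (batch, T_x) tensor of word indices
            m = self.sem_embed(instructions)                        # (batch, T_x, d_sem), no word order
            h, _ = self.syn_encoder(self.syn_embed(instructions))   # (batch, T_x, d_syn), word order kept
            batch, T_x, d_syn = h.shape
            s = h.new_zeros(batch, d_syn)                           # decoder hidden state
            c = h.new_zeros(batch, d_syn)                           # decoder cell state
            logits = []
            for _ in range(n_steps):
                # "Syntax": attention weights over input positions, from the syntactic stream.
                pair = torch.cat([s.unsqueeze(1).expand(-1, T_x, -1), h], dim=-1)
                alpha = F.softmax(self.score(pair).squeeze(-1), dim=-1)   # ~ p(j -> i | x, y_{1:i-1})
                # "Semantics": the attended mixture of per-word codes produces the action
                # (a stricter reading of the factorization would mix per-word distributions).
                sem_ctx = torch.bmm(alpha.unsqueeze(1), m).squeeze(1)
                logits.append(self.out(sem_ctx))                          # ~ p(y_i | x)
                # The recurrent state is updated from the syntactic stream only.
                syn_ctx = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)
                s, c = self.decoder(syn_ctx, (s, c))
            return torch.stack(logits, dim=1)                             # (batch, n_steps, n_actions)

The property this sketch tries to preserve is the factorization itself: the action readout sees only the attention-weighted semantic codes, while the attention weights are computed only from the recurrent syntactic stream.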
