Meta-Learning with Memory-Augmented Neural Networks

Adam Santoro (Google DeepMind), Sergey Bartunov (Google DeepMind; National Research University Higher School of Economics, HSE), Matthew Botvinick (Google DeepMind), Daan Wierstra (Google DeepMind), Timothy Lillicrap (Google DeepMind)

Abstract

Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning." Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.

1. Introduction

The current success of deep learning hinges on the ability to apply gradient-based optimization to high-capacity models. This approach has achieved impressive results on many large-scale supervised tasks with raw sensory input, such as image classification (He et al., 2015), speech recognition (Yu & Deng, 2012), and games (Mnih et al., 2015; Silver et al., 2016). Notably, performance in such tasks is typically evaluated after extensive, incremental training on large data sets. In contrast, many problems of interest require rapid inference from small quantities of data. In the limit of "one-shot learning," single observations should result in abrupt shifts in behavior.

This kind of flexible adaptation is a celebrated aspect of human learning (Jankowski et al., 2011), manifesting in settings ranging from motor control (Braun et al., 2009) to the acquisition of abstract concepts (Lake et al., 2015). Generating novel behavior based on inference from a few scraps of information (e.g., inferring the full range of applicability for a new word heard in only one or two contexts) is something that has remained stubbornly beyond the reach of contemporary machine intelligence. It appears to present a particularly daunting challenge for deep learning. In situations where only a few training examples are presented one by one, a straightforward gradient-based solution is to completely re-learn the parameters from the data available at the moment. Such a strategy is prone to poor learning and/or catastrophic interference. In view of these hazards, non-parametric methods are often considered better suited.

However, previous work does suggest one potential strategy for attaining rapid learning from sparse data, one that hinges on the notion of meta-learning (Thrun, 1998; Vilalta & Drissi, 2002). Although the term has been used in numerous senses (Schmidhuber et al., 1997; Caruana, 1997; Schweighofer & Doya, 2003; Brazdil et al., 2003), meta-learning generally refers to a scenario in which an agent learns at two levels, each associated with a different time scale. Rapid learning occurs within a task, for example, when learning to accurately classify within a particular dataset. This learning is guided by knowledge accrued more gradually across tasks, which captures the way in which task structure varies across target domains (Giraud-Carrier et al., 2004; Rendell et al., 1987; Thrun, 1998). Given its two-tiered organization, this form of meta-learning is often described as "learning to learn."

It has been proposed that neural networks with memory capacities could prove quite capable of meta-learning (Hochreiter et al., 2001). These networks shift their bias through weight updates, but also modulate their output by learning to rapidly cache representations in memory stores (Hochreiter & Schmidhuber, 1997). For example, LSTMs trained to meta-learn can quickly learn never-before-seen quadratic functions from a small number of data samples (Hochreiter et al., 2001).

Neural networks with a memory capacity provide a promising approach to meta-learning in deep networks. However, the specific strategy of using the memory inherent in unstructured recurrent architectures is unlikely to extend to settings where each new task requires significant amounts of new information to be rapidly encoded. A scalable solution has a few necessary requirements: First, information must be stored in memory in a representation that is both stable (so that it can be reliably accessed when needed) and element-wise addressable (so that relevant pieces of information can be accessed selectively). Second, the number of parameters should not be tied to the size of the memory. These two characteristics do not arise naturally within standard memory architectures, such as LSTMs. However, recent architectures, such as Neural Turing Machines (NTMs) (Graves et al., 2014) and memory networks (Weston et al., 2014), meet the requisite criteria. And so, in this paper we revisit the meta-learning problem and setup from the perspective of a highly capable memory-augmented neural network (MANN). (From here on, the term MANN will refer to the class of external-memory-equipped networks, and not to other "internal" memory-based architectures, such as LSTMs.)

We demonstrate that MANNs are capable of meta-learning in tasks that carry significant short- and long-term memory demands. This manifests as successful classification of never-before-seen Omniglot classes at human-like accuracy after only a few presentations, and principled function estimation based on a small number of samples. Additionally, we outline a memory access module that emphasizes memory access by content, rather than additionally by memory location as in original implementations of the NTM (Graves et al., 2014). Our approach combines the best of two worlds: the ability to slowly learn an abstract method for obtaining useful representations of raw data, via gradient descent, and the ability to rapidly bind never-before-seen information after a single presentation, via an external memory module. The combination supports robust meta-learning, extending the range of problems to which deep learning can be effectively applied.

2. Meta-Learning Task Methodology

Usually, we try to choose parameters θ to minimize a learning cost L across some dataset D. For meta-learning, however, we choose parameters to reduce the expected learning cost across a distribution of datasets p(D):

    θ* = argmin_θ E_{D∼p(D)}[L(D; θ)].   (1)
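In practice, the expectation in Eq. (1) is approximated by sampling datasets (episodes) and taking stochastic gradient steps on θ. The following is a minimal sketch of that outer loop; sample_dataset and loss_and_grad are hypothetical placeholders, not components defined in the paper, standing in for "draw D ∼ p(D)" and "evaluate L(D; θ) and its gradient."

```python
def meta_train(theta, sample_dataset, loss_and_grad, lr=1e-3, n_steps=10000):
    """Stochastically minimize E_{D ~ p(D)}[L(D; theta)], as in Eq. (1).

    `sample_dataset` and `loss_and_grad` are assumed, illustrative helpers:
    the former draws one episode's dataset D ~ p(D); the latter returns the
    learning cost L(D; theta) and its gradient with respect to theta.
    """
    for _ in range(n_steps):
        D = sample_dataset()                  # one episode's dataset, D ~ p(D)
        loss, grad = loss_and_grad(theta, D)  # L(D; theta) and its gradient
        theta = theta - lr * grad             # slow, across-episode weight update
    return theta
```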
To accomplish this, proper task setup is critical (Hochreiter et al., 2001). In our setup, a task, or episode, involves the presentation of some dataset D = {d_t}_{t=1}^T = {(x_t, y_t)}_{t=1}^T. For classification, y_t is the class label for an image x_t, and for regression, y_t is the value of a hidden function for a vector with real-valued elements x_t, or simply a real-valued number x_t (from here on, for consistency, x_t will be used). In this setup, y_t is both a target, and is presented as input along with x_t, in a temporally offset manner; that is, the network sees the input sequence (x_1, null), (x_2, y_1), ..., (x_T, y_{T−1}). And so, at time t the correct label for the previous data sample (y_{t−1}) is provided as input along with a new query x_t (see Figure 1(a)). The network is tasked to output the appropriate label for x_t (i.e., y_t) at the given timestep. Importantly, labels are shuffled from dataset to dataset. This prevents the network from slowly learning sample-class bindings in its weights. Instead, it must learn to hold data samples in memory until the appropriate labels are presented at the next time-step, after which sample-class information can be bound and stored for later use (see Figure 1(b)). Thus, for a given episode, ideal performance involves a random guess for the first presentation of a class (since the appropriate label cannot be inferred from previous episodes, due to label shuffling), and the use of memory to achieve perfect accuracy thereafter. Ultimately, the system aims at modelling the predictive distribution p(y_t | x_t, D_{1:t−1}; θ), inducing a corresponding loss at each time step.
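As a concrete illustration of this episode construction (an assumed sketch, not the authors' released code), the function below samples an episode from a hypothetical images_by_class collection, draws a fresh class-to-label assignment for that episode, and pairs each query x_t with the previous step's label y_{t−1}:

```python
import numpy as np

def make_episode(images_by_class, n_classes=5, n_per_class=10, rng=np.random):
    """Sample one episode with per-episode label shuffling and time-offset targets.

    `images_by_class` is assumed to be a list of per-class lists of images.
    Returns `inputs`, a list of (x_t, y_{t-1}) pairs (the y_0 slot is a null
    label), and `targets`, the labels y_t the network should emit at each step.
    """
    chosen = rng.choice(len(images_by_class), size=n_classes, replace=False)
    episode_labels = rng.permutation(n_classes)     # fresh class -> label binding
    xs, ys = [], []
    for c, label in zip(chosen, episode_labels):
        idx = rng.choice(len(images_by_class[c]), size=n_per_class, replace=False)
        xs += [images_by_class[c][i] for i in idx]
        ys += [int(label)] * n_per_class
    order = rng.permutation(len(xs))                # interleave classes within the episode
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    inputs = list(zip(xs, [None] + ys[:-1]))        # present y_{t-1} alongside x_t
    return inputs, ys
```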
This task structure incorporates exploitable meta-knowledge: a model that meta-learns would learn to bind data representations to their appropriate labels regardless of the actual content of the data representation or label, and would employ a general scheme to map these bound representations to appropriate classes or function values for prediction.

Figure 1. Task structure. (a) Task setup: Omniglot images (or x-values for regression), x_t, are presented with time-offset labels (or function values), y_{t−1}, to prevent the network from simply mapping the class labels to the output; labels are shuffled from episode to episode. (b) Network strategy: bind and encode sample-and-label information in external memory, then retrieve the bound information when a class is seen again.

3. Memory-Augmented Model

3.1. Neural Turing Machines

The Neural Turing Machine is a fully differentiable implementation of a MANN. It consists of a controller, such as a feed-forward network or LSTM, which interacts with an external memory module through read and write heads (Graves et al., 2014).
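To give a rough sense of what access "by content" means, the sketch below computes NTM-style read weights as a softmax over cosine similarities between a controller-produced key and each memory row. This is a simplified, assumed formulation for illustration only; the paper's actual read and write mechanisms are specified in the remainder of this section and in Graves et al. (2014).

```python
import numpy as np

def content_read(memory, key, eps=1e-8):
    """Content-based read: soft attention over memory rows by key similarity.

    `memory` is an (N, M) array of N memory slots; `key` is an (M,) query
    vector emitted by the controller. Returns the retrieved vector
    r = sum_i w_i * memory[i] and the read weights w.
    """
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    w = np.exp(sims - sims.max())
    w /= w.sum()                       # softmax read weights over the N slots
    return w @ memory, w
```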
