RECONSTRUCTIVE MEMORY FOR ABSTRACT SELECTIVE RECALL

Catherine Wong
Department of Computer Science, Stanford University
[email protected]

CS379C Spring 2018, Stanford University

ABSTRACT

Biological episodic memory is context-dependent and reconstructive. Despite well-documented resource constraints and inaccuracies, humans can recall richly detailed events at varying levels of abstraction, remember individually salient features and abstract general gists, leverage learned knowledge to form memories in circumstances with sparse or limited information, and reconstruct and combine parts of prior experiences to guide complex future behavior. This work presents a computational mechanism for reconstructive memory that could enable similarly selective, hierarchically feature-addressable recall. We propose an architecture that integrates a recurrent neural network controller with both reconstructive memory and a content-addressable working memory. Finally, we suggest methods for evaluation and avenues for future research using reconstructive memory as a baseline for memory consolidation and compositional future planning.

1 INTRODUCTION

Modern deep neural networks excel when they can learn slowly and act quickly. On tasks that require pattern recognition or short-term decision making, such as image classification (Razavian et al., 2014) or phrase translation (Sutskever et al., 2014), even relatively simple models trained on sufficiently large datasets have proven incredibly (even unreasonably!) effective (LeCun, 2014).

In both biological and artificial neural networks, however, researchers have long suggested that explicit, quickly-writable memory plays an important role in enabling broader intelligence (Eichenbaum, 1993). Machines and organisms that can rapidly encode the world into memory can later draw on those memories to learn new knowledge quickly from a few examples (McClelland et al., 1995). Memories can allow a system to reason in complex ways over variable-like bits of information (Feldman, 2013), or perform tasks that require storing and accessing information over longer timescales, such as writing a computer program, reading and understanding a great but hefty novel, or deciding how to act now, based on the experiences accumulated over a lifetime.

The idea of combining neural networks with a more rapidly writable memory is not new. Early neural network architectures introduced the complementary learning systems framework to combat issues of catastrophic forgetting in connectionist models, inspired by evidence that the hippocampus and neocortex act similarly to support short and longer term learning (McClelland et al., 1995). More recently, newer memory-augmented neural networks (Graves et al., 2016; 2014; Sprechmann et al., 2018; Wayne et al., 2018) have revived and refined methods of building deep learning systems with external memories. In these architectures, however, the challenges of incorporating a useful external memory system generally fall along two largely distinct axes:

Deciding how to remember. Many recent systems formalize the problem of machine memory using a fully differentiable Von Neumann-like architecture. In these systems, such as the Differentiable Neural Computer and Neural Turing Machine (Graves et al., 2016; 2014), the entire system is optimized end to end to maximize reward obtained from solving a memory-intensive task, while learning how to write and read from its external memory along the way.
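The core differentiable read in such systems is a soft, similarity-weighted attention over memory rows. As a rough sketch of this kind of content-based addressing (simplified to omit write heads and learned key networks; variable names are illustrative):

    import numpy as np

    def content_read(memory, key, beta=1.0):
        """Soft content-based read: a similarity-weighted blend of memory
        rows, in the spirit of NTM/DNC addressing (illustrative only)."""
        # Cosine similarity between the query key and each memory row.
        sims = memory @ key / (
            np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
        weights = np.exp(beta * sims)
        weights /= weights.sum()      # softmax over rows; fully differentiable
        return weights @ memory       # blended read vector

    memory = np.random.randn(8, 16)   # 8 slots, 16-dim contents
    read = content_read(memory, np.random.randn(16))

Because every step is differentiable, gradients from the task reward flow back through the read weights, which is precisely what lets (and forces) the system to learn what to store end to end.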
While successful at some tasks, the complexity and underconstrained nature of end-to-end optimization can make these systems very difficult to train. Worse, a network that learns to remember only in service of immediate rewards may be woefully short-sighted, with no incentive to remember latent information that could be useful for metalearning and future tasks (Wayne et al., 2018).

More recent architectures (Sprechmann et al., 2018; Wayne et al., 2018) separate the processes of encoding and using memory. Often directly inspired by evidence that the brain is composed of specialized modules optimized towards functionally diverse ends, these architectures generally leverage separately trained encoder networks to form memories and controller networks to select and use them. However, in restricting how information is compressed and stored into memories, these architectures face another challenge:

Deciding what to remember. Architectures that leverage compressive encoding for content-based memory retrieval must balance the demands of efficient and effective content-based lookup with domain specificity. In general, these systems must decide, at the time the memory is encoded, which details matter. Inevitably, this means that the system risks either throwing important details away or facing the difficulty of handling very large memories using content-based lookup. If memories are under-compressed, the system will need to generate more precise lookup keys to retrieve complex memories, and may be hard pressed to generate the correct cues to recall useful information later.

In this paper, we propose combatting both of these challenges through a system capable of both content-based and selectively reconstructive memory. While research into the neurological mechanisms of memory is still ongoing and far from complete, ample evidence suggests that biological memories are both reconstructive and generative (Schacter, 2012). Specifically, we draw on two ideas from cognitive science and neuroscience: 1. hippocampal indexing theory, which proposes that memories can be stored as memory traces of activations in encoding neural networks, and restored by reactivating those same neurons (Teyler & Rudy, 2007), and 2. evidence that encoding networks decompose sensory information into hierarchical, discriminative features at different layers of abstraction (Zeiler & Fergus, 2014).

In our system, we propose augmenting a neural network with both a content-addressable working memory and a longer-term reconstructive memory accessible by specifying specific subsets of encoding neurons that produced a memory trace. That is, the system should mitigate the tradeoff of deciding what and how much to remember by remembering as much as possible, and then leveraging the hierarchically abstract structure of the encoding network to learn where in its memory to look later on.

The main contributions of this work are to:

1. Introduce a mechanism for reconstructive memory, using the same sensory network for encoding and decoding memories.

2. Introduce a mechanism for neuronal-unit based recall using reconstructive memory, in which a controller learns to specify which parts of a full memory trace it should reconstruct (see the sketch following this list).

3. Propose a sample integrated architecture that makes use of this memory scheme to combine a reconstructive, neuronal-unit based memory and a smaller, content-based working memory.
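To make contribution 2 concrete, the following is a minimal sketch of unit-addressed trace recall, assuming traces are stored as per-layer activation arrays. The class and variable names are hypothetical illustrations, not a specification of the model:

    import numpy as np

    class TraceMemory:
        """A large bank of full activation traces, recalled by naming
        subsets of the encoding units that produced them (illustrative)."""
        def __init__(self):
            self.traces = []                    # each trace: {layer: activations}

        def write(self, trace):
            self.traces.append(trace)
            return len(self.traces) - 1         # index of the stored trace

        def recall(self, index, unit_masks):
            # The controller specifies which units to reactivate; everything
            # else is suppressed at recall time rather than discarded at
            # encoding time.
            trace = self.traces[index]
            return {layer: acts * unit_masks.get(layer, np.zeros_like(acts))
                    for layer, acts in trace.items()}

    # Store a full two-layer trace, then recall only the abstract layer.
    mem = TraceMemory()
    idx = mem.write({"conv1": np.random.randn(16, 32, 32),
                     "conv5": np.random.randn(256, 4, 4)})
    partial = mem.recall(idx, {"conv5": np.ones((256, 4, 4))})

The recalled partial activations could then be passed back through the reconstructive decoder described in Section 2.1 to regenerate only the remembered content the controller asked for.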
This paper is intended as a proposal, and focuses primarily on mechanisms for reconstructive memory storage and recall. However, there is clearly room both to further develop the model and to refine these mechanisms for practical use, and we discuss possibilities for future work at the end of the paper.

2 SYSTEM OVERVIEW

At a high level, the model (Figure 1) consists of two trainable components:

1. An encoder, which both encodes perceptual inputs into memory and also functions as a reconstructive decoder, and

2. A controller, which decides how to access and use its memories.

These components interact with two memory banks:

1. A small, content-addressable working memory, and

2. A larger bank of memory traces, accessible by specifying subsets of activations to recall in any memory trace.

We describe these components in detail below.

Figure 1: The proposed model architecture, with a recurrent neural network controller, an encoder that encodes perceptual inputs into memory and functions as a reconstructive decoder, a content-addressable working memory, and a reconstructive memory bank.

2.1 ENCODER

In this work, as a running example, we focus on the encoding and reconstructive recall of visual memories. We consider an encoder network that is a simple multi-layered convolutional neural network, pre-trained on a simple object classification objective. Convolutional neural networks (CNNs) have consistently produced excellent results on visual tasks even with relatively simple training objectives (Krizhevsky et al., 2012), and evidence suggests that the dynamics of deep CNNs may resemble hierarchical visual encoding in the human visual ventral stream (Yamins et al., 2014). More importantly for this work, a large body of work has demonstrated that neuronal activations at intermediate layers of these networks can produce relatively disentangled, hierarchically abstract, and even semantically interpretable features (Zeiler & Fergus, 2014): for example, activations at initial layers generally respond to semantically simple visual features such as edges and basic shapes, whereas later layers may respond to increasingly abstract features, such as eyes or dog ears. Additionally, deconvolutional methods exist to reconstructively map activities at intermediate layers back to the input pixel space (Zeiler et al., 2010).

More generally, however, the same idea of reconstructive decoding can be extended to encoders for other applications (such as text decoders (Wayne et al., 2018)), and easily encompasses architectures optimized for different cost functions, such as predictive
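Returning to the visual running example: the sketch below illustrates how a full memory trace might be captured from the encoder's intermediate layers, assuming a PyTorch-style network with forward hooks. The toy encoder and layer names are illustrative stand-ins, not the paper's implementation:

    import torch
    import torch.nn as nn

    # Toy stand-in for a pre-trained convolutional encoder.
    encoder = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # early, edge-like features
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # later, more abstract features
    )

    def record_trace(net, image):
        """Run the encoder once and capture every layer's activations as a
        memory trace, in the spirit of hippocampal indexing."""
        trace, handles = {}, []
        for name, module in net.named_children():
            def hook(mod, inputs, output, name=name):
                trace[name] = output.detach()
            handles.append(module.register_forward_hook(hook))
        with torch.no_grad():
            net(image)
        for h in handles:
            h.remove()
        return trace                                 # {layer_name: activations}

    trace = record_trace(encoder, torch.randn(1, 3, 32, 32))
    # A controller could later name only the most abstract units to recall:
    abstract_only = {k: v for k, v in trace.items() if k in {"2", "3"}}

A mirrored stack of transposed convolutions (e.g., nn.ConvTranspose2d) could then map such partial activations back toward pixel space, in the spirit of the deconvolutional visualization methods cited above.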
