Attractor Network Models

X-J Wang, Yale University School of Medicine, New Haven, CT, USA

© 2009 Elsevier Ltd. All rights reserved.

Introduction

The term attractor is increasingly used by neurophysiologists to characterize stable, stereotyped spatiotemporal neural circuit dynamics. Examples include readily identifiable rhythmic activity in a central pattern generator; well-organized propagation patterns of neuronal spike firing in cortical circuits in vivo or in vitro; self-sustained persistent activity during working memory; and neuronal ensemble representation of associative long-term memory. In these examples, the rich and complex neural activity patterns are generated largely through regenerative mechanisms and emerge as collective phenomena in recurrent networks. Interest in attractor networks arises in part from our growing appreciation that neural circuits are typically endowed with an abundance of feedback loops, and that attractor theory may provide a conceptual framework and technical tools for understanding such strongly recurrent networks.

The concept of attractors originates from the mathematics of dynamical systems. Given a fixed input, a system consisting of interacting units (e.g., neurons) typically evolves over time toward a stable state. Such a state is called an attractor because a small transient perturbation alters the system only momentarily; afterward, the system converges back to the same state. An example is illustrated in Figure 1, in which a neural network is described by a computational energy function in the space of neural activity patterns, and the time evolution of the system corresponds to a movement downhill, in the direction of decreasing computational energy. Each of the minima of the energy function is thus a stable (attractor) state of the system; a maximum, at the top of a hill between valleys, is an unstable state. Such a depiction is not merely schematic, but can be rendered quantitative for certain neural models.

Figure 1  Schematic of an attractor model of neural networks. Computational energy function is depicted as a landscape of hills and valleys plotted against the neural activity states (on the x-y plane). The synaptic connections and other properties of the circuit, as well as external inputs, determine its contours. The circuit computes by following a path that decreases the computational energy until the path reaches the bottom of a valley, which represents a stable state of the system (an attractor). In an associative memory circuit, the valleys correspond to memories that are stored as associated sets of information (the neural activities). If the circuit is cued to start out with approximate or incomplete information, it follows a path downhill to the nearest valley (red), which contains the complete information. From Tank DW and Hopfield JJ (1987) Collective computation in neuronlike circuits. Scientific American 257: 104–114.
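To see how the energy-descent picture can be made quantitative, consider the following toy Hopfield-style associative memory, a minimal sketch in the spirit of Figure 1. The network size N, the stored random patterns, the Hebbian weight rule, and the recall routine are all illustrative assumptions, not specifics from this article:

```python
import random

# Toy Hopfield-style network: stored patterns become minima of a
# computational energy function, and recall is descent on that landscape.
random.seed(1)
N = 64
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(3)]

# Hebbian weights: W_ij = sum over stored patterns of x_i * x_j, no self-coupling.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(N)]
     for i in range(N)]

def energy(s):
    """Computational energy E = -1/2 * sum_ij W_ij s_i s_j."""
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

def recall(cue, sweeps=5):
    """Asynchronous updates s_i = sign(sum_j W_ij s_j); each flip lowers E."""
    s = list(cue)
    for _ in range(sweeps):
        for i in random.sample(range(N), N):
            h = sum(W[i][j] * s[j] for j in range(N))
            s[i] = 1 if h >= 0 else -1
        print("energy:", energy(s))
    return s

# Cue with a corrupted copy of pattern 0 (about 25% of bits flipped): the
# state slides downhill to the nearest valley, the complete stored pattern.
cue = [-x if random.random() < 0.25 else x for x in patterns[0]]
out = recall(cue)
print("overlap with stored pattern:", sum(a * b for a, b in zip(out, patterns[0])) / N)
```

Each asynchronous update can only lower (or preserve) the energy, so the printed energies decrease monotonically, and the final overlap is typically 1.0: the incomplete cue has slid downhill into the valley holding the complete memory.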
The characterization "stable and stereotyped" is sometimes taken to imply that an attractor network is insensitive to external stimuli and difficult to reconfigure. On the contrary, as recent studies have shown, attractor networks are not only responsive to inputs, but may in fact be instrumental to the slow time integration of sensory information in the brain. Moreover, attractors can be created or destroyed by (sustained) inputs; hence, the same network can serve different functions (such as working memory and decision making), depending on the inputs and cognitive control signals. The attractor landscape of a neural circuit is readily modifiable by changes in cellular and synaptic properties, which form the basis of the attractor model for associative learning.

In this article, we first introduce the basic concepts of dynamical systems, attractors, and bistability using simple single-neuron models. Then, we discuss attractor network models for associative long-term memory, working memory, and decision making. Modeling work and experimental evidence are reviewed, and open questions are outlined. We show that attractor networks are capable of time integration and memory storage over timescales much longer than the biophysical time constants of fast electrical signals in neurons and synapses. Therefore, strongly recurrent attractor networks are especially relevant to memory and higher cognitive functions.

The Neuron Is a Dynamical System

A passive nerve membrane is a dynamical system described by a simple resistance-capacitance (RC) circuit equation:

    C_m dV_m/dt = -g_L (V_m - E_L) + I_app    [1]

where V_m is the transmembrane voltage, C_m the capacitance, g_L the leak conductance (the inverse of the input resistance), E_L the leak reversal potential, and I_app the injected current. In the absence of an input, the membrane is at the resting state, V_ss = E_L, say -70 mV. This steady state is stable: if V_m is transiently depolarized or hyperpolarized by a current pulse, after the input offset it evolves back to V_ss exponentially with a time constant τ_m = C_m/g_L (typically 10-20 ms). Thus, V_ss is the simplest example of an attractor. More generally, for any sustained input drive I_app, the membrane always has a steady state V_ss = E_L + I_app/g_L, given by dV_m/dt = 0 (i.e., V_ss does not change over time). The behavior of this passive membrane changes quantitatively with the input current, but remains qualitatively the same: the dynamics is always an exponential time course (determined by τ_m) toward the steady state V_ss, no matter how large the current intensity I_app, the capacitance C_m, or the leak conductance g_L. Moreover, the response to a combination of two stimuli I_1 and I_2 is predicted by the linear sum of the individual responses to I_1 and I_2 presented alone. These characteristics are generally true for a linear dynamical system, such as a differential equation with only linear dependence on V_m.
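As a minimal numerical sketch of eqn [1] (the parameter values and the forward-Euler helper `simulate` below are illustrative assumptions, chosen to give a resting potential of -70 mV and τ_m = 20 ms as in the text), one can verify both the unique steady state and the linearity of the response:

```python
# Forward-Euler integration of the passive membrane, eqn [1]:
#   C_m dV_m/dt = -g_L (V_m - E_L) + I_app
C_m = 0.2    # membrane capacitance (nF)
g_L = 0.01   # leak conductance (uS); tau_m = C_m / g_L = 20 ms
E_L = -70.0  # leak reversal potential (mV)
dt = 0.01    # integration time step (ms)

def simulate(I_app, V0=E_L, t_max=200.0):
    """Integrate eqn [1] with a constant current I_app (nA); return the final V_m."""
    V = V0
    for _ in range(int(round(t_max / dt))):
        V += dt * (-g_L * (V - E_L) + I_app) / C_m
    return V

# For any sustained current, the membrane relaxes to V_ss = E_L + I_app / g_L.
for I_app in (0.0, 0.1, 0.2):
    print(f"I_app = {I_app} nA -> V = {simulate(I_app):.2f} mV "
          f"(predicted {E_L + I_app / g_L:.2f} mV)")

# Linearity: the response to two stimuli together equals the sum of the
# individual responses (measured as deviations from rest), as stated above.
r1 = simulate(0.1) - E_L
r2 = simulate(0.2) - E_L
r_both = simulate(0.1 + 0.2) - E_L
print(f"superposition: {r1 + r2:.2f} mV vs {r_both:.2f} mV")
```

However the membrane is perturbed, it always relaxes to the single steady state V_ss; this linear system has exactly one attractor.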
More interesting behaviors become possible when nonlinearity is introduced through voltage-gated ionic currents. For instance, if we add to the RC circuit a noninactivating sodium current I_NaP = g_NaP m_NaP(V_m)(V_m - E_Na), where the conductance exhibits a nonlinear (sigmoid) dependence on V_m, the membrane dynamics becomes

    C_m dV_m/dt = -g_L (V_m - E_L) - g_NaP m_NaP(V_m) (V_m - E_Na) + I_app    [2]

This system is endowed with a self-excitatory mechanism: a higher V_m leads to more I_NaP, which in turn produces a larger depolarization. If g_NaP is small, the weak positive feedback affects the membrane dynamics only slightly (Figure 2(b), red lines). With a sufficiently large g_NaP, the steady state at V_Down ≈ -70 mV remains stable, because at this voltage I_NaP is not activated; however, the strong positive feedback gives rise to a second, depolarized plateau potential at V_Up ≈ +20 mV (Figure 2(b), blue lines). The membrane is therefore bistable: a brief input can switch the system from one attractor state to another (Figure 2(a)). As a result, a transient stimulus can now be remembered for a long time, even though the system has only a short biophysical time constant (~20 ms). Unlike in a linear system, in a nonlinear dynamical system gradual changes in a parameter (here g_NaP) can give rise to an entirely new behavior (bistability).

Attractor states are stable under small perturbations, and switching between the two can be induced only by sufficiently strong inputs. How strong is strong enough? The answer can be found by plotting the total ion current I_tot = I_L + I_NaP - I_app against V_m, called the I-V curve (Figure 2(b), top, with I_app = 0). Obviously, V_m is a steady state if I_tot(V_m) = 0 (thus dV_m/dt = 0). As seen in Figure 2(b) (blue lines), the two attractors (filled circles) are separated by a third steady state (open circle). The third state is unstable: if V_m deviates slightly from it, the system does not return but converges to one of the two attractors. Indeed, if V_m is slightly below the unstable state, I_tot is positive (hyperpolarizing), so V_m decreases toward V_Down; conversely, if V_m is slightly above it, I_tot is negative (depolarizing), so V_m increases toward V_Up. An external input must therefore be strong enough to bring the membrane potential beyond the unstable steady state in order to switch the system from one attractor state to the other.

It is worth noting that the attractor landscape depends not only on the strength of the feedback mechanism (the value of g_NaP) but also on sustained inputs. As shown in Figure 2(c), with g_NaP fixed, a constant applied current I_app shifts the I-V curve up or down, and either a hyperpolarizing or a depolarizing current can destroy the bistability. This simple example demonstrates that neuronal bistable dynamics can be readily reconfigured by external inputs.
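The same fixed-point analysis can be sketched numerically. In the code below, all parameter values, the sigmoid form of m_NaP, and the helpers `m_nap` and `i_tot` are illustrative assumptions (not values from this article), tuned so that the down and up states sit near -70 and +20 mV; the script scans the I-V curve for its zero crossings and then demonstrates pulse-induced switching between the two attractors:

```python
import math

# Toy bistable membrane, eqn [2]:
#   C_m dV_m/dt = -g_L (V_m - E_L) - g_NaP * m_NaP(V_m) * (V_m - E_Na) + I_app
C_m, g_L, E_L = 0.2, 0.01, -70.0   # nF, uS, mV
g_NaP, E_Na = 0.03, 55.0           # uS, mV
V_half, k = -30.0, 5.0             # half-activation and slope of m_NaP (mV)
dt = 0.01                          # integration time step (ms)

def m_nap(V):
    """Sigmoid voltage dependence of the noninactivating Na+ conductance."""
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def i_tot(V, I_app=0.0):
    """Total current I_tot = I_L + I_NaP - I_app; steady states satisfy I_tot(V) = 0."""
    return g_L * (V - E_L) + g_NaP * m_nap(V) * (V - E_Na) - I_app

# I-V curve analysis: scan for zero crossings of I_tot. With these assumed
# parameters there are three: down state, unstable threshold, up state.
vs = [-90.0 + 0.01 * i for i in range(14001)]           # -90 mV to +50 mV
roots = [round(v, 1) for v, vn in zip(vs, vs[1:]) if i_tot(v) * i_tot(vn) < 0]
print("steady states (mV):", roots)

# Pulse-induced switching between the two attractors (forward Euler).
V, samples = E_L, []
for i in range(40000):                                  # 400 ms total
    t = i * dt
    # Brief depolarizing pulse at 100 ms, hyperpolarizing pulse at 250 ms.
    I_app = 1.0 if 100 <= t < 110 else (-2.0 if 250 <= t < 270 else 0.0)
    V += dt * (-i_tot(V, I_app)) / C_m
    if i % 5000 == 0:                                   # sample every 50 ms
        samples.append((int(t), round(V, 1)))
print(samples)  # holds the up state after the first pulse, relaxes back after the second
```

With these assumed parameters, the scan finds steady states near -70, -41, and +24 mV. A depolarizing pulse that carries V_m past the middle, unstable crossing leaves the membrane in the up state long after the pulse ends, and a stronger hyperpolarizing pulse switches it back, as in Figure 2(a).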