Cooperation Across Timescales Between Hebbian and Homeostatic Plasticity
This position paper has not been peer reviewed or edited. It will be finalized, reviewed and edited after the Royal Society meeting on ‘Integrating Hebbian and homeostatic plasticity’ (April 2016).

Cooperation across timescales between Hebbian and homeostatic plasticity

Friedemann Zenke^1 and Wulfram Gerstner^2

April 13, 2016

1) Dept. of Applied Physics, Stanford University, Stanford, CA
2) Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL

Abstract

We review a body of theoretical and experimental research on the interactions of homeostatic and Hebbian plasticity, starting from a puzzling observation: while homeostasis of synapses found in experiments is slow, homeostasis of synapses in most mathematical models is rapid, or even instantaneous. Even worse, most existing plasticity models cannot maintain stability in simulated networks with the slow homeostatic plasticity reported in experiments. To resolve this paradox, we suggest that there are both fast and slow forms of homeostatic plasticity with distinct functional roles. While fast homeostatic control mechanisms interacting with Hebbian plasticity render synaptic plasticity intrinsically stable, slower forms of homeostatic plasticity are important for fine-tuning neural circuits. Taken together, we suggest that learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales which jointly ensure stability and plasticity of neural circuits.

Introduction

Homeostasis refers to a group of physiological processes at different spatial and temporal scales that help to maintain the body, its organs, the brain, or even individual neurons in the brain in a target regime where they function optimally. A well-known example is the homeostatic regulation of body temperature in mammals, maintained at about 37 degrees Celsius independent of weather conditions and air temperature.
In neuroscience, homeostasis or homeostatic plasticity often refers to the homeostatic control of neural firing rates. In a classic experiment, cultured neurons that normally fire at, say, 5 Hz change their firing rate after a modulation of the chemical conditions in the culture, but eventually return to their target rate of 5 Hz during the following 24 hours (Turrigiano and Nelson, 2000). Thus, the experimentally best-studied form of synaptic homeostasis happens on a slow timescale of hours or days. This slow form of synaptic homeostasis manifests itself as a rescaling of the strength of all synapses onto the same neuron by a fixed fraction, for instance 0.78, a phenomenon called “synaptic scaling” (Turrigiano et al., 1998).

Mathematical models of neural networks also make use of homeostatic regulation to avoid runaway of synaptic weights or over-excitation of large parts of a neural network. In many mathematical models, homeostasis of synaptic weights is idealized by an explicit normalization of weights: if the weight (or efficacy) of one synaptic connection increases, weights of other connections onto the same neuron are algorithmically decreased so as to keep the total input in the target regime (Miller and MacKay, 1994). Algorithmic normalization of synaptic strength can be interpreted as an extremely fast homeostatic regulation mechanism. At first glance, the result of multiplicative normalization (Miller and MacKay, 1994) is identical to the “synaptic scaling” introduced above. Yet, it is fundamentally different from the experimentally studied form of synaptic homeostasis, because of the vastly different timescales: while rescaling in the model is instantaneous, in biology the effects of synaptic scaling manifest themselves only after hours. Interestingly, researchers who have tested slow forms of homeostasis in mathematical models of plastic neural networks have failed to stabilize the network activity.
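The timescale contrast can be made concrete in a short sketch (our toy illustration; the function names and all parameter values are made-up assumptions, not from the literature):

```python
import numpy as np

def multiplicative_normalization(w, target_sum):
    """Instantaneous rescaling in the spirit of Miller and MacKay (1994):
    after each plasticity step, all weights onto a neuron are rescaled so
    that their sum returns to the target immediately."""
    return w * (target_sum / w.sum())

def synaptic_scaling_step(w, rate, target_rate, dt, tau=24 * 3600.0):
    """Slow multiplicative scaling in the spirit of Turrigiano et al. (1998):
    weights drift toward restoring the target firing rate with a time
    constant tau of roughly one day (here in seconds)."""
    return w + (dt / tau) * (target_rate - rate) * w

w = np.array([0.5, 1.0, 1.5])  # synaptic weights onto one neuron

# Fast mechanism: a 30% Hebbian growth burst is undone in a single step.
w_fast = multiplicative_normalization(w * 1.3, target_sum=w.sum())

# Slow mechanism: with the neuron firing at 8 Hz instead of its 5 Hz
# target, one second of synaptic scaling barely changes the weights.
w_slow = synaptic_scaling_step(w, rate=8.0, target_rate=5.0, dt=1.0)
```

Both mechanisms rescale all weights onto the neuron by a common factor, so their end results look alike; the sketch only makes the difference in timescale explicit.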
Thus, we are confronted with a dilemma: theorists need fast synaptic homeostasis, while experimentalists have found homeostasis that is much slower. In this review we try to answer the following questions: Why do we need rapid forms of homeostatic synaptic plasticity? How fast does a rapid homeostatic mechanism have to be: hours, minutes, seconds, or less? Furthermore, what are the functional consequences of fast homeostatic control? Can the combination of fast homeostasis with Hebbian learning lead to stable memory formation? And finally, if rapid homeostasis is a requirement, what is the role of slower forms of homeostatic plasticity?

Conceptually, automatic control of a technical system is designed to keep the system in the regime where it works well, called the “working point” or the “set point” of the system. Analogously, homeostatic regulation of synapses is important to maintain the neural networks of the brain in a regime where they function well. However, in specific situations our brain needs to change. For example, when we exit the subway at an unknown station, when we are confronted with a new theoretical concept, or when we learn how to play tennis, we need to memorize the novel environment, the new concept, or the unusual movement that we previously did not know or master. In these situations, the neural networks in the brain must be able to develop an additional desired “set point”, ideally without losing the older ones.

In neuroscience it is widely accepted that learning observed in humans or animals at the behavioral level corresponds, at the level of neural networks, to changes in the synaptic connections between neurons (Morris et al., 1986; Martin et al., 2000). Thus, learning requires synapses to change. A strong form of synaptic homeostasis, however, keeps neurons and synapses always in the “old” regime, i.e. the one before learning occurred. In other words, strong homeostasis prevents learning.
A weak or slow form of homeostasis, on the other hand, may not be sufficient to control the changes induced by memory formation. Before we can solve the riddle of stability during memory formation, and before we can answer the above questions, we need to classify known forms of synaptic plasticity.

Classification of Synaptic Plasticity

Synaptic plasticity can be classified using different criteria. We consider three of these: functional relevance, synaptic specificity, and timescale of effect duration.

(i) Classification by potential function: memory formation or homeostasis

Functionally, synaptic plasticity can be classified as useful either for homeostatic regulation or for memory formation and action learning. Many experiments on synaptic plasticity were based on Hebb’s postulate, which suggests that synaptic changes, caused by the joint activity of pre- and postsynaptic neurons, should be useful for the formation of memories (Hebb, 1949). Inspired by Hebb’s postulate, classical stimulation protocols for long-term potentiation (Bliss and Lomo, 1973; Malenka and Nicoll, 1999; Lisman, 2003), long-term depression (Lynch et al., 1977; Levy and Steward, 1983), or spike-timing dependent plasticity (Markram et al., 1997; Bi and Poo, 1998; Sjöström et al., 2001) combine the activation of a presynaptic neuron, or a presynaptic pathway, with an activation, depolarization, or chemical manipulation of the postsynaptic neuron, to induce synaptic changes. In parallel, theoreticians have developed synaptic plasticity rules (Willshaw and Von Der Malsburg, 1976; Bienenstock et al., 1982; Kempter et al., 1999; Song et al., 2000; Pfister and Gerstner, 2006; Clopath and Gerstner, 2010; Brito and Gerstner, 2016; Shouval et al., 2002; Graupner and Brunel, 2012), in part inspired by experimental data.
Generically, in plasticity rules of computational neuroscience the change of a synapse from a presynaptic neuron j to a neuron i is described as

\frac{d}{dt} w_{ij} = F(w_{ij}; \mathrm{post}_i; \mathrm{pre}_j) \qquad (1)

where w_{ij} is the momentary “weight” of a synaptic connection, post_i describes the state of the postsynaptic neuron (e.g. its membrane potential, calcium concentration, spike times, or firing rate) and pre_j is the activity of the presynaptic neuron (Brown et al., 1991; Morrison et al., 2008; Gerstner et al., 2014).

A network of synaptic connections with weights consistent with Hebb’s postulate can form associative memory models (Willshaw et al., 1969; Little and Shaw, 1978; Hopfield, 1982; Amit et al., 1985; Amit and Brunel, 1997), also called attractor neural networks (Amit, 1989). However, in all the cited theoretical studies, learning is supposed to have happened somewhere in the past, while the retrieval of previously learned memories is studied under the assumption of fixed synaptic weights. The reason is that a simple implementation of Hebb’s postulate for memory formation is not sufficient to achieve stable learning (Rochester et al., 1956). Without clever homeostatic control of synaptic plasticity, neural networks with a Hebbian rule always move to an undesired state where all synapses go to their maximally allowed values and all neurons fire beyond control, or to the other extreme where all activity dies out. Therefore, homeostasis, in the sense of keeping the network in a desired regime, is a second and necessary function of synaptic plasticity. Candidate plasticity mechanisms for the homeostatic regulation of network function by synaptic plasticity include slow synaptic scaling (Turrigiano et al., 1998; Turrigiano and Nelson, 2000), rapid renormalization of weights (Miller and MacKay, 1994), or other mechanisms discussed below (Zenke et al., 2013, 2015; Chistiakova et al., 2015); but also synaptic plasticity of inhibitory interneurons
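The runaway behavior described above can be reproduced in a minimal simulation (our illustration; the specific choice F = eta * post * pre, the linear rate neuron, and all parameter values are arbitrary assumptions, not a model from the cited studies):

```python
import numpy as np

def simulate_hebbian_runaway(steps=200, dt=0.1, eta=0.5, seed=0):
    """Euler-integrate dw_ij/dt = eta * post_i * pre_j for a single linear
    rate neuron (post = w . pre) with fixed presynaptic rates. Without any
    homeostatic term, the positive feedback loop post -> dw -> post makes
    the weights explode."""
    rng = np.random.default_rng(seed)
    pre = rng.random(10)   # fixed presynaptic firing rates
    w = np.full(10, 0.1)   # small initial weights
    for _ in range(steps):
        post = w @ pre                  # postsynaptic rate of a linear neuron
        w = w + dt * eta * post * pre   # pure Hebbian update, no homeostasis
    return w

w_final = simulate_hebbian_runaway()
```

Dividing w by its sum after each update, or building a weight-decay or rate-dependent depression term into F, is the kind of homeostatic control the text refers to: it breaks the positive feedback loop and keeps the weights bounded.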