arXiv:0906.4549v2 [q-bio.NC] 13 Nov 2009

Mean-field theory of a plastic network of integrate-and-fire neurons

Chun-Chung Chen and David Jasnow
Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
(Dated: November 13, 2009)

We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level of the system as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent firing. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master-equation approach for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system, given plausible physiological parameter values describing neural transmission and plasticity; the mean-field environment allows us to determine the mean synaptic weight and the synaptic-weight distribution self-consistently. The effects of fluctuations in the total synaptic conductance and of the step sizes of the synaptic-weight plasticity are also considered. Such fluctuations result in a smoothing of the first-order transition for low numbers of afferent synapses per neuron and a broadening of the synaptic-weight distribution, respectively.

I. INTRODUCTION

The brains of living animals are perhaps the most complex organs one can find. These organs are formed of networks of neurons connecting through their synapses, and the proper functioning of these networks is crucial to the survival of the animals. Theoretical studies of the dynamics of neural networks have contributed to our understanding of how these networks might function [1, 2]. Recent progress in the study of synaptic plasticity is opening up new opportunities for understanding how networks of neurons can form. One of the most important aspects of a neural network is that it can learn from its experience [3, 4, 5]; that is, it can change the functional strength of its connections in response to the history of its neural activities. Among the best known theories of neural learning is the Hebbian theory [6, 7, 8, 9], which states that when a neuron partakes in the firing of another neuron, its ability of doing so increases. An oversimplified version of the theory states that "cells that fire together, wire together" [10], which is generally adequate for rate-based views of the activities of neurons [11]. However, the actual activities of neurons are described by action potentials, that is, "spikes", that are produced when the state of a neuron meets certain criteria, for example, when the membrane potential reaches a threshold value [12]. It was discovered that the exact timing of these spikes plays a great role in determining the synaptic plasticity, adding an element of causation to the correlation requirement of the learning process. That is, the presynaptic neuron must actually fire before the postsynaptic neuron for the former to take part in the firing of the latter. In fact, a sharp change of the synaptic efficacy from potentiation to depression can occur as the firing time of the presynaptic neuron shifts from preceding to following that of the postsynaptic one [4, 5, 13, 14].

An important feature of Hebbian learning is the presence of positive feedback. That is, the stronger a synapse is, the more likely it will be potentiated. In simulated plastic networks, this presents the possibility of runaway synaptic weights, which are often curbed through the introduction of cutoffs [15, 16, 17, 18]. In some schemes, the potentiation of a weight is suspended when it exceeds a preset cutoff value. While such cutoffs might be justified by physiological limitations of the cells or other devices, in simulation studies they often result in a pile-up of the synaptic weights distributed near the cutoffs that is not observed in real biological neural networks [15, 16, 19, 20, 21, 22]. Runaways or pile-ups can often be mitigated by softening the cutoff [11, 18, 23]. Alternatively, it has been proposed from experimental observations that the potentiation and depression processes could be asymmetric, such that potentiation is an additive process while depression is a multiplicative process. Such a mechanism was shown to produce a uni-modal distribution of synaptic weights free of pile-ups among the many afferent synapses of a single neuron when the inputs are driven by Poisson spiking sources [22].

Besides viewing brains microscopically as networks of neurons at the single-neuron scale, as discussed above, efforts aimed at understanding the structure and function of a living brain also include studies at a macroscopic scale, for example, of the interactions between different anatomical regions or even of the role played by the entirety of the vital organ [10, 24]. The study of the brain is in this respect very analogous to the study of condensed matter systems, which has faced the similar challenge of bridging the two scales of our microscopic understanding and our macroscopic observations. Among the various tools employed by condensed matter theorists in studying such systems, mean-field theory has proven valuable in that it allows one to obtain a quick grasp of the macroscopic states that can often be expected from the set of microscopic mechanisms they follow [25]. While it is well understood
that mean-field theories could fail to predict correct scaling behavior of systems in critical states, where fluctuations and long-range correlations are important, they generally are adequate (and often the first step in a systematic procedure) in qualitatively describing the system in stable phases, where fluctuations are of limited range, and useful in revealing the structure of the phase spaces of the systems. The latter is especially desirable for biological systems, where large numbers of empirical parameters and, consequently, vast phase spaces are often involved in microscopic models of the systems.

In the current study, we consider a network of integrate-and-fire neurons [26] driven by Poisson noise of fixed frequency for all neurons. The interaction between the neurons is taken to follow the neural transmission model proposed by Tsodyks, Uziel, and Markram [27] (the TUM model), which can account for the saturation effect of neural transmitter and the short-term depression of synaptic conductance. While our approach can handle alternative choices, the synaptic weights between the neurons are allowed to evolve following the spike-timing-dependent plasticity rules proposed by Bi and coworkers [22, 28]. A mean-field theory is used to determine the self-consistent average firing rate of the neurons. The noise-driven firings dominate the small-synaptic-weight regime, while self-sustaining firing activity is triggered in the large-synaptic-weight regime. As the mean synaptic weight of the network is varied, the mean-field theory predicts a hysteresis for the transition between the regimes of noise dominance and self-sustaining activity. Within the mean-field environment, assuming the neurons are firing with Poisson statistics, the dynamics of a single synapse can be viewed as a random walk process in synaptic-weight space. Under the small-jump assumption, we use the master equation to calculate the drift and diffusion coefficients for the random walk. The resulting Fokker–Planck equation allows us to predict the stationary synaptic-weight distribution of the process and close the self-consistency with the requirement that the stationary distribution reproduces the mean synaptic weight characterizing the mean-field environment. For the plasticity rules and the ranges of parameters we considered, the synaptic weights form a narrow distribution, having a width proportional to the square root of the plasticity rate. Finally, we extend the mean-field approximation to include the effect of fluctuations in the synaptic conductance as well as the variations of jump sizes in the random walk of synaptic weights.

Of course, despite our mention of brains of living creatures in our introduction, the current analysis cannot hope to address issues of dynamics involved in such a venue for myriad reasons. Among others, here we imagine homogeneous, stationary networks, which is surely not the case in the brain. However, there are continuing, revealing experiments on cultured neural networks with perhaps several hundred individual neurons, in which all-to-all coupling is not an unreasonable approximation or starting point [28, 29, 30, 31, 32]. This class of experiments represents an important step and will continue to provide important insights into the behavior of more complicated networks. Our aims are to improve the understanding of the stationary, statistical properties of such plastic networks and ultimately to address dynamical behavior during formation. Furthermore, cultured networks are on a scale approachable by the modeling and by the numerical and analytic work such as presented here. By analogy with a variety of familiar statistical mechanical models, sufficiently large homogeneous networks, in which the number of "neighbors" of a particular element grows proportionally to the number of elements in the thermodynamic limit, are expected to be well described by the type of mean-field analysis employed in this study (see, e.g., chapter 3 of [33]). In a separate publication we will present results of extensive numerical simulations on integrate-and-fire (and other representative neuron model) networks, the results of which can be put in the proper perspective via comparison with the calculations presented here and with results of in vitro experiments.

The remainder of this paper is laid out as follows. In Section II, we describe the model of the plastic network, which consists of (i) the dynamics of the membrane potential, (ii) the model of synaptic transmission, and (iii) the plasticity rules. In Section III, we apply the mean-field method to a network with fixed synaptic weights and determine the state of the system through response functions deduced from the dynamics of the neurons and of synaptic transmission under a given mean synaptic weight. In Section IV, we map to a random walk process in synaptic-weight space, determine the stationary synaptic-weight distribution, and close the self-consistency in the mean synaptic weight of the network. In Section V, we expand the mean-field theory to include the consideration of fluctuations in the total synaptic current of a neuron and fluctuations in the step sizes of synaptic weight changes. Finally, we summarize and conclude in Section VI.

II. MODEL

We consider a noise-driven plastic network of integrate-and-fire neurons. The neurons are coupled using the TUM model of neural transmission [27, 34], described below. The noise is modeled by randomly forced firing of the neurons following Poisson statistics at a fixed frequency.

A. Integrate-and-fire neuron

The integrate-and-fire model is a single-compartment neuron model where the state of a neuron i is described by a membrane potential Vi. The dynamics of the membrane potential follows the differential equation of a leaky integrator [35]

    τm dVi/dt = V0 − Vi + Rm Isyn ,    (1)

where τm is the leak time for the membrane charge, which is given by the product of the total membrane capacitance Cm and resistance Rm, while V0 is the resting potential when the neuron is in the quiescent state. The total synaptic current Isyn is a sum over each afferent synapse j for the neuron i,

    Isyn = (1/Rm) Σj wj,i Yj,i (Rj,i − Vi) ,    (2)

where, for the synapse connecting neuron j to i, wj,i is the synaptic weight, Rj,i is the reversal potential for the ion channels, and Yj,i is the fraction of active transmitters. The dimensionless synaptic weight wj,i can be interpreted as the maximum synaptic conductance, achieved when Yj,i = 1, measured in units of the membrane conductance Rm⁻¹ of the neuron. In the current study, we consider cases in which there is only one type of ion channel at all synapses, so that they share the same reversal potential Rj,i = R. In these cases, the difference of the membrane potential from the reversal potential can be factored out and Eq. (1) becomes

    τm dVi/dt = V0 − Vi + Gi (R − Vi) ,    (3)

where

    Gi ≡ Σj wj,i Yj,i    (4)

is the total synaptic conductance (in units of Rm⁻¹). In this model, a neuron fires when its membrane potential reaches a threshold value, Vth. Then, its membrane potential drops immediately to a reset value, Vr. The action potential of the integrate-and-fire model is assumed to be instantaneous and is not modeled explicitly. The spike train produced by the neuron i is defined as the function

    Si ≡ Σn δ(t − ti,n) ,    (5)

where ti,n is the time when the neuron i fires for the n-th time.

B. Tsodyks–Uziel–Markram model of neural transmission

The fractions Yj,i of the active transmitters are described by the TUM model [27] of neural transmission, where the transmitters are distributed among three states: "active", with the fraction Y; "inactive", with the fraction Z; and "ready-to-release", with the fraction X. For a synapse with presynaptic neuron j and postsynaptic neuron i, these fractions follow the dynamics [27]

    dXj,i/dt = Zj,i/τR − u Sj Xj,i ,
    dYj,i/dt = −Yj,i/τD + u Sj Xj,i ,    (6)
    dZj,i/dt = Yj,i/τD − Zj,i/τR ,

where τD is the decay time of active transmitters to the inactive state, τR is the recovery time for the inactive transmitters to return to the ready-to-release state, and u is the fraction of ready-to-release transmitters that is released to the active state by each presynaptic spike. With the conservation rule

    Xj,i + Yj,i + Zj,i = 1 ,    (7)

there are two independent variables per synapse. Since the multiplying factor of the spike train Sj in the dynamics of Xj,i depends on Xj,i itself, which is discontinuous when there is a δ-function in Sj due to a spike at a given time t, we must specify how the value of Xj,i should be evaluated at the time of the spike. Consistent with the TUM dynamics, the values of the factors multiplying Sj at the discontinuities are to be evaluated immediately before the discontinuities.

The TUM model is very flexible. Since the variable Y represents a fraction in the TUM model, its value can saturate. As a result, an increase in the firing of the presynaptic neuron can only produce a less-than-linear increase of the active transmitters. However, a linear dependence of the active transmitters on the firing rate can be recovered in the u → 0 limit, maintaining the average level of the synaptic conductance by keeping the product wj,i u constant. Additionally, the presence of the inactive state, Z, mimics the short-term depression of neural transmission, where repeated firings of the presynaptic neuron over a short period of time will temporarily reduce the maximum value attainable by Y through depositing the transmitters into the inactive state before their recovery. Finally, manipulating τR/τD can reproduce a model with a single, saturating "species" Y (see below).

Since the dynamics (6) depend only on the spike train of the presynaptic neuron j, without external disturbance all the efferent synapses of a neuron should have the same values of the transmitter fractions. Thus, we can drop the subscript i from the dynamics (6) and regard these fractions as properties of the presynaptic neuron j. Such simplification is not applicable for transmitter dynamics that depend, for example, on the postsynaptic neuron, that have synaptic-weight-dependent parameters, or that have synapse-dependent noise. These complications are not considered here.
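As a concrete illustration of Eqs. (3)–(7), the membrane and transmitter dynamics can be integrated with a simple explicit Euler scheme. The sketch below is our own minimal illustration, not the authors' simulation code: it drives a single postsynaptic neuron through one synapse with a prescribed presynaptic spike train, uses the Table I parameter values, and omits the Poisson noise of Sec. II C.

```python
# Minimal Euler integration of the integrate-and-fire neuron, Eq. (3),
# coupled to the TUM transmitter dynamics, Eq. (6).  Parameters follow
# Table I of the paper; the presynaptic spike train is prescribed.
TAU_M, V0, VTH, VR, REV = 0.020, -55e-3, -54e-3, -80e-3, 0.0
TAU_D, TAU_R, U = 0.020, 0.200, 0.5

def simulate(pre_spikes, w, T=0.5, dt=1e-4):
    """Return postsynaptic spike times for one afferent synapse of weight w."""
    V, X, Y, Z = VR, 1.0, 0.0, 0.0
    pre = {round(t / dt) for t in pre_spikes}
    post = []
    for n in range(int(round(T / dt))):
        G = w * Y                                    # Eq. (4), single synapse
        V += dt / TAU_M * (V0 - V + G * (REV - V))   # Eq. (3)
        dX, dY, dZ = Z / TAU_R, -Y / TAU_D, Y / TAU_D - Z / TAU_R
        X, Y, Z = X + dt * dX, Y + dt * dY, Z + dt * dZ
        if n in pre:           # presynaptic spike: jump terms of Eq. (6),
            Y += U * X         # with X evaluated just before the spike
            X -= U * X
        if V >= VTH:           # threshold crossing: instantaneous reset
            post.append(n * dt)
            V = VR
    return post
```

With no presynaptic input, the membrane relaxes from Vr toward V0 = −55 mV, which lies below Vth = −54 mV, so the neuron never fires; with a sufficiently strong, regularly driven synapse it fires repeatedly.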

C. Noise

While the integrate-and-fire and TUM dynamics are both deterministic, we model the stochasticity of the network with additional noise-driven firing events following Poisson statistics with the frequency λN for each neuron. The noise-driven firings are treated the same way as threshold firings; that is, the membrane potentials are brought instantaneously to the reset value Vr and the firing times are included in the spike trains (5) of the transmitter dynamics (6).

D. Spike-timing-dependent plasticity

It has been observed experimentally that the change of synaptic efficacy depends on the precise timing between the presynaptic and postsynaptic spikes [13]. When a presynaptic spike precedes a postsynaptic spike, following van Rossum, Bi, and Turrigiano [22], we take the synapse to be potentiated by an amount proportional to e^(−∆t/τA), where ∆t is the timing difference between the two spikes and τA is the size of the potentiation time window. Similarly, when a postsynaptic spike precedes a presynaptic spike, the synapse is taken to be depressed by an amount proportional to e^(−∆t/τB), where τB is the size of the depression time window [22, 28]. One reasonable mechanism for the cell to determine the interval between spikes is to imagine that the cell evaluates the concentration of a decaying chemical species released at the first spike [36]. Following such a mechanism, the synaptic changes proposed in [22] can be produced for the synaptic weight wj,i by the dynamical equation

    dwj,i/dt = ∆ Aj Si − r wj,i Bi Sj ,    (8)

where ∆ is the potentiation constant for causal spike pairs, r is the depression factor for anti-causal spike pairs, while the A's and B's are the amplitudes of such (assumed) chemical species controlling the magnitudes of potentiation and depression. These amplitudes are assumed to follow the dynamics [36]

    dσj/dt = −σj/τσ + fσ Sj ,  σ = A, B,    (9)

where different choices of the spike increments fσ lead to different schemes of spike pairing. For example, fσ = 1 leads to "all-to-all" pairing of presynaptic and postsynaptic spikes, while fσ = 1 − σ leads to "nearest-neighbor" pairing of spikes [36]. We note that the stochastic differential equations in Eqs. (8) and (9) above should follow Itô's interpretation [37]. That is, when the value of σ jumps because of a spike in S, the value of fσ, which determines the size of the jump, is evaluated immediately before the jump. The effects of the two choices of fσ are identical when spike pairs are far apart but differ when possible spike pairs are close compared with the decay time τσ of the respective timing chemicals. With the choice fσ = 1, the values of A or B can increase linearly with the frequency of spikes without limit. On the other hand, for the choice fσ = 1 − σ, the value of σ will never exceed 1, its value immediately after a single spike. It is reasonable to expect, in a realistic setting, that the values of A or B will increase with increasing spike rates, but they should saturate and be limited to some physiologically determined maximum values with further increase in the spike rates. Assuming spike increments fσ = uσ(1 − σ) with uσ < 1 can produce this behavior. The parameter uσ can be interpreted as the mean fraction of chemicals released at each presynaptic firing relative to the amount that can possibly be produced without σ exceeding its maximum value of 1.

Additionally, certain effects of fatigue can also be expected in synaptic plasticity [38]. That is, while σ (= A or B) can saturate when the spike rate increases, the saturation value itself can be expected to decrease when a high spiking rate is sustained for an extended period of time [27, 34]. Such an effect can be modeled by assuming an additional inactive state Iσ, which takes up a fraction of the σ that can be produced for each spike, so that the increment becomes fσ = uσ(1 − σ − Iσ). One can assume simple dynamics for Iσ, given by

    dIσ/dt = σ/τIσ − Iσ/τRσ ,    (10)

so that Iσ is fed by the presence of σ with the rate τIσ⁻¹ and decays with a recovery time constant τRσ. One expects that the rate of σ's feeding into the inactive state [first term on the right of Eq. (10)] should be less than the rate of its own decay [first term on the right of Eq. (9)], so we require the condition τIσ ≥ τσ.

The considerations outlined above conspire to produce a variant of the TUM model similar to what we have used for the neural transmitters, apart from possibly different values of the constants involved. In the current simulation study of plastic integrate-and-fire networks, we retain the qualitative structure of the system but simplify by assuming τσ = τIσ = τD, τRσ = τR, and uσ = u, so that the same values of Y calculated for the neural transmitter in the TUM model can be used as surrogates for A and B in the synaptic plasticity, yielding

    dwj,i/dt = ∆ Yj Si − r wj,i Yi Sj .    (11)

These simplifications reduce the amount of computation required without sacrificing qualitative and semi-quantitative analysis.

To summarize, our system of equations reduces to Eqs. (3)–(7) and (11). The "control parameter" in subsequent analysis will turn out to be

    w⋆ ≡ ∆/r ,    (12)

which, when fixed, leaves the depression factor r as an overall control of the rate of plasticity.
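Because Eq. (11) acts only at spike times, the weight update is naturally event-driven: each postsynaptic spike potentiates by ∆Yj and each presynaptic spike depresses by r w Yi. The following sketch is our own illustration, not the authors' code; the transmitter fractions are supplied as fixed numbers rather than computed from the TUM dynamics, and the parameter values are chosen to give w⋆ = ∆/r = 0.02.

```python
# Event-driven form of the plasticity rule, Eq. (11): additive potentiation
# at postsynaptic spikes, multiplicative depression at presynaptic spikes.
DELTA, R_DEP = 2e-4, 0.01       # potentiation constant and depression factor
W_STAR = DELTA / R_DEP          # control parameter w* = DELTA / r, Eq. (12)

def apply_events(w, events):
    """events: ('pre' | 'post', Y-of-the-spiking-neuron) pairs in time order."""
    for kind, y in events:
        if kind == 'post':
            w += DELTA * y      # + DELTA * Y_j at a postsynaptic spike
        else:
            w -= R_DEP * w * y  # - r * w * Y_i at a presynaptic spike
    return w
```

When pre- and postsynaptic spikes arrive at comparable rates with comparable transmitter fractions, the additive gains and the multiplicative losses balance near w⋆, anticipating the drift analysis of Sec. IV.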

III. MEAN-FIELD APPROXIMATION

A. Single neuron response function

To formulate a mean-field approximation, we assume that the firing rates of the neurons are given by the same mean-field value λ̄ and that all the synapses have the same synaptic weight w̄. The total synaptic conductance G of a singled-out postsynaptic neuron is a function of time that jumps whenever there is a firing of a presynaptic neuron and subsequently decays with the time constant τm, as in Eq. (3). Consider the limit of a large number K of afferent synapses per neuron while keeping the product w̄K constant. Then, the total frequency of all presynaptic firings is Kλ̄ → ∞. In this limit, the jump sizes of G, being proportional to w̄, approach 0 while the rate of making the jumps diverges. For fixed λ̄ and w̄K, we may thus ignore fluctuations and consider G as a constant over time.

Figure 1: Firing frequency λ(G) of an integrate-and-fire neuron under constant total synaptic conductance G given by Eq. (16). Also shown are mean-field active synaptic transmitter fractions Y(λ) (for K = 31, w̄ = 0.01) driven by a Poisson spike train of frequency λ given by (17). The dashed line represents a numerically calculated correction to Y(λ) due to the periodic firing of the neuron when driven by constant total synaptic conductance G. The parameters are listed in Table I. The arrow indicates that the curve shifts to the right for increasing w̄. The dot-dashed line is the neuron response function numerically calculated for the Morris–Lecar model used in [34] with a slightly higher background current.

For a constant total synaptic conductance G, the firing frequency of an integrate-and-fire neuron can be solved exactly by setting the initial membrane potential to V(0) = Vr, the reset value, and finding the time τ it takes to reach the firing threshold V(τ) = Vth. The solution of Eq. (3) is given by

    τ = − [τm/(G + 1)] ln{ [Ṽ(G) − Vth] / [Ṽ(G) − Vr] }    (13)

when Ṽ(G) ≥ Vth, where

    Ṽ(G) ≡ (V0 + G R)/(G + 1)    (14)

is the resting value of the membrane potential under the constant conductance G. For Ṽ < Vth, we have τ = ∞ since the neuron never fires.

Table I: Values of parameters used in calculations

    integrate & fire                     TUM model
    resting potential  V0:  −55 mV       decay time        τD:  20 ms
    leak time          τm:   20 ms       recovery time     τR: 200 ms
    firing threshold   Vth: −54 mV       release fraction  u:   0.5
    reset potential    Vr:  −80 mV       noise frequency   λN:  1 Hz
    reversal potential R:     0 mV       plasticity rate   r:   0.01

In addition to firings due to crossing the threshold, the period τ can be cut short by Poisson noise of frequency λN as the membrane potential sweeps from Vr to Vth. Given Poisson statistics, the average interval τ̄ between all firings (either threshold-crossing or noise-driven) is given by

    τ̄ = ∫₀^τ dt λN t e^(−λN t) + τ ∫_τ^∞ dt λN e^(−λN t) = λN⁻¹ [1 − e^(−λN τ)] ,    (15)

and the firing frequency λ of the neurons is given by

    λ(G) = λN { 1 − [ (Ṽ(G) − Vth)/(Ṽ(G) − Vr) ]^(λN τm/(G+1)) }⁻¹    (16)

for Ṽ ≥ Vth, and by λ = λN otherwise. (See Fig. 1.) The function λ(G) obtained in (16) characterizes the mean-field response of the integrate-and-fire network model in the current study [45], but similar functions can be obtained analytically or numerically for different neuron models, such as the Hodgkin–Huxley, Morris–Lecar, or FitzHugh–Nagumo models (see, e.g., [11, 39]). (An example of a numerically calculated response function for a Morris–Lecar neuron is included in Fig. 1.)

B. Synapse response function

To obtain the mean-field active transmitter fraction Ȳ of a neuron, we further approximate the firing of the neurons as having Poisson statistics described by the mean-field firing rate λ̄ [17]. Keeping Y and Z as the independent variables of the TUM model, the stochastic dynamics (6) can be averaged over an ensemble of Poisson spike trains with ⟨S⟩ = λ for the presynaptic neuron to yield the functions

    Y(λ) = u τD λ / [1 + u (τD + τR) λ]    (17)

and Z(λ) = (τR/τD) Y(λ) for the average active (Y) and inactive (Z) transmitter fractions. The function Y(λ) characterizes the mean-field response of a synapse [45]. The value of the active transmitter fraction for a mean-field network can be obtained through the relation

    Ȳ = Y(λ̄) .    (18)

C. Self-consistency condition

The total synaptic conductance of a neuron in a mean-field network with K afferent synapses per neuron is given by

    Ḡ = K w̄ Ȳ = K w̄ Y(λ̄) ,    (19)

which can be substituted into Eq. (16) to complete the self-consistency condition closing the set of Eqs. (16), (17), and (19). The curve for Eq. (19) is also shown in Fig. 1, and the intersections with the neuron response function λ(G) represent fixed points of the network activities, λ̄, given the mean-field synaptic weight w̄. The function λ̄(w̄) is shown in Fig. 2. The mean-field theory predicts a first-order phase transition and hysteresis as the mean-field synaptic weight w̄ is varied. At low synaptic weight, the activities of the network are dominated by the external noise. As the synaptic weight is increased, a pair of new fixed points emerges at higher firing frequencies. Among the three fixed points, only the upper and the (noise-dominated) lower fixed points are stable. As the synaptic weight is increased further, the lower fixed point is eventually annihilated by the unstable (middle) fixed point, leaving only the upper fixed point, representing higher firing activities.

Figure 2: Mean-field firing frequency λ̄ as a function of the mean-field synaptic weight w̄, determined by the intersections of the neuron response function and the synapse response function as plotted in Fig. 1 for K = 31. The two arrows show the region of hysteresis where two stable fixed points, the noise-dominated lower fixed point and the persistently active upper fixed point, coexist. The symbols will be used in connection with Figs. 4 and 7.

IV. RANDOM WALK IN SYNAPTIC WEIGHT SPACE

Figure 3: We consider the dynamics of a single synapse of synaptic weight w in a mean-field environment.

Given the mean-field firing frequency λ̄ for a fixed mean synaptic weight w̄, motivated by the Bethe–Peierls approximation [25], we consider a single synapse within such a mean-field environment, as illustrated in Fig. 3. The time dependence of the synaptic weight is governed by Eq. (8) and the simplification embodied in Eq. (11). With the assumption of Poisson spike trains produced by both the presynaptic and postsynaptic neurons, the dynamics of the synaptic weight maps onto an asymmetric random walk (jump process) in w-space with w-dependent jump rates and jump sizes [37]. From Eqs. (8) and (11), the synaptic weight increases by ∆A2 = ∆Y2 whenever the postsynaptic neuron fires and decreases by r w B1 = r w Y1 whenever the presynaptic neuron fires. As the presynaptic neuron is driven by the mean-field environment, its firing frequency λ2 is given by the mean-field firing frequency λ̄. On the other hand, the postsynaptic neuron is partly driven by the synapse in question (Fig. 3); thus its firing frequency λ1 is w dependent. As calculated from the neuron response function λ(G), this firing rate is given by

    λ1 = λ[ ((K − 1) w̄ + w) Ȳ ] ,    (20)

where, even though the synaptic weight w is singled out from among the K afferent synapses, all of the presynaptic neurons are from the mean-field environment (see Fig. 3) with the active transmitter fraction Ȳ. As for the step sizes, since A2 depends on the firing of the presynaptic neuron, its value is given by the mean-field value Ȳ. On the other hand, B1 is determined by the firing of the postsynaptic neuron; therefore its value is also w dependent and given by B1 = Y(λ1). As mentioned earlier, we set A = B = Y to reduce the number of variables and the computational intensiveness. In general, the system can have different characteristic functions A(λ) and B(λ) for potentiation and depression.

A. Mean-field synaptic weight

While the "random walk" of the synaptic weight is under the influence of the mean-field environment, a self-consistency condition requires the average value of the singled-out synaptic weight from the random walk process to be the same as the assumed mean-field value w̄.
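The closure of Eqs. (16), (17), and (19) can also be carried out numerically by iterating λ → λ(K w̄ Y(λ)). The sketch below is our own illustration, not the authors' code, using the Table I values; in the hysteresis region of Fig. 2, starting the iteration from a low or a high rate at the same w̄ converges to the noise-dominated or the persistently active fixed point, respectively.

```python
# Self-consistent mean-field firing rate: iterate the neuron response
# lambda(G) of Eq. (16) with G-bar = K * w-bar * Y(lambda), Eqs. (17), (19).
TAU_M, V0, VTH, VR, REV = 0.020, -55e-3, -54e-3, -80e-3, 0.0
TAU_D, TAU_R, U, LAM_N = 0.020, 0.200, 0.5, 1.0

def firing_rate(G):
    """Neuron response function lambda(G), Eqs. (13)-(16)."""
    v_tilde = (V0 + G * REV) / (G + 1.0)            # Eq. (14)
    if v_tilde < VTH:
        return LAM_N                                # only noise-driven firings
    ratio = (v_tilde - VTH) / (v_tilde - VR)
    return LAM_N / (1.0 - ratio ** (LAM_N * TAU_M / (G + 1.0)))   # Eq. (16)

def y_mean(lam):
    """Synapse response function Y(lambda), Eq. (17)."""
    return U * TAU_D * lam / (1.0 + U * (TAU_D + TAU_R) * lam)

def self_consistent_rate(w_bar, K=31, lam0=1.0, n_iter=2000):
    lam = lam0
    for _ in range(n_iter):
        lam = firing_rate(K * w_bar * y_mean(lam))  # close Eq. (19)
    return lam
```

For example, at w̄ = 0.05 the iteration started from λ = 1 Hz stays on the noise-dominated fixed point λ̄ = λN, while started from λ = 100 Hz it settles on an upper fixed point of tens of hertz, illustrating the coexistence of the two stable solutions.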

Equation (8) or its simplification (11) can be viewed as a stochastic equation governing the jumps in synaptic weight space upon arrival of spikes and which allows us to calculate the drift hw (∆t) − wi v ≡ lim w(0)=w ∆t→0 ∆t = ∆Y¯ λ1 − rwY (λ1) λ,¯ (22)

where λ1 is given by Eq. (20). One also has for the dif- fusion coefficient (arising from the second moment of the synaptic weight changes)

distribution from the random walk, P_s(w; w̄), which is w̄-dependent, the self-consistency requires

    w̄ = ⟨w⟩ ≡ ∫₀^∞ w P_s(w; w̄) dw.    (21)

The stationary synaptic weight distribution, following the stochastic dynamics (11), cannot be easily determined even with the assumption of Poisson spike trains. However, straightforward numerical simulations of a system of a single synapse and two Poisson neurons can be used to obtain the stationary distribution P_s(w; w̄) to any reasonable precision. Figure 4 shows typical plots of the average synaptic weight from the random-walk process as a function of the mean-field synaptic weight w̄. Notice that the symbols are lower (higher) than the dashed line on the left (right), meaning that a deviation of w̄ from w⋆ results in a smaller deviation of ⟨w⟩ from w⋆. Hence, w̄ = w⋆ is a stable solution for the condition (21).

Figure 4: Average synaptic weight from the simulation of a single synapse between two Poisson neurons, as shown schematically in Fig. 3, versus the mean-field synaptic weight w̄ for w⋆ = 0.02. The mean-field firing frequencies λ̄ used in the single-synapse simulations are given by Fig. 2, and the results for both the upper and lower fixed points (marked by corresponding symbols in Fig. 2) are plotted.

B. Fokker–Planck equation

Since the synaptic weight dynamics (8) [or (11)] only depends on the current weight of the synapse (a Markov process), we can approximate the dynamics of the weight distribution for small jumps with a master equation, ignoring higher-order moments in the Kramers–Moyal expansion (see, e.g., [37]). A Fokker–Planck formalism has been applied to populations of neurons in previous studies of the distributions in membrane potential [40, 41, 42]. Here we apply this type of formalism to a population of plastic synapses, similar to the work by Rubin, Lee, and Sompolinsky [17]. The diffusion coefficient of the weight random walk is

    D ≡ lim_{Δt→0} ⟨[w(Δt) − w]²⟩_{w(0)=w} / (2Δt)
      = (1/2) Δ² Ȳ² λ₁ + (1/2) r² w² [Y(λ₁)]² λ̄.    (23)

The dynamics of the synaptic weight distribution P(w) is then approximated for small jumps by the Fokker–Planck equation

    ∂P(w)/∂t = −(∂/∂w)[v P(w)] + (∂²/∂w²)[D P(w)]    (24)

when higher-order moments in the Kramers–Moyal expansion are ignored. Consider the stationary state P_s(w) of the weight distribution. The Fokker–Planck equation (24) can be integrated once to yield

    v P_s(w) = (∂/∂w)[D P_s(w)] + const.,    (25)

which is formally solved by

    P_s(w) ∝ (1/D) exp[∫^w dw′ v(w′)/D(w′)].    (26)

The peak positions ŵ of the distribution (26) (if they exist) are given by v(ŵ) = D′(ŵ). Since the diffusion coefficient (23) is one order higher in the plasticity rate r compared with the drift (22), under the small-jump approximation the peak positions are given approximately by the zeros of v. When w̄ = w⋆ ≡ Δ/r, we can verify that the drift (22) is zero and thus the distribution P_s(w) should peak at w = w̄. We can expand v and D in powers of w − w̄ and keep only the first-order terms. The exponent of Eq. (26) becomes

    ∫^w dw′ (w⋆ − w′)/(r Ȳ w⋆²) = −(w − w⋆)²/(2 r Ȳ w⋆²) + const.,    (27)

which results in a Gaussian distribution for P_s(w). The distribution is very narrow; its width is given by

    Δw ≃ √(r Ȳ) w⋆,    (28)

which corresponds to only about 1% of w⋆ for the parameters we have considered in Table I above. Simulations for a fully connected network using the same model and parameters yield a width about five times larger but with the same scaling power in r in the noise-dominated regime [43]. We will return to this point below.
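The linearized drift–diffusion picture behind Eqs. (27) and (28) can be checked with a direct simulation of the weight random walk. The sketch below is only an illustration, not the paper's simulation code: at the self-consistent point the potentiation rate λ₁ and the depression rate λ̄ are equal, each potentiation adds ΔȲ, and each depression subtracts rwȲ. All parameter values are hypothetical (in particular, they are not those of Table I).

```python
import math
import random

# Hypothetical parameter values -- NOT the paper's Table I values.
r = 0.01            # plasticity rate
w_star = 0.02       # target weight, w* = Delta / r
Y_bar = 0.5         # mean active transmitter fraction
delta = r * w_star  # additive potentiation step, Delta

def sample_walk(n_events=200000, seed=1):
    """Random walk in weight space at the symmetric point lambda_1 = lambda_bar:
    potentiation (+Delta*Y_bar) and depression (-r*w*Y_bar) are equally likely."""
    rng = random.Random(seed)
    w = w_star
    ws = []
    for _ in range(n_events):
        if rng.random() < 0.5:
            w += delta * Y_bar      # additive potentiation event
        else:
            w -= r * w * Y_bar      # multiplicative depression event
        ws.append(w)
    mean = sum(ws) / len(ws)
    var = sum((x - mean) ** 2 for x in ws) / len(ws)
    return mean, math.sqrt(var)

mean_w, width = sample_walk()
predicted = math.sqrt(r * Y_bar) * w_star  # Gaussian width of Eq. (28)
```

With these illustrative values Eq. (28) gives √(rȲ) w⋆ ≈ 7% of w⋆, and the sampled width of the walk should agree with it to within statistical error.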

C. Stability analysis within mean-field theory

The condition v(ŵ) = 0 for the peak position ŵ of the synaptic-weight distribution gives rise to

    Δ Ȳ λ̂₁ = r ŵ Y(λ̂₁) λ̄,    (29)

where λ̂₁ = λ₁(ŵ). Using the mean-field response function (17) for the synapses, we get

    ŵ = w⋆ [1 + u(τ_D + τ_R) λ̂₁] / [1 + u(τ_D + τ_R) λ̄].    (30)

It is straightforward to verify that when the mean-field synaptic strength is given by w̄ = w⋆, the peak position is given by ŵ = w̄, providing a self-consistent solution. To determine the stability of the solution, we start with a small deviation δw̄ of w̄ from w⋆ and find the resultant deviation δŵ of the peak position ŵ from w̄. Assuming the neuron response function λ(G) is smooth around the mean-field total conductance Ḡ = K w̄ Ȳ, the firing frequency of the postsynaptic neuron can be expanded as

    λ̂₁ ≈ λ̄ + λ′(Ḡ) Ȳ δŵ,    (31)

where λ′ is the first derivative of the mean-field response function λ(G). The mean-field response function Y(λ) for the active transmitter can also be expanded as

    Y(λ̂₁) = Ȳ [1 + Y′(λ̄) λ′(Ḡ) δŵ].    (32)

The condition v(ŵ) = 0 then results in

    w̄ − w⋆ = δw̄ = {w⋆ λ′(Ḡ) [Ȳ/λ̄ − Y′(λ̄)] − 1} δŵ.    (33)

For the w̄ = w⋆ solution to be stable, δŵ and δw̄ should be of opposite signs so that the perturbation can be damped. Thus, we need

    w⋆ λ′(Ḡ) [Ȳ/λ̄ − Y′(λ̄)] < 1,    (34)

or, for given total synaptic conductance Ḡ,

    w⋆ < w⋆_b(Ḡ) ≡ {λ′(Ḡ) [Ȳ/λ̄ − Y′(λ̄)]}⁻¹.    (35)

For the integrate-and-fire model we have considered, in the noise-dominated regime λ = λ_N and λ′ = 0, so the condition (35) is satisfied. In the large-Ḡ limit, the firing frequency λ increases linearly with Ḡ while the transmitter fraction Y for the TUM model saturates, leading to a linear w⋆_b(Ḡ). For self-consistency, the membrane conductance is given by Ḡ = K w⋆ Ȳ for the w̄ = w⋆ solution, and it is thus left to the constant K (representing the number of connections per neuron) to determine whether the solution will remain stable. For the range of the parameters we have considered, the solution w̄ = w⋆ remains stable for large w⋆ as long as K > 1. However, it is straightforward to verify from Eq. (35) that w̄ = w⋆ cannot remain a stable solution at large w⋆ for models in which, for example, λ(G) saturates for large G, or Y(λ) increases linearly with λ for large λ (when it no longer represents a fraction). Under these situations, this analysis suggests that one may find runaways or pile-ups of the synaptic weights in the system.

V. CORRECTION TO THE MEAN-FIELD THEORY

In the mean-field considerations, we characterized all neurons with a single firing frequency λ̄, all synapses with a constant active transmitter fraction Ȳ, and the "network" by a constant strength w̄. This is an oversimplification even within the mean-field approach. We retain the characterization of the environment by a single w̄. However, a neuron within a network experiences, instead of a constant total synaptic conductance, the bombardment of synaptic conductance pulses issuing from the spike trains of corresponding presynaptic neurons. This shot-noise-like fluctuation is most prominent when the amplitude of the pulses is comparable with the mean of the total synaptic conductance. When the number K of afferent synapses per neuron and the total frequency λ_total of presynaptic events are small, the mean level of the total synaptic conductance is comparable with the amplitude of each individual pulse. Consequently, the assumption of a constant total synaptic conductance will be inadequate.

In the mean-field approximation above, we also assumed that the neurons fire following Poisson statistics. However, when integrate-and-fire neurons are driven by a constant total synaptic conductance (as in the limit of large K and λ_total), the time between threshold crossings will also be constant, and the resulting spike trains of the neuron will be periodic. Combined with the Poisson external noise, the intervals between firings will follow a Poisson distribution only up to the period of threshold crossings, where there will be a δ-function representing the periodic firings. Nonetheless, Poisson and periodic firings result in the same mean level of active transmitter fraction in the limits of low and high firing frequencies, while the corrections in the intermediate frequency regime remain insignificant for the TUM model that we considered (see the dashed line in Fig. 1). We will thus retain the assumption of Poisson firing for the following discussion.

A. Effect of fluctuation in total synaptic conductance

Apart from making an analytically tractable system, a mean-field approach does not necessarily require approximating the total synaptic conductance as a constant in time. Here, we consider the total synaptic conductance G as a time-dependent function and model the dynamics of G approximately with

    dG/dt = −G/τ_D + J S_total,    (36)

where S_total is a Poisson spike train with the frequency λ_total accounting for all presynaptic events, and the jump size J is a constant modeling the increase of the total synaptic conductance due to each spike in S_total. (In the spirit of mean-field theory, we simplify the dynamics of G by assuming a constant jump size J. In general, the jump size of the total synaptic conductance for each presynaptic event is a random variable depending on the active transmitter fraction of the firing presynaptic neuron.) The average total synaptic conductance Ḡ is related to the frequency λ_total and jump size J by

    Ḡ = λ_total τ_D J,    (37)

and the ensemble of G(t) can be characterized by two independent parameters from the set λ_total, J, and Ḡ. Here, we choose Ḡ and J to characterize the ensemble, and the resulting ensemble-averaged firing frequency λ_out(Ḡ, J) of the postsynaptic neuron is a generalization of the response function λ(Ḡ) considered in Section III. To evaluate λ_out(Ḡ, J) for given values of Ḡ and J, we simulate the dynamics (36) with a Poisson spike train of frequency λ_total inferred from Eq. (37) to obtain the time-dependent total synaptic conductance G(t), which we use in the dynamics of the membrane potential (3); the measured average firing frequency of the postsynaptic neuron gives us the value of λ_out(Ḡ, J). The process is repeated to generate the surface

    λ̄ = λ_out(Ḡ, J)    (38)

as plotted in Fig. 5; compare to the line representing λ(G) as plotted in Fig. 1. The jump size J is an estimator of the fluctuations in the total synaptic conductance G(t), and in the J = 0 limit, the two-parameter response function λ_out(Ḡ, J) reduces to the single-parameter response function λ_out(Ḡ, 0) = λ(Ḡ).

As we are considering a mean-field network with K afferent synapses per neuron, the total frequency of presynaptic events for a postsynaptic neuron is given by λ_total = K λ̄, where λ̄ is the mean-field firing frequency of any neuron. Combining with the relation (37), this gives us the condition

    λ̄ = Ḡ/(K τ_D J),    (39)

as plotted in Fig. 5, forming a planar surface in the logarithmic-scale Ḡ–J–λ̄ space.

Whatever the value of J, the synaptic response function Y(λ̄) determines the mean total synaptic conductance Ḡ = K w̄ Y(λ̄) when λ̄ is given for the mean-field network. The inverse

    λ̄ = Y⁻¹(Ḡ/(K w̄)),    (40)

also plotted in Fig. 5 with Eq. (19), represents the convex surface invariant along the J-axis. The intersections of the three surfaces (38), (39), and (40) determine the fixed points of the system (marked with a circle in Fig. 5). While the neuron response function λ_out(Ḡ, J) is evaluated numerically, we determine the fixed points in the mean-field firing frequency numerically from the intersections of the three surfaces in Fig. 5, varying the mean synaptic weight w̄ for a given number K of afferent synapses per neuron. The resulting phase diagram is plotted in Fig. 6 for various K values.

Figure 5: Extended space of Fig. 1 when the fluctuation in synaptic conductance G is considered (represented by the jump size J). The extended neuron response function (38), the synapse response function Y(λ̄) = Ḡ/(K w̄), and the relation (39) determine the stationary states of a mean-field network (marked by a circle on the surfaces).

Figure 6: Numerical mean-field phase diagram (firing frequency λ̄ versus synaptic weight w̄) for various numbers K (as labeled in the legend) of afferent synapses per neuron under fluctuating total synaptic conductance (drawn lines), compared with the analytical results when the total synaptic conductance G = Ḡ is assumed to be constant (dashed line).
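The jump-and-decay process (36) and the mean-conductance relation (37) can be illustrated with a short simulation. This is a sketch only: the parameter values below (τ_D, J, λ_total) are hypothetical, and a simple fixed-step Euler scheme stands in for the event-driven integration one would use in production.

```python
import random

# Hypothetical parameter values for the jump-and-decay process of Eq. (36)
tau_D = 0.02        # conductance decay time (s)
J = 0.1             # conductance jump per presynaptic event
lam_total = 100.0   # total presynaptic event frequency (Hz)

def time_averaged_G(T=200.0, dt=2e-4, seed=2):
    """Integrate dG/dt = -G/tau_D + J*S_total, with S_total a Poisson
    spike train of rate lam_total; return the time average of G(t)."""
    rng = random.Random(seed)
    G, acc = 0.0, 0.0
    steps = int(T / dt)
    for _ in range(steps):
        if rng.random() < lam_total * dt:  # a presynaptic spike this step
            G += J
        G -= (G / tau_D) * dt              # exponential decay toward zero
        acc += G
    return acc / steps

G_avg = time_averaged_G()
G_bar = lam_total * tau_D * J  # Eq. (37): ensemble-mean conductance
```

The measured time average should reproduce Ḡ = λ_total τ_D J up to discretization and sampling error; in the full calculation, the sampled G(t) then drives the membrane-potential dynamics (3) to yield λ_out(Ḡ, J).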

The hysteresis vanishes at K = K_C ≈ 10, and the transition from noise dominance to persistent activity becomes continuous. In our case, the fluctuation comes from the fact that the number of afferent synapses is finite. We expect that, in a real biological network, other sources of fluctuations can play a similar role in smoothing out the first-order phase transition, predicted by the analytical mean-field theory of Section III, that separates a noise-dominated regime from persistent activity.
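The bistability underlying this hysteresis can be illustrated with the graphical construction of Section III, solving λ̄ = λ(K w̄ Y(λ̄)) for self-consistent firing rates. The sketch below uses illustrative stand-in response functions, not the paper's analytical forms: a noise floor plus a sigmoidal activated branch for the neuron, and a TUM-like saturating active fraction for the synapse; all parameter values are hypothetical.

```python
import math

# Illustrative (not the paper's) response functions and parameters
lam_N = 1.0                       # noise-driven firing rate (Hz)
u, tau_D, tau_R = 0.5, 0.02, 0.5  # hypothetical synapse parameters

def lam_of_G(G):
    """Neuron response: noise floor plus a sigmoidal activated branch."""
    return lam_N + 80.0 / (1.0 + math.exp(-(G - 1.0) / 0.1))

def Y_of_lam(lam):
    """Synapse response: TUM-like saturating active-transmitter fraction."""
    return u * tau_D * lam / (1.0 + u * (tau_D + tau_R) * lam)

def fixed_points(K, w_bar, grid=20000, lam_max=100.0):
    """Locate self-consistent rates lam = lam_of_G(K*w_bar*Y_of_lam(lam))
    from sign changes of the residual on a uniform grid."""
    fps, prev = [], None
    for i in range(grid + 1):
        lam = lam_max * i / grid
        res = lam_of_G(K * w_bar * Y_of_lam(lam)) - lam
        if prev is not None and prev * res < 0:
            fps.append(lam)
        prev = res
    return fps

low = fixed_points(K=100, w_bar=0.001)  # weak coupling: one noise-dominated state
high = fixed_points(K=100, w_bar=0.4)   # strong coupling: bistable region
```

For these stand-in functions, weak coupling yields a single noise-dominated fixed point, while strong coupling yields three (low, unstable middle, and persistent-activity branches), the structure that produces hysteresis and that fluctuations at small K smooth out.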

B. Correction to synaptic weight distribution

Besides smoothing out the first-order phase transition in a mean-field network, the fluctuation in total synaptic conductance can also influence the stationary distribution of synaptic weights. We have noted that the width of the synaptic weight distribution predicted by the simplest mean-field approach (about 1% of w⋆) is too small in comparison with results of simulations on fully connected plastic integrate-and-fire networks (about 5% of w⋆ [43]). One of the approximations we made in arriving at Eq. (27) is to replace the time-dependent Y (or the fractions A and B that it represents) with its average value. Such an approximation can be improved by considering the second moment of Y in deriving the diffusion coefficient D in Eq. (23). We expect the fluctuation in Y to be important when the mean-field firing frequency is low. In this limit the time-dependent Y(t) is a sum of pulses

    Y(t) = u Σᵢ θ(t − tᵢ) e^{−(t−tᵢ)/τ_D},    (41)

where θ(t) is a step function with θ(t ≥ 0) = 1 and θ(t < 0) = 0, {tᵢ} are the firing times of the neuron, and ⟨Y⟩ = u τ_D λ. When the overlap of the pulses can be ignored, for example, in the noise-dominated regime, the mean square of Y is given by ⟨Y²⟩ = u² τ_D λ/2, which should replace the squares of Y in the diffusion coefficient (23). This leads to a width of the synaptic weight distribution of

    Δw ≃ √(r ⟨Y²⟩/Ȳ) w⋆ = √(u r/2) w⋆    (42)

instead of Eq. (28). For the values of the parameters we have considered, this represents about 5% of w⋆ and is similar to the simulation results from a fully connected network [43] in the noise-dominated regime. We have verified the analytical result (42) through numerical simulations of a random-walk process for the synaptic weight described by the dynamics (11). As shown in Fig. 7, the scaling in the plasticity rate r continues to be described by Δw ∼ r^α with α ≃ 1/2 in all cases, while the amplitudes match well for the noise-dominated fixed point but show small deviations when the system is persistently active. In general, we can expect fluctuations to play an important role in determining the synaptic weight distribution of a plastic network. Here we have shown that a significant correction can be accounted for by considering the fluctuation in the neural transmission for a limited system of a single synapse between two neurons with Poisson spike trains. Even without an analytical solution, such a system can be easily simulated to any desirable accuracy to obtain the stationary synaptic weight distribution as well as the average synaptic weight ⟨w⟩, which provides the self-consistency condition (21) for the mean-field theory.

Here we have preserved the basic structure of the mean-field approach under the additional consideration of fluctuations in synaptic conductance by extending the neuron and synapse response functions to functions of two variables. The additional degree of freedom, the average jump size J of active transmitter fractions, was fixed by the condition (39) coming from the nature of such a fluctuation and does not enter the final result (42) explicitly.

Figure 7: Numerical results (symbols) for the width of the synaptic-weight distribution, Δw, as a function of the plasticity rate, r, in a limited system of a single synapse between two neurons with Poisson spike trains following the dynamics (11) and operating at three of the fixed points predicted by the mean-field theory (as marked by corresponding symbols in Fig. 2). The results are compared with the analytical predictions (lines of corresponding colors) from Eq. (42) (drawn lines) and Eq. (28) (dashed line).

VI. SUMMARY AND CONCLUSION

Models of plastic neural networks can generally be broken down into three parts: (i) the modeling of the dynamics of the neuron state (represented by the membrane potential), (ii) the modeling of synaptic transmission, and (iii) the plasticity rules. We have applied a mean-field theory that follows this basic structure. In the mean-field framework of the current study, we reduce the dynamics of the neuron state to a response function λ(G) representing the mean firing frequency of a neuron when it is driven by a constant total synaptic conductance G.

Similarly, the dynamics of synaptic transmission is also reduced to a characteristic response function Y(λ) representing the mean fraction of active neural transmitter given the Poisson firing frequency λ of the presynaptic neuron. With the mean-field synaptic weight w̄ of a network with K afferent synapses per neuron and the mean-field firing frequency λ̄, the average total synaptic conductance is given by Ḡ = K w̄ Y(λ̄). This allows one to plot the synapse response function Y(λ) along with the neuron response function λ(G) (see Fig. 1) to determine self-consistently the mean-field firing frequency λ̄ as a function of w̄ and K.

The mean-field firing frequency λ̄ and the mean synaptic weight w̄ characterize the mean-field network completely. We then use this mean-field network as an environment and investigate the dynamics of the spike-timing-dependent plasticity (8) of a single synapse in such an environment. Assuming Poisson statistics for the spike trains, the dynamics (8) can be viewed as describing a type of random walk in synaptic weight space, where the frequency of potentiation and the jump size of depression depend on the weight of this synapse. The distribution of synaptic weights can be calculated numerically for the stationary state using straightforward simulations of the single-synapse system. Under the approximation of small jumps, the stationary distribution can be calculated analytically from the Fokker–Planck equation arising from the master equation. The self-consistency of the mean-field approach is completed by requiring that the mean of the synaptic weight distribution ⟨w⟩ reproduces the mean-field synaptic weight w̄ of the environment.

We have considered a network of integrate-and-fire neurons [35] coupled through synapses with TUM dynamics [27] within the mean-field framework outlined above. We chose integrate-and-fire neurons for their simplicity and wide use. We chose the TUM model of neural transmission for its features and flexibility as noted above. On the simplest level, for a static network, our mean-field approach amounts to finding intersects of the neuron response function and the synapse response function, each of which can be calculated separately from the particular neuron model and synapse model selected. Our choice of model happens to allow us to find analytical forms of the mean-field response functions for the neurons and synapses, and predicts a first-order phase transition from a noise-dominated regime to a regime of persistent activity as the mean-field synaptic weight is increased. (However, see below.) In general, these response functions can be computed numerically in a straightforward fashion for a variety of neuron and synapse models regardless of the number of empirical parameters they might carry. In Fig. 1, we showed the results of such a calculation for the Morris–Lecar neuron used in [34], which was defined by more than twenty parameters.

For the dynamics of the synaptic weight, for specificity, we follow the spike-timing-dependent plasticity rules proposed by van Rossum, Bi, and Turrigiano [22], with additive potentiation and multiplicative depression. The Fokker–Planck analysis of the corresponding random-walk process predicts a narrow Gaussian distribution for the synaptic weight centered around w⋆, the control parameter (12) entering the plasticity rules. [See, e.g., Eq. (11).] We apply a small perturbation to the ⟨w⟩ = w̄ = w⋆ solution, analyze its stability analytically and numerically, and find it to be stable for any number of afferent synapses per neuron K ≥ 1 for the model and parameter ranges we considered. However, the analytical expression (35) for the stability, which is generic for any neuron and synapse response functions, also suggests that for models in which the neuron firing rate λ(G) saturates for large conductance G (for example, models with a refractory period for the firing of neurons), or where the synapse response function Y(λ) does not saturate for large λ (for example, when the effect of presynaptic firings is additive and Y no longer represents a fraction), the fixed point w̄ = w⋆ cannot remain stable for large w⋆. It is then possible to have runaway or pile-up in the resulting synaptic weight distribution.

For a network of finite K and low overall frequency λ_total of presynaptic events, the "shot noise" due to the discrete nature of presynaptic spikes is not negligible, and one at least needs to expand the description of the mean-field response of a neuron to include temporal fluctuations. We have done this approximately via a two-variable function, e.g., λ_out(Ḡ, J), where J, representing fluctuations, describes the size of the jumps in a neuron's total synaptic conductance for each presynaptic event. We evaluate this two-variable response function numerically for a single neuron, modeling its total synaptic conductance with a simple stochastic jump-and-decay process (36). The results suggest the disappearance of the first-order transition when the number K of afferent synapses per neuron is less than a critical value K_C ≈ 10. In realistic situations fluctuations can be expected to smear out the first-order transition. When a corresponding fluctuation is considered for the variable Y governing jump sizes in the plasticity rules (11), the Fokker–Planck approach predicts a broader Gaussian distribution, which has a width similar to the observed width in full network simulations [43]. The results of extensive simulations on a fully connected plastic network will be published elsewhere.

The inclusion of fluctuations at some level, such as within our extended mean-field theory (see Sec. V), hints at a possible critical state of the system at the endpoint of a first-order transition line, in analogy with the vapor-pressure curve of a fluid; see, e.g., [25]. While criticality in neural avalanches has been observed by Beggs and Plenz [31], within the extended mean-field analysis employed in the current study the system does not appear to organize into such a critical state without a requisite tuning of the fluctuation amplitude and perhaps the plasticity parameters, e.g., w⋆. This is in contrast with the suggestion that such criticality should be self-organized [31, 44]. It thus will be of great interest to find missing elements, possibly in the plasticity rules, that could dynamically push the network close to a critical state. Along this direction, one can expect a variety of model candidates, which can be easily subjected to the type of mean-field scheme outlined in the current study to provide additional qualitative and semi-quantitative insight into their plausibility. We also note in passing that "all-to-all" network simulations using the plasticity rules and neuron modeling described in this paper have not revealed evidence for a self-organized critical state. In that regard, sparse networks would appear to be better suited, but mean-field analysis such as presented here will mainly have only qualitative use. Our simulations of integrate-and-fire and other neuronal networks with the plasticity rules used here will be presented elsewhere.

As noted in the Introduction, the current study cannot begin to address how brains function or form. As a real brain is never uniform or stationary, we do not expect the model systems presented here to address or reproduce its dynamics. However, there are cultured networks consisting of hundreds of neurons with virtually "all-to-all" interactions [28, 29, 30, 31, 32]. The study of these networks in vitro is an important steppingstone towards the understanding of more complicated networks. The dynamics of these cultured networks are on a scale very much accessible to current neural-network modeling, such as the one presented here, and simulation, as noted, to be presented elsewhere.

Acknowledgments

We are grateful for helpful discussions and insights from Profs. Guo-Qiang Bi, Jonathan E. Rubin, G. Bard Ermentrout, and Dr. Richard C. Gerkin during the course of our work. This research was supported in part by Computational Resources on PittGrid (www.pittgrid.pitt.edu).

[1] T. P. Vogels, K. Rajan, and L. Abbott, Annual Review of Neuroscience 28, 357 (2005).
[2] M. I. Rabinovich, P. Varona, A. I. Selverston, and H. D. I. Abarbanel, Reviews of Modern Physics 78, 1213 (2006).
[3] L. F. Abbott and S. B. Nelson, Nature Neuroscience 3, 1178 (2000).
[4] P. D. Roberts and C. C. Bell, Biological Cybernetics 87, 392 (2002).
[5] Y. Dan and M.-M. Poo, Neuron 44, 23 (2004).
[6] D. J. Amit and N. Brunel, Network: Computation in Neural Systems 8, 373 (1997).
[7] R. C. Malenka and R. A. Nicoll, Science 285, 1870 (1999).
[8] M. A. Lynch, Physiological Reviews 84, 87 (2004).
[9] G. L. Collingridge, J. T. R. Isaac, and Y. T. Wang, Nature Reviews Neuroscience 5, 952 (2004).
[10] D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory (Wiley, New York, 1949).
[11] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge University Press, 2002).
[12] E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (The MIT Press, 2006), 1st ed.
[13] G.-Q. Bi and M.-M. Poo, Annual Review of Neuroscience 24, 139 (2001).
[14] N. Caporale and Y. Dan, Annual Review of Neuroscience 31, 25 (2008).
[15] A. Amarasingham and W. B. Levy, Neural Computation 10, 25 (1998).
[16] S. Song, K. D. Miller, and L. F. Abbott, Nature Neuroscience 3, 919 (2000).
[17] J. Rubin, D. D. Lee, and H. Sompolinsky, Physical Review Letters 86, 364 (2001).
[18] T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner, Neural Computation 19, 639 (2007).
[19] K. I. Blum and L. F. Abbott, Neural Computation 8, 85 (1996).
[20] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, Nature 383, 76 (1996).
[21] R. Kempter, W. Gerstner, and J. L. van Hemmen, Physical Review E 59, 4498 (1999).
[22] M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, Journal of Neuroscience 20, 8812 (2000).
[23] W. M. Kistler and J. L. van Hemmen, Neural Computation 12, 385 (2000).
[24] J. B. Angevine and C. W. Cotman, Principles of Neuroanatomy (Oxford University Press, USA, 1981), 1st ed.
[25] K. Huang, Statistical Mechanics (John Wiley and Sons (WIE), 1988), 2nd ed.
[26] A. Burkitt, Biological Cybernetics 95, 1 (2006).
[27] M. Tsodyks, A. Uziel, and H. Markram, Journal of Neuroscience 20, RC50 (2000).
[28] G.-Q. Bi and M.-M. Poo, Journal of Neuroscience 18, 10464 (1998).
[29] R. Segev, Y. Shapira, M. Benveniste, and E. Ben-Jacob, Physical Review E 64, 011920 (2001).
[30] O. Shefi, I. Golding, R. Segev, E. Ben-Jacob, and A. Ayali, Physical Review E 66, 021905 (2002).
[31] J. M. Beggs and D. Plenz, Journal of Neuroscience 23, 11167 (2003).
[32] P.-Y. Lai, L. C. Jia, and C. K. Chan, Physical Review E 73, 051906 (2006).
[33] G. A. Baker, Quantitative Theory of Critical Phenomena (Academic Press, Boston, 1990).
[34] V. Volman, R. C. Gerkin, P.-M. Lau, E. Ben-Jacob, and G.-Q. Bi, Physical Biology 4, 91 (2007).
[35] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (MIT Press, Cambridge, Mass., 2001).
[36] J.-P. Pfister and W. Gerstner, Journal of Neuroscience 26, 9673 (2006).
[37] N. G. van Kampen, Stochastic Processes in Physics and Chemistry (Elsevier, 2007).
[38] R. C. Froemke, I. A. Tsay, M. Raad, J. D. Long, and Y. Dan, Journal of Neurophysiology 95, 1620 (2006).
[39] E. Izhikevich, IEEE Transactions on Neural Networks 15, 1063 (2004).
[40] N. Fourcaud and N. Brunel, Neural Computation 14, 2057 (2002).
[41] N. Brunel and D. Hansel, Neural Computation 18, 1066 (2006).
[42] H. Cateau and A. D. Reyes, Physical Review Letters 96, 058101 (2006).
[43] C.-C. Chen and D. Jasnow, to be published.
[44] D. R. Chialvo, Physica A: Statistical Mechanics and its Applications 340, 756 (2004).
[45] In this article, the symbols λ and Y, when used as a function, e.g., λ(G) or Y(λ̄), always represent the response functions of the neuron and synapse, respectively. Otherwise, they are only variables standing for a firing frequency or an active transmitter fraction.