
Stochastic Ising model with plastic interactions

Eugene Pechersky (a,b), Guillem Via (a), Anatoly Yambartsev (a). (a) Institute of Mathematics and Statistics, University of São Paulo, Brazil. (b) Institute for Information Transmission Problems, Russian Academy of Sciences, Russia.

2nd Workshop NeuroMat, November 25, 2016

Phenomenon: Strengthening of the synapses between co-active neurons

This phenomenon is known today as Hebbian plasticity; it is a form of Long-Term Potentiation (LTP) and of activity-dependent plasticity. Martin et al. (2000) and Neves et al. (2008) review some of the experimental evidence showing that memory attractors are formed by means of LTP.

Models for memory attractors: Hopfield (1982) proposed a model to study the dynamics of attractors and the storage capacity of neural networks by means of the Ising model (for a review of results on the model see Brunel et al. (2013)). Each neuron is represented by a spin whose up and down states correspond to high and low firing rates, respectively. The cell assembly is then represented by the set of vertices of the network, the engram by the connectivity matrix, and the attractor by the stable spin configurations. Hopfield gave a mathematical expression for a connectivity matrix that supports a given set of attractors chosen a priori. However, the phase in which such connectivity is built up through synaptic plasticity mechanisms is not considered within his framework.

To the best of our knowledge, analytical results on neural networks with plastic synapses in which the learning phase is considered are restricted to models of binary neurons with binary synapses. We could not find in the literature any analytical result on neural networks with non-binary synapses, or one using the Ising model with plastic interactions.

Model: We present a model of a network of binary point neurons, also based on the Ising model. In our case, however, the connections between neurons are plastic, so their strengths change as a result of neural activity. In particular, the form of the transitions of the coupling constants resembles a basic Hebbian plasticity rule, as described by Gerstner and Kistler (2002). The model is therefore mathematically tractable while reproducing several features of learning and memory in neural networks.

The model combines the stochastic dynamics of spins on a finite graph with the dynamics of the coupling constants between adjacent sites. The dynamics are described by a non-stationary continuous-time Markov process. Let $G = (V, E)$ be a finite undirected graph without self-loops. To each vertex $v \in V$ we associate a spin $\sigma_v \in \{-1, 1\}$, and to each edge $e = (v, v') \in E$ we associate a coupling constant $J_e \equiv J_{vv'} \in \mathbb{Z}$. These constants are often called exchange energy constants; here we will also use the term strength for them.

[Figure: two adjacent vertices $v$ and $w$ with spins $\sigma_v$, $\sigma_w$, joined by an edge of strength $J_{vw}$.]
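To fix ideas, here is a minimal sketch of such a state in Python; the graph, the seed, and all names are our own illustrative choices, not part of the talk.

```python
import random

# Toy instance of the model: a cycle on 4 vertices.
# Any finite undirected simple graph G = (V, E) works.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]  # undirected edges, no self-loops

random.seed(1)
sigma = {v: random.choice([-1, 1]) for v in V}  # spin configuration in {-1,1}^V
J = {e: 0 for e in E}                           # strength configuration in Z^E

def incident(v):
    """Edges incident to v, each paired with the opposite endpoint."""
    return [(e, e[0] + e[1] - v) for e in E if v in e]
```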

Configurations and state space:
a configuration of spins is $\boldsymbol{\sigma} = (\sigma_v,\ v \in V) \in \{-1,1\}^V$;
a configuration of strengths is $\boldsymbol{J} = (J_e,\ e \in E) \in \mathbb{Z}^E$;
the state space $\mathcal{A} = \{-1,1\}^V \times \mathbb{Z}^E$ is the set of all possible pairs of configurations of spins and strengths.

The following function, the weight for a sign flip at $v$, will play a key role in further definitions:
$$\eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) = \sigma_v \sum_{v' :\, v' \sim v} J_{vv'} \sigma_{v'}.$$
For example, if $v$ has exactly two neighbours $w$ and $w'$, this reads $\eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) = J_{vw}\sigma_v\sigma_w + J_{vw'}\sigma_v\sigma_{w'}$.

Transition rates: for a given state $(\boldsymbol{\sigma}, \boldsymbol{J}) \in \mathcal{A}$,
a spin flip $\sigma_v \to -\sigma_v$ occurs with rate
$$c_v(\boldsymbol{\sigma}, \boldsymbol{J}) = \frac{1}{1 + \exp\left(2\eta_v(\boldsymbol{\sigma}, \boldsymbol{J})\right)};$$
a strength change $J_{vv'} \to J_{vv'} + \sigma_v \sigma_{v'}$ occurs with constant rate $\nu_{vv'}(\boldsymbol{\sigma}, \boldsymbol{J}) \equiv \nu$.
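Continuing the sketch above, the weight $\eta_v$ and both rates translate directly into code, and a Gillespie-style step samples one event of the continuous-time dynamics. The value of the rate $\nu$ (here `NU`) is an arbitrary illustrative choice.

```python
import math

NU = 0.5  # constant strength-change rate nu; arbitrary illustrative value

def eta(v, sigma, J):
    """Weight for a sign flip at v: sigma_v * sum over v'~v of J_{vv'} * sigma_{v'}."""
    return sigma[v] * sum(J[e] * sigma[w] for e, w in incident(v))

def flip_rate(v, sigma, J):
    """c_v(sigma, J) = 1 / (1 + exp(2 * eta_v)); exponent capped to avoid overflow."""
    return 1.0 / (1.0 + math.exp(min(2 * eta(v, sigma, J), 700)))

def gillespie_step(sigma, J):
    """Sample one event of the continuous-time process and apply it in place.

    Spin v flips at rate c_v; the strength of each edge (v, w) moves by
    sigma_v * sigma_w at constant rate NU (the Hebbian update).
    Returns (event, holding_time).
    """
    rates = [(("flip", v), flip_rate(v, sigma, J)) for v in V]
    rates += [(("edge", e), NU) for e in E]
    total = sum(r for _, r in rates)
    u, acc = random.uniform(0, total), 0.0
    for event, r in rates:
        acc += r
        if u <= acc:
            break
    if event[0] == "flip":
        sigma[event[1]] *= -1
    else:
        e = event[1]
        J[e] += sigma[e[0]] * sigma[e[1]]
    return event, random.expovariate(total)
```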

Continuous-time Markov process: $\xi(t) = (\boldsymbol{\sigma}(t), \boldsymbol{J}(t))$.

Discrete-time Markov chain: $\xi_m = (\boldsymbol{\sigma}(m), \boldsymbol{J}(m))$, the embedded Markov chain, with transitions:
a spin flip $\sigma_v \to -\sigma_v$ occurs with probability $c_v(\boldsymbol{\sigma}, \boldsymbol{J}) / D(\boldsymbol{\sigma}, \boldsymbol{J})$;
a strength change $J_{vv'} \to J_{vv'} + \sigma_v \sigma_{v'}$ occurs with probability $\nu / D(\boldsymbol{\sigma}, \boldsymbol{J})$, where

$$D(\boldsymbol{\sigma}, \boldsymbol{J}) = |E|\nu + \sum_{v \in V} c_v(\boldsymbol{\sigma}, \boldsymbol{J})$$
for the given state $(\boldsymbol{\sigma}, \boldsymbol{J}) \in \mathcal{A}$; note that $|E|\nu < D(\boldsymbol{\sigma}, \boldsymbol{J}) \le |E|\nu + |V|$.

Theorem 1: The Markov chain $\xi_m$ (and hence the process $\xi(t)$) is transient.

The proof uses the Lyapunov function criterion for transience; see Fayolle et al. (1995), Menshikov et al. (2017).
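Before turning to the criterion, a quick sanity check of $D$ and its bounds in the running sketch; note that one step of the embedded chain is exactly the event part of a Gillespie step, since the jump chain discards the holding times.

```python
def D(sigma, J):
    """Total jump rate D(sigma, J) = |E| * nu + sum over v of c_v(sigma, J)."""
    return len(E) * NU + sum(flip_rate(v, sigma, J) for v in V)

# Sanity check of the bounds |E| nu < D <= |E| nu + |V|:
d = D(sigma, J)
assert len(E) * NU < d <= len(E) * NU + len(V)

# One step of the embedded chain xi_m (the holding time is discarded):
event, _ = gillespie_step(sigma, J)
```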

For a discrete-time Markov chain $\mathcal{L} = (\zeta_m,\ m \in \mathbb{N})$ with state space $\Sigma$ to be transient it is necessary and sufficient that there exist a measurable positive function (the Lyapunov function) $f(\alpha)$ on the state space, $\alpha \in \Sigma$, and a non-empty set $A \subset \Sigma$, such that the following inequalities hold true:

(L1) $\mathbb{E}[f(\zeta_{m+1}) - f(\zeta_m) \mid \zeta_m = \alpha] \le 0$ for any $\alpha \notin A$;
(L2) there exists $\alpha \notin A$ such that $f(\alpha) < \inf_{\beta \in A} f(\beta)$.

Moreover, for any initial $\alpha \notin A$,
$$\mathbb{P}(\tau_A < \infty \mid \zeta_0 = \alpha) \le \frac{f(\alpha)}{\inf_{\beta \in A} f(\beta)},$$
where $\tau_A$ denotes the hitting time of the set $A$.

Proof of Theorem 1: choose $N$ such that $e^{2N}\nu \ge |V|(N + 1)$; the Lyapunov function is then defined as

$$f(\boldsymbol{\sigma}, \boldsymbol{J}) = \begin{cases} \dfrac{1}{\sum_{v \in V} \eta_v(\boldsymbol{\sigma}, \boldsymbol{J})}, & \text{if } \eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) > N \text{ for all } v \in V, \\[6pt] \dfrac{1}{|V|\,N}, & \text{otherwise,} \end{cases}$$
and the set $A$ is defined as

$$A = \{(\boldsymbol{\sigma}, \boldsymbol{J}) \in \mathcal{A} :\ \min_{v \in V} \eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) \le N\}.$$
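With $f$ and $A$ in hand, condition (L1) can be checked numerically: on a finite graph the one-step expectation is a finite sum over all possible transitions. A sketch continuing the code above; the choice of $N$ follows the condition $e^{2N}\nu \ge |V|(N+1)$, and the test state is our own arbitrary example.

```python
import itertools

# Smallest N with exp(2N) * nu >= |V| * (N + 1):
N = next(n for n in itertools.count(1)
         if math.exp(2 * n) * NU >= len(V) * (n + 1))

def f(sigma, J):
    """Lyapunov function from the proof of Theorem 1."""
    etas = [eta(v, sigma, J) for v in V]
    if all(h > N for h in etas):
        return 1.0 / sum(etas)
    return 1.0 / (len(V) * N)

def in_A(sigma, J):
    return min(eta(v, sigma, J) for v in V) <= N

def drift(sigma, J):
    """Exact one-step drift E[f(xi_{m+1}) - f(xi_m) | xi_m = (sigma, J)]."""
    d, total = D(sigma, J), 0.0
    for v in V:  # spin flips, probability c_v / D each
        s2 = dict(sigma); s2[v] = -sigma[v]
        total += (flip_rate(v, sigma, J) / d) * (f(s2, J) - f(sigma, J))
    for v, w in E:  # strength changes, probability NU / D each
        J2 = dict(J); J2[(v, w)] += sigma[v] * sigma[w]
        total += (NU / d) * (f(sigma, J2) - f(sigma, J))
    return total

# A state outside A: all spins +1 and large equal strengths, so that
# eta_v = 2 * (N + 3) > N for every vertex of the cycle.
sigma_plus = {v: 1 for v in V}
J_big = {e: N + 3 for e in E}
assert not in_A(sigma_plus, J_big)
print(drift(sigma_plus, J_big))  # expected to be <= 0, in line with (L1)
```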

Then (L1) and (L2) hold true.

Let $\tau$ be the freezing time, $\tau := \max\{m \ge 1 :\ \boldsymbol{\sigma}(m-1) \neq \boldsymbol{\sigma}(m)\}$, with the convention $\max \emptyset = 0$.
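An empirical look at the freezing time in the toy model: run the chain and record the step of the last spin flip. The horizon of 20000 steps is an arbitrary choice for illustration.

```python
def last_flip_step(n_steps=20000):
    """Run the jump chain for n_steps and return the step of the last spin
    flip (0 if none occurred): an empirical stand-in for the freezing time tau."""
    sig = {v: random.choice([-1, 1]) for v in V}
    Jc = {e: 0 for e in E}
    tau = 0
    for m in range(1, n_steps + 1):
        event, _ = gillespie_step(sig, Jc)
        if event[0] == "flip":
            tau = m
    return tau

print([last_flip_step() for _ in range(5)])  # typically small: the spins freeze early
```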

Theorem 2: $\mathbb{P}(\tau < \infty) = 1$.

Proof of Theorem 2: it uses the same function $f$ and the same set $A$ as in the proof of Theorem 1, with $\mathcal{A} \setminus A = \{(\boldsymbol{\sigma}, \boldsymbol{J}) :\ \eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) > N \text{ for all } v \in V\}$.

[Figure: the region $\mathcal{A} \setminus A$ sketched in the $(\eta_1, \eta_2)$-plane, above the level $N$ on both axes.]

Moreover, for any $(\boldsymbol{\sigma}, \boldsymbol{J}) \in \mathcal{A} \setminus A$,

$$\mathbb{P}(\tau_A < \infty \mid \xi_0 = (\boldsymbol{\sigma}, \boldsymbol{J})) \le \frac{f(\boldsymbol{\sigma}, \boldsymbol{J})}{\inf_{\beta \in A} f(\beta)} = \frac{\left(\sum_{v \in V} \eta_v(\boldsymbol{\sigma}, \boldsymbol{J})\right)^{-1}}{\left(|V|\,N\right)^{-1}} \le \frac{N}{\min_{v \in V} \eta_v(\boldsymbol{\sigma}, \boldsymbol{J})} \le \frac{N}{N+1} < 1,$$
and hence
$$\mathbb{P}(\tau_A = \infty \mid \xi_0 = (\boldsymbol{\sigma}, \boldsymbol{J})) \ge \frac{1}{N+1}.$$

Define
$$s(\boldsymbol{\sigma}, \boldsymbol{J}) = \sum_{v \in V} \eta_v(\boldsymbol{\sigma}, \boldsymbol{J}).$$

If $(\boldsymbol{\sigma}, \boldsymbol{J})$ is such that $\eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) < -|V|/2$ for some $v \in V$, then
$$\mathbb{E}[s(\xi_{m+1}) - s(\xi_m) \mid \xi_m = (\boldsymbol{\sigma}, \boldsymbol{J})] \ge 1/2.$$

[Figure: the region $\{\min_{v \in V} \eta_v < -|V|/2\}$ sketched in the $(\eta_1, \eta_2)$-plane, with marks at $N$ and $-|V|/2$ on both axes.]
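The same exact-expectation routine applies to $s$; a sketch checking the claimed lower bound in a state with a strongly negative $\eta_v$ (the specific configuration is our own choice).

```python
def s(sigma, J):
    """s(sigma, J) = sum over v of eta_v(sigma, J)."""
    return sum(eta(v, sigma, J) for v in V)

def drift_s(sigma, J):
    """Exact one-step drift E[s(xi_{m+1}) - s(xi_m) | xi_m = (sigma, J)]."""
    d, total = D(sigma, J), 0.0
    for v in V:
        s2 = dict(sigma); s2[v] = -sigma[v]
        total += (flip_rate(v, sigma, J) / d) * (s(s2, J) - s(sigma, J))
    for v, w in E:
        J2 = dict(J); J2[(v, w)] += sigma[v] * sigma[w]
        total += (NU / d) * (s(sigma, J2) - s(sigma, J))
    return total

# A state with eta_0 < -|V|/2 = -2: all spins +1 and strongly negative
# strengths on the two edges at vertex 0, so eta_0 = -6.
sig = {v: 1 for v in V}
Jc = {e: 0 for e in E}
Jc[(0, 1)] = Jc[(3, 0)] = -3
print(drift_s(sig, Jc))  # expected to be >= 1/2
```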

If $(\boldsymbol{\sigma}, \boldsymbol{J})$ is such that $\eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) < 0$, then the probability of the spin flip $\sigma_v \to -\sigma_v$ (which takes $\eta_v \to -\eta_v$) is at least $1/(2D(\boldsymbol{\sigma}, \boldsymbol{J})) > 1/(2(\nu|E| + |V|))$.

Let $\mathcal{B} = \{(\boldsymbol{\sigma}, \boldsymbol{J}) :\ \min_{v \in V} \eta_v(\boldsymbol{\sigma}, \boldsymbol{J}) < -\frac{|V|}{2}\} \subset A$. Thus if the initial state $(\boldsymbol{\sigma}, \boldsymbol{J}) \in \mathcal{B}$, then
$$\mathbb{P}(\tau_{\mathcal{A} \setminus \mathcal{B}} < \infty \mid \xi_0 = (\boldsymbol{\sigma}, \boldsymbol{J})) = 1.$$

Theorem 3: As a consequence of Theorem 2, for any $e \in E$, almost surely
$$\lim_{m \to \infty} \frac{J_e(m)}{m} = \frac{1}{|E|}, \qquad \lim_{t \to \infty} \frac{J_e(t)}{t} = \nu.$$
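The linear growth in Theorem 3 is easy to observe numerically: once the spins freeze, essentially every event is a strength update, so each $|J_e(t)|$ grows at rate $\nu$ (the sign of the increment is the frozen product $\sigma_v \sigma_w$, anticipating Theorem 4). A sketch continuing the code above; the time horizon is arbitrary.

```python
def strength_growth(t_max=2000.0):
    """Simulate the continuous-time process up to time t_max and report
    J_e(t) / t for every edge; by Theorem 3 the magnitude approaches nu."""
    sig = {v: random.choice([-1, 1]) for v in V}
    Jc = {e: 0 for e in E}
    t = 0.0
    while t < t_max:
        _, dt = gillespie_step(sig, Jc)
        t += dt
    return {e: Jc[e] / t for e in E}

print(strength_growth())  # entries close to +NU or -NU
```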

Theorem 4: $j^{\infty}_{vv'} = \sigma^{\infty}_v \sigma^{\infty}_{v'}$ for any $(v, v') \in E$, where $\boldsymbol{\sigma}^{\infty}$ denotes the final (frozen) spin configuration and $j^{\infty}_{vv'}$ the asymptotic direction of growth of the strength $J_{vv'}$.

References:

1. Martin, S., Grimwood, P., Morris, R., 2000. Synaptic plasticity and memory: an evaluation of the hypothesis. Annual Review of Neuroscience 23 (1), 649-711.

2. Neves, G., Cooke, S. F., Bliss, T. V., 2008. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality. Nature Reviews Neuroscience 9 (1), 65-75.

3. Hopfield, J. J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79 (8), 2554-2558.

4. Brunel, N., Del Giudice, P., Fusi, S., Parisi, G., Tsodyks, M., 2013. Selected Papers of Daniel Amit (1938-2007). World Scientific Publishing Co., Inc.

5. Gerstner, W., Kistler, W. M., 2002. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press.

6. Fayolle, G., Malyshev, V. A., Menshikov, M. V., 1995. Topics in the Constructive Theory of Countable Markov Chains. Cambridge University Press.

7. Menshikov, M., Popov, S., Wade, A., 2017. Non-homogeneous Random Walks: Lyapunov Function Methods for Near-Critical Stochastic Systems. Cambridge University Press, to appear.