Theoretical Neuroscience:

Single neuron dynamics and computation

STAT 42510 - CPNS 35510

Instructor: Nicolas Brunel [email protected]

TA: Max Gillett [email protected]

Practical details

First course in a 3-course series:

• Single neuron dynamics and computation (Fall, STAT 42510 - CPNS 35510) Instructor: Nicolas Brunel

• Network dynamics and computation (Winter, STAT 42520 - CPNS 35520) Instructor: Nicolas Brunel

• Statistics and information theory (Spring, STAT 42600 - CPNS 35600) Instructor: Stephanie Palmer

Practical details

• Problem sets will be issued every Tuesday from Week 2 to Week 9 (total of 8), due the next Tuesday in class.

• A take-home final will be issued on December 1, due on December 15 in my office (Jones 116A)

• Grades: Problem sets will account for 50% of the grade; the take-home final for the remaining 50%.

• TA: Max Gillett ([email protected])

• Office hours: Thursday 5-6pm, Jones 308 (Max Gillett); Friday 4-5pm, Jones 116A (Nicolas Brunel)

Introduction

• Brains, scales, structure, dynamics

• What is computational/theoretical neuroscience?

• Brief history of computational/theoretical neuroscience

• Outline of the course

The human brain: a network of 10^11 neurons connected by 10^15 synapses

Brain networks: from C. elegans to humans

Animal        # neurons   # synapses
C. elegans    302         5,000
Fruit fly     10^5        10^7
Honey bee     10^6        10^9
Mouse         10^8        10^12
Cat           10^9        10^13
Human         10^11       10^15

Brain networks: from sensory to motor

Spatial scales

∼10cm Whole brain

∼1cm Brain structure/cortical area

100µm - 1mm Local network

10µm - 1mm Neuron

100nm - 1µm Sub-cellular compartment

∼1-10nm Molecule

Experimental tools

The whole brain level: structures

Cerebral cortex: network of areas

Area level

• Areas are interconnected networks of local networks (‘columns’)

Local network level

• Size ∼ 1 cubic mm
• Total number of cells ∼ 100,000
• Total number of synapses ∼ 10^9 (∼10,000 per neuron)
• Types of cells:
  – pyramidal cells - excitatory (80%)
  – interneurons - inhibitory (20%)
• Cells connect potentially to all other cell types (E→E, E→I, I→E, I→I)
• Connection probability ∼ 10%

Neuron level

• Neuron = complex tree-like structure with many compartments (e.g. dendritic spines)

Subcellular compartment level

• Subcellular compartments (e.g. dendritic spines) contain a huge diversity of molecules (in particular protein kinases and phosphatases) whose interactions define complex networks

Temporal scales

Days-Years Long-term memory

Seconds-Minutes Short-term (working) memory

100ms - 1s Behavioral time scales/Reaction times

∼ 10ms Single neuron/synaptic time scales

∼1ms Action potential duration; local propagation delays

≪1ms Channel opening/closing

Submillisecond

• Molecular time scales (channel opening/closing; diffusion of neurotransmitter in synaptic cleft; etc)

Millisecond

• Width of action potentials; axonal delays in local networks

Tens of ms

• Synaptic decay time constants; membrane time constants of neurons; axonal delays for long-range connections

Hundreds of ms

• Behavioral time scales (e.g. motor response to a stimulus)

Seconds-minutes

• Short-term (working) memory

Days-years

• Long-term memory

Questions in theoretical/computational neuroscience

What? Describe in a mathematically compact form a set of experimental observations.

How? Understand how a neural system produces a given behavior.

Why? Understand why a neural system performs the way it does, using e.g. tools from information theory.

Mathematical tools

• Single neuron/synapse models, rate models: systems of coupled differential equations. Tools of dynamical systems (linear stability analysis, bifurcation theory)

• Networks: Graph theory, linear algebra. Large networks: tools of statistical physics

• Noise (ubiquitous at all levels of the nervous system): Statistics, probability theory, stochastic processes.

• Coding: Information theory

A few historical landmarks

Before mid-1980s: a few isolated pioneers

• 1907: Louis Lapicque (leaky integrate-and-fire neuron)

• 1940s: Warren McCulloch, Walter Pitts (binary neurons, neural networks)

• 1940s: Donald Hebb (neuronal assemblies, synaptic plasticity)

• 1940s-50s: Alan Hodgkin, Andrew Huxley (model for AP generation)

• 1950s: Wilfrid Rall (cable theory)

• 1950s: Frank Rosenblatt (perceptron)

• 1960s: Horace Barlow (coding in sensory systems)

• 1960s-70s: David Marr (systems-level models of cerebellum/neocortex/hippocampus)

• 1970s: Hugh Wilson, Jack Cowan, Shun-ichi Amari (‘rate models’)

• 1980s: John Hopfield, Daniel Amit, Hanoch Gutfreund, Haim Sompolinsky (associative memory models)

A few historical landmarks

From the 1990s: establishment of a field

• First annual computational neuroscience conference in 1992 (now two main conferences/year, CNS and Cosyne, each around 500 participants per year)

• Many specialized journals created from the beginning of the 1990s (Neural Computation; Network; Journal of Computational Neuroscience; Frontiers in Computational Neuroscience; etc)

• Annual computational neuroscience summer schools since mid-1990s (Woods Hole, ACCN in Europe, Okinawa, CSHL China)

• Birth of major Computational Neuroscience centers in the US (1990s, Sloan-Swartz), Israel (HU, 1990s) and Europe (Gatsby Unit, Bernstein Centers, 2000s)

• Now widely accepted as an integral part of neuroscience (theory papers routinely published in major neuroscience journals)

Week 1    Sep 27   Introduction
          Sep 29   Biophysics, Hodgkin-Huxley (HH) model
Week 2    Oct 4    2D models for spike generation
          Oct 6    Ionic channels and firing behaviors
Week 3    Oct 11   From HH to leaky integrate-and-fire (LIF) neurons
          Oct 13   Stochastic dynamics of neurons
Week 4    Oct 18   From LIF to ‘firing rate’ and binary neurons
          Oct 20   No class
Week 5    Oct 25   Coding I
          Oct 27   Coding II
Week 6    Nov 1
          Nov 3
Week 7    Nov 8    Synapses I
          Nov 10   Synapses II
Week 8    Nov 15   Synaptic plasticity I
          Nov 17   Synaptic plasticity II
Week 9    Nov 22
          Nov 24   No class
Week 10   Nov 29
          Dec 1

Spatial scales of the brain

∼10cm Whole brain

∼1cm Brain structure/cortical area

100µm - 1mm Local network

10µm - 1mm Neuron

100nm - 1µm Sub-cellular compartment

∼1-10nm Molecule

Single neuron models

• Hodgkin-Huxley neuron:

  C \frac{dV}{dt} = -I_L(V) - \sum_a I_a(V, m_a, h_a) + I_{syn}(t)

  I_L(V) = g_L (V - V_L)

  I_a(V, m_a, h_a) = g_a (V - V_a)\, m_a^{x_a} h_a^{y_a}

  \tau_{m_a}(V) \frac{dm_a}{dt} = -m_a + m_a^{\infty}(V), \qquad \tau_{h_a}(V) \frac{dh_a}{dt} = -h_a + h_a^{\infty}(V)

• Integrate-and-fire neuron:

  C \frac{dV}{dt} = -g_L (V - V_L) + I_{syn}(t)

  with a fixed threshold V_t, reset V_r, and refractory period.

• How do single neurons work?

• What are the mechanisms of action potential generation?

• How do neurons transform synaptic inputs into a train of action potentials?
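A minimal numerical sketch of the leaky integrate-and-fire model above, integrated with the forward Euler method; all parameter values (capacitance, leak conductance, threshold, reset, input current) are illustrative assumptions rather than values from the course.

```python
import numpy as np

# Minimal leaky integrate-and-fire simulation (forward Euler).
# All parameter values below are illustrative assumptions.
C = 1.0        # membrane capacitance (nF)
g_L = 0.05     # leak conductance (uS); tau_m = C/g_L = 20 ms
V_L = -70.0    # leak (resting) potential (mV)
V_t = -50.0    # spike threshold (mV)
V_r = -60.0    # reset potential (mV)
t_ref = 2.0    # refractory period (ms)
dt = 0.1       # integration time step (ms)
T = 500.0      # total simulated time (ms)
I_syn = 1.1    # constant input current (nA), chosen to be suprathreshold

V = V_L
last_spike = -np.inf
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    if t - last_spike < t_ref:
        V = V_r                              # clamp during refractory period
        continue
    # C dV/dt = -g_L (V - V_L) + I_syn
    V += dt / C * (-g_L * (V - V_L) + I_syn)
    if V >= V_t:                             # threshold crossing: emit a spike
        spike_times.append(t)
        V = V_r
        last_spike = t

print(f"{len(spike_times)} spikes, mean rate = {1000 * len(spike_times) / T:.1f} Hz")
```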

Coding

Tools from information theory:

• Shannon mutual information

• Fisher information

• What carries information in single neurons’ spike trains?

• How much information is contained in a single neuron’s response?

• What is the optimal way of transmitting information about a given input, given its statistics and other constraints? Are neural systems optimal/close to optimal at transmitting information?
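As a toy illustration of the Shannon mutual information mentioned above, the sketch below computes I(S;R) for a binary stimulus and a discrete response (e.g. a spike count) from an assumed joint distribution; the numbers are invented for the example.

```python
import numpy as np

# Toy mutual information I(S;R) between a binary stimulus S and a
# discrete response R (e.g. a spike count). The joint distribution
# below is invented purely for illustration.
p_joint = np.array([
    [0.30, 0.15, 0.05],   # P(S=0, R=r) for r = 0, 1, 2
    [0.05, 0.15, 0.30],   # P(S=1, R=r)
])

p_s = p_joint.sum(axis=1, keepdims=True)   # marginal over stimuli
p_r = p_joint.sum(axis=0, keepdims=True)   # marginal over responses

# I(S;R) = sum_{s,r} P(s,r) log2[ P(s,r) / (P(s) P(r)) ]
mask = p_joint > 0
mi = np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_s * p_r)[mask]))
print(f"I(S;R) = {mi:.3f} bits")
```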

Single synapse models

• ‘Conductance-based’

  I_{i,syn}(t) = \sum_{a=E,I} (V - V_a) \sum_{j,k} g_{a,ij}\, s_a(t - t_j^k)

  \tau_a \dot{s}_a = ...

• ‘Current-based’

  I_{i,syn}(t) = \sum_{a=E,I} \sum_{j,k} J_{a,ij}\, s_a(t - t_j^k)

• Popular choices of s:
  – delayed difference of exponentials
  – delayed delta function
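A minimal numerical sketch of the current-based variant above, assuming a single-exponential s that jumps at each presynaptic spike (one of the choices listed); the spike times, efficacy, and time constant are illustrative choices, not values from the course.

```python
import numpy as np

# Current-based synapse: I_syn(t) = J * s(t), with tau_s * ds/dt = -s
# and s -> s + 1 at each presynaptic spike time. Parameters and the
# presynaptic spike train below are illustrative assumptions.
tau_s = 5.0                                   # synaptic decay time constant (ms)
J = 0.5                                       # synaptic efficacy (nA)
dt = 0.1                                      # time step (ms)
T = 100.0                                     # total time (ms)
pre_spikes = [10.0, 12.0, 40.0, 41.0, 42.0]   # presynaptic spike times (ms)

n_steps = int(T / dt)
spike_steps = {round(t / dt) for t in pre_spikes}
s = 0.0
I_syn = np.zeros(n_steps)
for step in range(n_steps):
    if step in spike_steps:
        s += 1.0                  # jump at each presynaptic spike
    s += dt * (-s / tau_s)        # exponential decay between spikes
    I_syn[step] = J * s

print(f"peak synaptic current = {I_syn.max():.2f} nA")
```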

• How do synapses work?

• What are the mechanisms of synaptic plasticity on short time scales?

Synaptic plasticity

Synaptic efficacy can be modified in various ways:
• Spike timing (STDP experiments)
• Firing rate (BCM, Sjostrom et al, etc)
• Post-synaptic V (pairing, etc)

• Can we capture these experimental data using simplified ‘plasticity rules’? (one such rule is sketched below)

• What are the mechanisms of induction of synaptic plasticity?
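One common example of such a simplified rule is pair-based spike-timing-dependent plasticity (STDP), in which the sign and size of the weight change depend on the pre/post spike-time difference; the amplitudes, time constants, and spike trains in the sketch below are illustrative assumptions, not values from the course.

```python
import numpy as np

# Pair-based STDP: potentiation when the presynaptic spike precedes the
# postsynaptic spike, depression otherwise. Amplitudes and time constants
# are illustrative.
A_plus, A_minus = 0.01, 0.012     # LTP / LTD amplitudes
tau_plus, tau_minus = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:                    # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:                         # post before (or with) pre -> depression
        return -A_minus * np.exp(dt / tau_minus)

# Net weight change summed over all pairs of two short spike trains.
pre_spikes = [10.0, 30.0, 50.0]
post_spikes = [12.0, 28.0, 55.0]
dw = sum(stdp_dw(tp, tq) for tp in pre_spikes for tq in post_spikes)
print(f"net weight change: {dw:+.4f}")
```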

Learning

[Figure: a Purkinje cell receiving input patterns A and B via parallel fibres with synaptic weights w_1 ... w_4, producing output 1 or 0 depending on whether the summed input crosses the threshold θ ± κ]

• The neuron has N synapses/inputs, with synaptic weights w_i
• It sums its inputs linearly
• It emits a spike if its summed input exceeds a threshold θ ∼ 10 mV
• It has to learn a set of p ≡ αN input/output associations
  – by appropriate choice of its synaptic weights w_i ≥ 0,
  – in a robust way (as measured by a margin κ, i.e. threshold θ ± κ);
  – this can be achieved using simple learning algorithms

• How much information can a single neuron/a population of neurons store?

• What kinds of learning algorithms/rules allow optimal storage to be reached?
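A minimal sketch of one such simple learning algorithm: a perceptron-style rule that tries to store p = αN random input/output associations with a margin κ and nonnegative weights, as in the model above; the pattern statistics, threshold, learning rate, and margin are assumptions made for this example.

```python
import numpy as np

# Perceptron-style learning of p = alpha*N random input/output associations
# with nonnegative weights and a robustness margin kappa. Pattern statistics,
# threshold, learning rate and margin are illustrative assumptions.
rng = np.random.default_rng(0)
N = 100                        # number of synapses/inputs
alpha = 0.2                    # load: p = alpha * N associations
p = int(alpha * N)
theta = 0.25 * N               # fixed firing threshold (assumed)
kappa = 0.1                    # robustness margin
eta = 0.1                      # learning rate

X = rng.binomial(1, 0.5, size=(p, N)).astype(float)   # input patterns
y = rng.choice([-1.0, 1.0], size=p)                    # desired outputs
w = np.zeros(N)                                        # synaptic weights

for epoch in range(1000):
    errors = 0
    for mu in range(p):
        h = X[mu] @ w - theta            # summed input minus threshold
        if y[mu] * h <= kappa:           # association not yet stored robustly
            w += eta * y[mu] * X[mu]     # perceptron update
            w = np.maximum(w, 0.0)       # enforce w_i >= 0
            errors += 1
    if errors == 0:
        print(f"all {p} associations stored after {epoch + 1} epochs")
        break
else:
    print("did not converge within 1000 epochs")
```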

Useful books

• Tuckwell, “Introduction to Theoretical Neurobiology”, Vols. I & II (Cambridge U. Press, 1988)

• Amit, “Modeling brain function: The world of attractor neural networks” (Cambridge U. Press, 1989)

• Hertz, Krogh, and Palmer, “Introduction to the Theory of Neural Computation” (Addison-Wesley, 1991 - now from: Perseus Book Group and Westview Press)

• Rieke, Warland, de Ruyter van Steveninck, and Bialek, “Spikes: Exploring the Neural Code” (MIT Press, 1997)

• Koch, “Biophysics of Computation: Information Processing in Single Neurons” (Oxford U. Press, 1999)

• Dayan and Abbott, “Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems” (MIT Press, 2001)

• Izhikevich, “Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting” (MIT Press, 2007)

• Ermentrout and Terman, “Mathematical Foundations of Neuroscience” (Springer, 2010)

• Gerstner, Kistler, Naud and Paninski “Neuronal Dynamics: From single neurons to networks and models of cognition” (Cambridge U. Press, 2014)