NEURAL ENCODING

KAI LEI AND STEPHANIE WOROBEY

Math 390 Mathematical Biology, Fall 2015 Advisor: Viktor Grigoryan Simmons College

Introduction

There are approximately 100 billion neurons in the human body, and each one is essential to our perception of the senses. The responses they initiate to internal and external stimuli enable us to see, hear, smell, and touch. The major function of neurons is to transmit information throughout the nervous system. They achieve this by generating electrical pulses called action potentials, or spikes, which appear in various patterns. This paper examines how action potentials represent stimulus attributes, a question that can be approached from two opposite perspectives: neural encoding or neural decoding. In particular, we look at neural encoding, which is the map from stimulus to response. Before introducing the algorithms that model different aspects of neural encoding, we will go over the fundamental mechanism by which neurons communicate with each other, as well as the basic structures of a neuron and their functions.

Properties of a Neuron and their Functions

[2]

Date: Fall 2015.

The cell body of a neuron houses the biological machinery that keeps the cell alive. Dendrites are fibers that project out of the cell body and collect signals from other neurons. Signals can be classified as either excitatory or inhibitory. [2] Excitatory signals tell the neuron to generate an electrical impulse, whereas inhibitory signals tell the neuron not to generate one. When enough excitatory signals are received, the neuron reaches its threshold of activation, and an electrical impulse, also known as an action potential, is generated and travels down the axon to the axon terminals.

The synapse, also known as the synaptic gap, is a small space between the axon terminals of one neuron and the dendrites of another. The axon terminals contain sacs of neurotransmitters, which are naturally occurring chemicals that specialize in transmitting information between neurons [2]. Once the action potential reaches the axon terminals, the neurotransmitter is released and proceeds to bind to the receptors of the receiving neuron. Next, we introduce methods of recording neuronal responses.

How to Record Neuronal Responses

There are two ways to record action potentials electrically: intracellularly or extracellularly. The intracellular method involves first connecting a neuron to an electrolyte-filled glass electrode. There are then two ways to take the reading. The first is to insert a sharp electrode into the cell. The second is to seal a patch electrode to the surface of the cell membrane; the seal causes the membrane to break, giving the tip of the patch electrode access to the interior of the cell. It is observed that subthreshold membrane potentials can be seen in the soma of a neuron but not in the axon; therefore spikes, but not subthreshold potentials, propagate regeneratively down the axon. In extracellular readings, the electrode never pierces the membrane of the cell; it is simply placed near the neuron. Recordings of this kind can only capture action potentials and are incapable of recording subthreshold potentials.

Figure 1. Three simulated responses from a neuron. The top trace shows an intracellular electrode reading from the soma. The bottom trace shows an intracellular reading from an electrode connected to the axon. The middle trace is an extracellular reading, in which no subthreshold potentials are present. [3]

Complications of Neural Coding

It is often difficult to demonstrate the relationship between stimulus and response because neuronal responses can vary significantly from trial to trial, even when the stimulus presented is the same during every trial. Factors that cause responses to vary include levels of arousal or attention and the effects of various biophysical and cognitive processes. For example, when the same person brushes one's arm repeatedly at the same spot, the person being brushed might still feel something different every time. This variability makes it unlikely that we can determine and predict exactly when an action potential will occur. The model we present below "accounts for the fact that different spike sequences are evoked by a specific stimulus" [3]. In other words, we present a probabilistic model that accommodates the stochastic nature of neuronal responses. We also need to take into account the fact that whenever a stimulus is presented, usually more than one neuron responds. Therefore, aside from investigating the firing pattern of one particular neuron, we also need to look at how these firing patterns relate to each other.

Firing Rates

The first function in this model is the neural response function. This function is constructed under three assumptions. The first is that even though action potentials differ in duration, amplitude, and shape, we treat them as identical stereotyped events. The second is that an action potential sequence can be represented by a list of the times at which the spikes occurred, because this timing determines how and when a spike transmits information. For n spikes, we use the notation t_i, for i = 1, ..., n, to represent the spike times. The last assumption is that each trial on which spikes are recorded starts at time 0 and ends at time T, which puts each t_i in the interval between 0 and T, inclusive. Based on these three assumptions, the spike sequence is represented as a sum of idealized spikes using the delta function, as follows:

(1)   $\rho(t) = \sum_{i=1}^{n} \delta(t - t_i)$

The delta function is a generalized function on the real line that is zero everywhere except at 0, with an integral of one over the entire real line. This function is used because its character mirrors what a spike looks like: "it is sometimes thought of as an infinitely high, infinitely thin spike at the origin". Here ρ(t) is the neural response function, and we can use it to re-express sums over spikes as integrals over time. For a well-behaved function h(t), we have the following identity:

(2)   $\sum_{i=1}^{n} h(t - t_i) = \int_0^T h(\tau)\,\rho(t - \tau)\,d\tau$

where the integral runs over the duration of the trial. Using the basic definition of the δ function, we get:

(3)   $\int \delta(t - \tau)\,h(\tau)\,d\tau = h(t)$

provided that the limits of the integral surround the point t; if they do not, the integral is zero. Well-behaved functions are functions that do not violate the three assumptions mentioned above.

Neuronal responses are treated probabilistically because the action potentials generated by the same stimulus can vary from trial to trial. If we seek the probability for a spike to occur at any one specified time, we get a value of 0, because spike times are continuous variables. Instead, we seek the probability for a spike to occur within a specified time interval between t and t + ∆t. We use the notation P[ ] to represent probabilities, p[ ] to represent probability densities, and angle brackets ⟨ ⟩ to represent an average over trials with the same stimulus. Applying these notations, we use p[t]∆t to represent the probability that a spike occurs between times t and t + ∆t, where p[t] is the single-spike probability density. The quantity p[t] can also be interpreted as the firing rate of the cell, which we denote r(t). One way to approximate r(t) is to determine the fraction of trials with a given stimulus on which a spike occurred between times t and t + ∆t. For small ∆t and a large number of trials, this method produces a good approximation by the Law of Large Numbers, which states that as the number of trials grows, the relative-frequency approximation of P(A) gets closer to the theoretical value [1]. Next, we use ⟨ρ(t)⟩ to represent the trial-averaged neural response function, which gives the fraction of trials on which a spike occurs. Using this relationship, we get the following:

(4)   $r(t)\,\Delta t = \int_t^{t+\Delta t} \langle \rho(\tau) \rangle \, d\tau$

And for well-behaved functions h, we can replace ⟨ρ(t)⟩ with r(t), giving:

(5)   $\int h(\tau)\,\langle \rho(t - \tau) \rangle\, d\tau = \int h(\tau)\, r(t - \tau)\, d\tau$

This identity is important because it demonstrates the relationship between ⟨ρ(t)⟩ and r(t). Another important quantity in neural encoding is the spike-count rate r, which we get by "counting the number of action potentials that appear during a trial and dividing by the duration of the trial" [3]:

(6)   $r = \frac{n}{T} = \frac{1}{T} \int_0^T \rho(\tau)\, d\tau$

The spike-count rate r can also be viewed as the time average of ρ(t) over the duration of the trial. The average firing rate ⟨r⟩ is obtained by averaging the spike-count rate over trials:

(7)   $\langle r \rangle = \frac{\langle n \rangle}{T} = \frac{1}{T} \int_0^T \langle \rho(\tau) \rangle\, d\tau = \frac{1}{T} \int_0^T r(t)\, dt$
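These quantities are easy to compute numerically. The following sketch (in Python, with hypothetical toy spike trains) estimates p[t]∆t as the fraction of trials containing a spike in [t, t + ∆t), and computes the spike-count rate r = n/T for each trial along with the trial average ⟨r⟩ = ⟨n⟩/T of equation (7):

```python
import numpy as np

def rate_estimate(spike_trains, t, dt):
    """Fraction of trials with at least one spike in [t, t + dt):
    an approximation of p[t]*dt for small dt and many trials."""
    hits = sum(np.any((s >= t) & (s < t + dt)) for s in spike_trains)
    return hits / len(spike_trains)

T = 2.0  # trial duration in seconds (toy value)
trials = [np.array([0.12, 0.50, 1.20]),        # 3 spikes -> r = 1.5 Hz
          np.array([0.49, 0.80, 1.10, 1.90])]  # 4 spikes -> r = 2.0 Hz

spike_count_rates = [len(s) / T for s in trials]  # equation (6)
avg_rate = np.mean(spike_count_rates)             # equation (7)
print(rate_estimate(trials, 0.45, 0.1))  # both trials spike in [0.45, 0.55) -> 1.0
print(spike_count_rates, avg_rate)       # [1.5, 2.0] 1.75
```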

Measuring Firing Rates: Linear Filter and Filter Kernel

The linear filter and the filter kernel (or window function) are two ways to approximate the firing rate r(t). The image below shows the firing rate approximated by sliding a rectangular window function along the spike train, where ∆t is 100 ms. To apply this method, take a window of size ∆t, slide it along the length of the spike train, and count the number of spikes that fall within the window at each location.

As stated by Dayan and Abbott, "the firing rate approximated in this way can be expressed as the sum of the window function over times t_i for i = 1, 2, ..., n when the n spikes in a particular sequence occurred" [3]:

(8)   $r_{\text{approx}}(t) = \sum_{i=1}^{n} w(t - t_i)$

where the window function is

$w(t) = \begin{cases} 1/\Delta t & \text{if } -\Delta t/2 \le t \le \Delta t/2 \\ 0 & \text{otherwise.} \end{cases}$
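Equation (8) can be sketched directly in code. This is a minimal Python illustration (the spike times are made up for the example), summing the rectangular window w(t − t_i) over the spikes:

```python
import numpy as np

def rect_window_rate(spike_times, t, dt):
    """r_approx(t) = sum_i w(t - t_i), where w(u) = 1/dt for
    -dt/2 <= u <= dt/2 and 0 otherwise (equation 8)."""
    u = t - np.asarray(spike_times)
    return np.count_nonzero((u >= -dt / 2) & (u <= dt / 2)) / dt

spikes = [0.10, 0.14, 0.30]  # toy spike times in seconds
# two spikes fall inside the 100 ms window centered at t = 0.12 s
print(rect_window_rate(spikes, 0.12, 0.1))  # 20.0 spikes/s
```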

[3] The linear filter is an alternative way of expressing the window-function estimate: it is "the integral of the window function times the neural response function." In this way, one can determine how the neural response function at time (t − τ) affects the firing rate.

Tuning Curves

Tuning curve functions are used to characterize the responses of neurons in terms of a single attribute of a stimulus, which we denote s. Writing the average firing rate as a function of s, we obtain the following:

(9)   $\langle r \rangle = f(s)$

which we refer to as the response tuning curve; its main purpose is to predict the average firing rate. It can also be used to characterize neurons in sensory and motor areas. There are multiple response tuning curve functions, and the one we present in detail is the Gaussian tuning curve:

(10)   $f(s) = r_{\max} \exp\left( -\frac{1}{2} \left( \frac{s - s_{\max}}{\sigma_f} \right)^2 \right)$

in which s is the orientation angle of the bar, s_max is the orientation angle evoking the maximum average response rate r_max, and σ_f is the width of the tuning curve. Here is an example of the use of the Gaussian tuning curve:

[3]

Diagram A shows extracellular recordings of a neuron in the primary visual cortex of a monkey. During the recording, a bar of light is shone at different angles across the receptive field of the neuron, the area in which the cell responds to light. The orientation angle of the light dictates how many action potentials are fired. Diagram A is translated into Diagram B using the Gaussian tuning curve form, and Diagram B shows how the average firing rate ⟨r⟩ depends on the orientation of the light stimulus.
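Equation (10) is straightforward to evaluate in code. Here is a minimal Python sketch; the parameter values are hypothetical, chosen only to illustrate the shape of the curve:

```python
import numpy as np

def gaussian_tuning(s, r_max, s_max, sigma_f):
    """Gaussian tuning curve, equation (10):
    f(s) = r_max * exp(-(1/2) * ((s - s_max) / sigma_f)**2)."""
    return r_max * np.exp(-0.5 * ((s - s_max) / sigma_f) ** 2)

# Toy parameters: peak rate 50 Hz at 0 degrees, width 15 degrees.
angles = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
rates = gaussian_tuning(angles, r_max=50.0, s_max=0.0, sigma_f=15.0)
print(rates)  # peaks at s = s_max and falls off symmetrically
```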

What Makes a Neuron Fire?

Up to this point, we have discussed how to measure action potentials. Now we move on to analyzing what, on average, happens before a neuron fires. The spike-triggered average can be used to answer this question. Sensory neurons respond best to rapid changes in stimuli, meaning that they respond only weakly in unchanging environments, or steady states. As stated by Dayan and Abbott, "Steady state responses are highly compressed functions of stimulus intensity, typically with logarithmic or weak power-law dependencies" [3]. Weber was particularly interested in this topic, and studied how different, ∆s, two stimuli had to be for the difference to be perceived. Weber's law states that "for a given stimulus, ∆s was proportional to the magnitude of the stimulus s, so that ∆s/s was constant" [3]. Another law, Fechner's law, describes perceived stimulus intensity: integrating Weber's law shows that "the perceived intensity of a stimulus of absolute intensity s varies as log s" [3]. Sensory systems can adjust so that they operate around the average level of stimulus intensity. When this occurs, the stimulus s(t) is defined so that its time average over the duration of the trial equals zero, a condition we assume in the calculations that follow:

(11)   $\frac{1}{T} \int_0^T s(t)\, dt = 0$

The Spike-Triggered Average

C(τ) is the average value of the stimulus at a time interval τ before a spike is fired. This function is obtained by first computing s(t_i − τ) for each spike occurring at time t_i. Next, these values are summed over all n spikes and the total is divided by n. Finally, we average over trials:

(12)   $C(\tau) = \left\langle \frac{1}{n} \sum_{i=1}^{n} s(t_i - \tau) \right\rangle \approx \frac{1}{\langle n \rangle} \left\langle \sum_{i=1}^{n} s(t_i - \tau) \right\rangle$

This function is used to relate the spike-triggered average to other quantities in neural encoding. Here is a schematic description of the spike-triggered average computation.

[3] The grey rectangles contain the stimulus prior to each of the spikes shown along the time axis; by averaging them, we obtain the waveform of the average stimulus before a spike. Expressing C(τ) as an integral of the stimulus times the neural response function ρ(t), we have the following:

(13)   $C(\tau) = \frac{1}{\langle n \rangle} \int_0^T \langle \rho(t) \rangle\, s(t - \tau)\, dt = \frac{1}{\langle n \rangle} \int_0^T r(t)\, s(t - \tau)\, dt$

This equation is important because it relates the spike-triggered average to the correlation function of the firing rate and the stimulus:

(14)   $Q_{rs}(\tau) = \frac{1}{T} \int_0^T r(t)\, s(t + \tau)\, dt$

Comparing equations (13) and (14), we find:

(15)   $C(\tau) = \frac{1}{\langle r \rangle} Q_{rs}(-\tau)$

where ⟨r⟩ = ⟨n⟩/T is the average firing rate over the set of trials. C(τ) is also known as the reverse correlation function because of the argument −τ.
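The spike-triggered average of equations (12) and (13) can be sketched numerically. In this hypothetical Python example, the stimulus is sampled on a regular grid and the spikes are given as grid indices; C(τ) is computed by averaging the stimulus a lag τ before each spike:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_indices, max_lag):
    """C(tau) for tau = 0, 1, ..., max_lag (in units of the sampling
    step): the average stimulus value a lag tau before each spike."""
    sta = np.zeros(max_lag + 1)
    for lag in range(max_lag + 1):
        vals = [stimulus[i - lag] for i in spike_indices if i - lag >= 0]
        sta[lag] = np.mean(vals)
    return sta

stim = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # toy stimulus samples
spikes = [3, 5]                                   # spikes at indices 3 and 5
print(spike_triggered_average(stim, spikes, max_lag=2))  # [4. 3. 2.]
```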

Spike Trains

Neuronal responses are stochastic in nature, meaning that they are random. Because of this, we introduce some terminology for stochastic processes. A point process is a stochastic process that generates a sequence of events [3], such as action potentials. A renewal process is a point process in which the intervals between successive events are independent; the dependence extends only to the immediately preceding event. When there is no dependence at all on preceding events, so that the events are all statistically independent, the process is known as a Poisson process. The Poisson process is a useful way to approximate stochastic neuronal responses. In particular, we will look at the homogeneous Poisson process, in which the firing rate is constant over time.

The Homogeneous Poisson Process

In the homogeneous Poisson process, the firing rate r is constant. The process therefore generates every sequence of n spikes with equal probability, meaning that the probability P[t_1, t_2, ..., t_n] can be written in terms of P_T[n], the probability that any sequence of n spikes occurs within a trial of duration T [3]. Assuming the spike times are ordered, we obtain the following:

(16)   $P[t_1, t_2, \ldots, t_n] = n!\, P_T[n] \left( \frac{\Delta t}{T} \right)^n$

P_T[n] is the product of the following three factors:
• the probability of generating n spikes within a specified set of n of the M bins,
• the probability of not generating spikes in the remaining M − n bins, and
• the combinatorial factor equal to the number of ways of putting n spikes into M bins.
The probability of a spike occurring in one specific bin is r∆t, so the probability of n spikes appearing in n specific bins is (r∆t)^n. The probability of not having a spike in a given bin is (1 − r∆t), so the probability of leaving the remaining M − n bins without any spikes is (1 − r∆t)^{M−n}. Lastly, the number of ways of putting n spikes into M bins is M!/((M − n)! n!). Combining all these factors, we get the following equation:

(17)   $P_T[n] = \lim_{\Delta t \to 0} \frac{M!}{(M - n)!\, n!} (r \Delta t)^n (1 - r \Delta t)^{M - n}$

As ∆t → 0, M grows without bound, since M∆t = T. Furthermore, we define ε = −r∆t and write M − n ≈ T/∆t (since n is fixed), so

(18)   $\lim_{\Delta t \to 0} (1 - r \Delta t)^{M - n} = \lim_{\varepsilon \to 0} \left( (1 + \varepsilon)^{1/\varepsilon} \right)^{-rT} = e^{-rT} = \exp(-rT)$

since $\lim_{\varepsilon \to 0} (1 + \varepsilon)^{1/\varepsilon} = e$. For large M, $\frac{M!}{(M - n)!} \approx M^n = (T/\Delta t)^n$, and substituting these results into equation (17), we obtain the following:

(19)   $P_T[n] = \frac{(rT)^n}{n!} \exp(-rT)$

which is the Poisson distribution.

This is a graph of the Poisson distribution as a function of T for n = 0, 1, 2, and 5. We see that as n increases, the probability reaches its maximum at larger T values, and that large n values are more likely than small ones for large T.
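Equation (19) can be checked by simulation. The sketch below (Python; the bin width and rate are arbitrary toy values) draws spikes independently in small bins with probability r∆t, exactly as in the derivation above, and compares the empirical distribution of spike counts to the Poisson formula:

```python
import math
import random

def poisson_prob(n, r, T):
    """P_T[n] = ((r*T)**n / n!) * exp(-r*T), equation (19)."""
    return (r * T) ** n / math.factorial(n) * math.exp(-r * T)

random.seed(0)
r, T, dt = 20.0, 0.5, 1e-3          # rate 20 Hz, trial length 0.5 s
n_bins = int(T / dt)
n_trials = 5000
counts = [sum(random.random() < r * dt for _ in range(n_bins))
          for _ in range(n_trials)]
empirical = counts.count(10) / n_trials   # fraction of trials with 10 spikes
print(empirical, poisson_prob(10, r, T))  # both close to 0.125, the mode
```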

The Spike-Train Autocorrelation Function

It is important to be able to detect patterns in spike trains; this is how we study their behavior and draw conclusions about the effects of varying stimuli. The spike-train autocorrelation function measures the distribution of times between any two action potentials in a train, which can reveal much about the behavior of the neuron. This function, as described by Dayan and Abbott, is the autocorrelation of the neural response function with its average over time and trials subtracted out [3]:

(20)   $Q_{\rho\rho}(\tau) = \frac{1}{T} \int_0^T \left\langle \left( \rho(t) - \langle r \rangle \right) \left( \rho(t + \tau) - \langle r \rangle \right) \right\rangle dt$

From the formula we can see that this is the autocovariance of the neural response function at times t and t + τ; that is, it describes how much the function covaries with itself at times t and t + τ.
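A discretized version of equation (20) can be sketched as follows (Python; the binned spike counts are toy data, and a circular shift is used at the boundary for simplicity):

```python
import numpy as np

def spike_train_autocorrelation(rho_trials):
    """Discrete approximation of Q_rho_rho(tau), equation (20): the
    trial-averaged autocovariance of the binned neural response
    function, with the mean rate subtracted out."""
    rho = np.asarray(rho_trials, dtype=float)
    dev = rho - rho.mean()            # subtract <r> over time and trials
    n_bins = rho.shape[1]
    q = np.zeros(n_bins)
    for lag in range(n_bins):
        # pair each time t with t + tau, wrapping circularly at the edge
        q[lag] = np.mean(dev * np.roll(dev, -lag, axis=1))
    return q

# Two trials of an alternating (oscillatory) binned response.
rho_trials = [[0.0, 10.0, 0.0, 10.0],
              [10.0, 0.0, 10.0, 0.0]]
print(spike_train_autocorrelation(rho_trials))  # alternates +25 / -25
```

An oscillatory response produces an oscillatory autocovariance, which is exactly the kind of pattern this function is meant to expose.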

Conclusion

The model presented above is one of the fundamental starting models for neural encoding. In this paper we gave an overview of how neurons function and how to record action potentials. We then described the neural response function, probability densities, and the spike-count rate. We covered two methods of measuring firing rates: the linear filter and the filter kernel. Tuning curves were also introduced as a way to characterize the responses of neurons; they are particularly useful for characterizing the selectivity of neurons in sensory areas with respect to diverse stimulus parameters. The paper also discussed what happens before a neuron fires, including Weber's and Fechner's laws, the spike-triggered average, and the reverse correlation function. Finally, the homogeneous Poisson process and the autocorrelation function were introduced into our model: the first to generate a sequence of events (i.e., action potentials), and the second to examine the distribution of times between two action potentials. Neural encoding has benefited areas such as brain-machine interfaces and the design of training devices for neurological rehabilitation. This model is not only fascinating to mathematicians; it is also medically significant and valuable to the lives of many.

Acknowledgements

This paper is a summary of the neural coding algorithms presented by Peter Dayan and L.F. Abbott. The authors gained useful insights from the text Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems.

References

[1] B. Albright. Essentials of Mathematical Statistics. Jones & Bartlett Learning, 2013.
[2] Amanda Carey. Psych 101 Neuroscience Slides, Simmons College, 2015.
[3] P. Dayan and L.F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2005.

Simmons College E-mail address: [email protected]

Simmons College E-mail address: [email protected]