NEURAL ENCODING

KAI LEI AND STEPHANIE WOROBEY

Math 390 Mathematical Biology, Fall 2015
Advisor: Viktor Grigoryan
Simmons College

Introduction

There are approximately 1000 billion neurons in a human body, and each one of them is essential to our perception of the senses. The responses initiated by internal and external stimuli enable us to see, hear, smell, taste and touch. The major function of neurons is to transmit information throughout the nervous system. They achieve this by generating electrical pulses called action potentials, or spikes, and these spikes appear in various patterns.

Neural coding examines how action potentials represent stimulus attributes, and this can be done from two opposite perspectives: neural encoding or neural decoding. In particular, we are looking at neural encoding, which is the map from stimulus to response. Before introducing the algorithms that model different aspects of neural encoding, we will go over the fundamental mechanism of how neurons communicate with each other, as well as the basic structures of a neuron and their functions.

Properties of a Neuron and their Functions [2]

The cell body of a neuron contains the biological machinery that keeps the cell alive. Dendrites are fibers that project out of the cell body and collect signals from other neurons. Signals can be classified as either excitatory or inhibitory [2]. Excitatory signals tell the neuron to generate an electrical impulse, whereas inhibitory signals tell the neuron not to generate one. When enough excitatory signals are received, the neuron reaches its threshold of activation, and an electrical impulse, also known as an action potential, is generated and travels down the axon to the axon terminals.
The synapse, also known as the synaptic gap, is a small space between the axon terminals of one neuron and the dendrites of another neuron. The axon terminal contains sacs of neurotransmitters, which are naturally occurring chemicals that specialize in transmitting information between neurons [2]. Once the action potential reaches the axon terminals, the neurotransmitter is released and proceeds to bind to the receptors of the receiving neuron. Next, we introduce methods of recording neuronal responses.

How to Record Neuronal Responses

There are two ways to record action potentials electrically: intracellularly or extracellularly. The intracellular method involves connecting a neuron to an electrolyte-filled glass electrode. There are then two ways to take the readings. The first is to insert a sharp electrode into the cell. The second is to seal a patch electrode to the surface of the cell membrane; the seal causes the membrane to break, so that the tip of the patch electrode has access to the interior of the cell. It is observed that subthreshold membrane potentials can be seen in the soma of neurons but not in the axon; therefore spikes, but not subthreshold potentials, propagate regeneratively down axons.

Figure 1. Three simulated responses from a neuron [3]. The top trace shows an intracellular electrode reading from the soma. The bottom trace shows an intracellular reading from an electrode connected to the axon. The middle trace is an extracellular reading, in which no subthreshold potentials are present.

In extracellular readings, the electrode never pierces the membrane of the cell; it is simply placed near the neuron. Recordings of this kind, however, can only capture action potentials, and they are incapable of recording subthreshold potentials.
Complication of Neural Coding

It is often difficult to demonstrate the relationship between stimulus and response, because neuronal responses can vary significantly from trial to trial even when the stimulus presented is the same on every trial. Potential sources of this variability include levels of arousal or attention, the effects of various biophysical or cognitive processes, etc. For example, when the same person brushes one's arm repeatedly at the same spot, the person being brushed might still feel something different every time.

This variability makes it unlikely that we can determine and predict exactly when an action potential will occur. The model we present below "accounts for the probabilities that different spike sequences are evoked by a specific stimulus" [3]. In other words, we present a probabilistic model that accommodates the stochastic nature of neuronal responses.

We also need to take into account the fact that whenever a stimulus is presented, usually more than one neuron responds. Therefore, aside from investigating the firing pattern of one particular neuron, we also need to look at how these firing patterns relate to each other.

Firing Rates

The first function in this model is the neural response function. This function is constructed under three assumptions. The first assumption is that even though action potentials differ in duration, amplitude and shape, we treat them as identical stereotyped events. The second assumption is that an action potential sequence can be represented by a list of the times at which the spikes occurred, because this timing determines how and when a spike transmits information. So for n spikes, we use the notation t_i to represent the spike times.
The last assumption is that during each trial in which spikes are recorded, we start at time 0 and end at time T, which puts t_i in the interval [0, T]. Based on these three assumptions, the spike sequence is represented as a sum of idealized spikes using the delta function, as follows:

(1)    ρ(t) = Σ_{i=1}^{n} δ(t − t_i)

The Dirac delta function is a generalized function on the real number line that is zero everywhere except at 0, with an integral of one over the entire real line. This function is used because its physical nature mirrors what a spike looks like: "it is sometimes thought of as an infinitely high, infinitely thin spike at the origin". ρ(t) is the neural response function, and we can use it to re-express sums over spikes as integrals over time. For a well-behaved function h(t), we have:

(2)    Σ_{i=1}^{n} h(t − t_i) = ∫_0^T h(τ) ρ(t − τ) dτ

where the integral is taken over the length of the trial. Using the basic definition of the δ function, we get:

(3)    ∫ δ(t − τ) h(τ) dτ = h(t)

provided that the limits of the integral surround the point t; if they do not, the integral is zero. Well-behaved functions are functions that do not violate the three assumptions mentioned above.

Neuronal responses are treated probabilistically because the action potentials generated by the same stimulus can vary from trial to trial. If we sought the probability of a spike occurring at one exactly specified time, we would get a value of 0, because spike times are continuous variables. Instead, we seek the probability that a spike occurs within a specified time interval between t and t + Δt. Furthermore, we use the notation P[ ] to represent probabilities and p[ ] to represent probability densities. We use angle brackets, ⟨ ⟩, to represent an average over trials with the same stimulus.
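As a concrete check, the identity in equation (2) can be verified numerically by discretizing ρ(t) into small bins: each delta function becomes a weight of 1/Δt in the bin containing its spike, so that every spike still integrates to one. The spike times, kernel, and parameters below are hypothetical, chosen only for illustration (they are not from the paper), and the kernel is causal so that spikes occurring after time t contribute nothing to either side:

```python
import numpy as np

dt, T = 0.001, 1.0                       # bin width and trial length (s)
t_grid = np.arange(0.0, T, dt)

spike_times = np.array([0.12, 0.30, 0.31, 0.77])   # hypothetical spike times t_i

# Discretized neural response function rho(t): each delta function becomes
# a weight of 1/dt in the bin containing the spike (unit integral per spike).
rho = np.zeros_like(t_grid)
rho[np.round(spike_times / dt).astype(int)] = 1.0 / dt

# A causal, well-behaved kernel h (zero for negative arguments).
def h(s):
    return np.where(s >= 0.0, np.exp(-s / 0.1), 0.0)

def rho_at(s):
    """Look up the binned rho at time s, treating times outside [0, T) as 0."""
    j = int(round(s / dt))
    return rho[j] if 0 <= j < rho.size else 0.0

t = 0.5

# Left side of eq. (2): a sum over the spikes.
lhs = h(t - spike_times).sum()

# Right side of eq. (2): the integral of h(tau) * rho(t - tau) over the trial.
rhs = sum(h(tau) * rho_at(t - tau) for tau in t_grid) * dt

print(abs(lhs - rhs))   # agrees up to rounding error
```

Each spike before t contributes h(t − t_i) to both sides, while the spike at 0.77 s (after t) is killed by the causal kernel, matching the remark that the integral vanishes when its limits do not surround the spike time.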
Applying these notations, we use p[t]Δt to represent the probability that a spike occurs between times t and t + Δt, where p[t] is the single-spike probability density. The quantity p[t] can also be defined as the firing rate of the cell, which we denote r(t).

One way to approximate r(t) is to determine the fraction of trials with a given stimulus on which a spike occurred between times t and t + Δt. For small Δt and large numbers of trials, this method produces a good approximation by the Law of Large Numbers, which states that as the number of trials grows, the relative-frequency approximation of P(A) gets closer to its theoretical value [1].

Next, we use ⟨ρ(t)⟩ to represent the trial-averaged neural response function, which gives the fraction of trials on which a spike occurs. Using this relationship, we get:

(4)    r(t)Δt = ∫_t^{t+Δt} ⟨ρ(τ)⟩ dτ

And for well-behaved functions h, we can replace ⟨ρ⟩ with r:

(5)    ∫ h(τ) ⟨ρ(t − τ)⟩ dτ = ∫ h(τ) r(t − τ) dτ

This relation is important because it demonstrates the connection between ⟨ρ(t)⟩ and r(t).

Another important quantity in neural encoding is the spike-count rate r, which we get by "counting the number of action potentials that appear during a trial" and dividing by the duration of the trial [3]:

(6)    r = n/T = (1/T) ∫_0^T ρ(τ) dτ

The spike-count rate r can also be defined as the time average of ρ(t) over the duration of the trial. The average firing rate is obtained by averaging the spike-count rate over trials:

(7)    ⟨r⟩ = ⟨n⟩/T = (1/T) ∫_0^T ⟨ρ(τ)⟩ dτ = (1/T) ∫_0^T r(t) dt

Measuring Firing Rates: Linear Filter and Filter Kernel

The linear filter and the filter kernel (or window function) are two ways to approximate the firing rate r(t). The image below shows the firing rate approximated by sliding a rectangular window function along the spike train, with Δt = 100 ms.
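To make these estimators concrete, the sketch below simulates spike trains from a hypothetical time-varying rate and then recovers the rate three ways: the trial-averaged binned response of equation (4), the spike-count rate and its trial average from equations (6) and (7), and a 100 ms sliding rectangular window as described above. All names and parameters here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 2.0                     # 1 ms bins, 2 s trials
t_grid = np.arange(0.0, T, dt)

# Hypothetical underlying firing rate r(t): 5 Hz baseline plus a Gaussian bump
# centered at t = 1 s.
r_true = 5.0 + 30.0 * np.exp(-((t_grid - 1.0) ** 2) / (2 * 0.1**2))

# Simulate trials: in each small bin a spike occurs with probability r(t) * dt,
# so responses vary from trial to trial even though the "stimulus" is fixed.
n_trials = 500
spikes = rng.random((n_trials, t_grid.size)) < r_true * dt

# Eq. (4): the trial-averaged response in each bin, divided by dt, estimates r(t).
r_est = spikes.mean(axis=0) / dt

# Eqs. (6)-(7): spike-count rate r = n/T per trial, then its trial average <r>.
r_count = spikes.sum(axis=1) / T
avg_rate = r_count.mean()              # should be near the time average of r_true

# Window-function estimate: slide a 100 ms rectangular window along r_est.
window = np.ones(100) / 100            # 100 bins of 1 ms = 100 ms
r_smooth = np.convolve(r_est, window, mode="same")
```

The bin-by-bin estimate r_est is noisy for small Δt; the rectangular window trades temporal resolution for variance reduction, which is the trade-off the filter-kernel approach addresses.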