Receptive Fields


Receptive Fields
Introduction to Neural Computation
Prof. Michale Fee, MIT BCS 9.40 — 2018
Lecture 9 — Receptive fields

Spatial receptive fields
Videos of visual neurons of a cat (an LGN neuron, then simple and complex cells) from Hubel & Wiesel's experiments. Ali Moeeny, "Hubel & Wiesel – LGN Neuron" and "Hubel & Wiesel – Cortical Neuron – V1," April 23, 2011, YouTube. Figures (annotated) from Introduction to Visual Prostheses on Webvision. License: CC BY-NC.

Learning objectives for Lecture 9
• To be able to mathematically describe a neural response as a linear filter followed by a nonlinear function:
  – a correlation of a spatial receptive field with the stimulus
  – a convolution of a temporal receptive field with the stimulus
• To understand the concept of a spatio-temporal receptive field (STRF) and the concept of 'separability'
• To understand the idea of a spike-triggered average and how to use it to compute a spatio-temporal receptive field and a spectro-temporal receptive field (STRF)

Spatial receptive fields
• How do we represent receptive fields mathematically?
• At the simplest level, we think of the receptive field (RF) as the region of visual space that causes the neuron to spike.
• But a visual neuron doesn't respond to just any stimulus within this RF. It responds selectively to certain 'features' in the stimulus.
• We can think of a neuron as having a filter (G) that passes certain features in both space and time.
• The better the stimulus 'overlaps' with the filter, the more the neuron will spike.

Spatial receptive fields
• How do we represent receptive fields mathematically? Start by describing the spatial part of this filter. The response model is a filter, followed by an output nonlinearity, followed by a spike generator:

    L = G[I(x, y)]       (filter applied to the stimulus)
    r = [r₀ + L]₊        (output nonlinearity gives the firing rate)
    spikes ← P_T[n]      (Poisson spike generator)

• We consider the simplest case, in which the response of the neuron is given by a linear filter acting on the stimulus:

    r = r₀ + ∫∫ G(x, y) I(x, y) dx dy

  Let's look at this in one dimension:

    r = r₀ + ∫ G(x) I(x) dx

  This is like a correlation: Σᵢ Gᵢ Iᵢ.
• [Figure: when the stimulus profile I(x) matches the filter G(x), the overlap integral ∫ G(x) I(x) dx is big; when it does not, the integral is small.]

Linearity
• The response varies linearly with overlap. [Figure: halving the stimulus makes the overlap integral ∫ G(x) I(x) dx half as big.]

Temporal receptive fields
• We can also think of the response of a neuron as some function of the temporal variations in the stimulus:

    r(t) = r₀ + D[S(t)]

  where r(t) is the time-dependent firing rate, r₀ the spontaneous firing rate, D the filter, and S(t) the stimulus.
• We can think of 'overlap' in the time domain: there is a particular 'temporal profile' of a stimulus that makes a neuron spike. [Figure: stimulus S(t) versus time.] Does this look familiar?
• Writing it out:

    r(t) = r₀ + ∫ D(τ) S(t − τ) dτ       Convolution!!

  with τ integrated from −∞ to ∞. D(τ) is the linear temporal response kernel (or 'temporal kernel'). It is linear in the sense that if we make the stimulus partial, or weaker, the response changes linearly.
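The filter-overlap-rectification-Poisson pipeline described above can be sketched numerically. A minimal sketch: the filter shape, the constant r₀ = 5 Hz, and the stimuli below are invented for illustration, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D spatial filter G(x): a center-surround profile.
x = np.linspace(-5.0, 5.0, 101)
dx = x[1] - x[0]
G = np.exp(-x**2 / 2) - 0.5 * np.exp(-x**2 / 8)

def firing_rate(I, r0=5.0):
    """Linear filter + rectification: r = [r0 + integral of G(x) I(x) dx]_+ ."""
    L = np.sum(G * I) * dx        # discretized overlap (correlation) integral
    return max(0.0, r0 + L)

I_matched = G.copy()              # stimulus matching the filter: big overlap
I_mismatched = -G                 # anti-correlated stimulus: suppresses the cell
r_hi = firing_rate(I_matched)
r_lo = firing_rate(I_mismatched)

# Poisson spike generator: spike count in a 1 s window at rate r_hi (Hz).
n_spikes = rng.poisson(r_hi * 1.0)
```

Halving the stimulus halves the overlap term L but leaves r₀ untouched, which is exactly the linearity shown on the "half as big" slide.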
Spatio-temporal receptive fields
• Now we put the temporal receptive field and the spatial receptive field together in a single object, called the spatio-temporal receptive field (STRF).
• Let's imagine a stimulus that is a function of space and time, like the light falling on a retina: I(x, y, t). But now we simplify things by considering only one spatial dimension, I(x, t):

    r(t) = r₀ + ∫ dx ∫ dτ D(x, τ) I(x, t − τ)

• Here we are doing a correlation and a convolution at the same time: a correlation in the integral over space, and a convolution in the integral over time,

    r(t) = r₀ + ∫ dτ [ ∫ dx D(x, τ) I(x, t − τ) ]

  where the inner integral over x is the correlation and the outer integral over τ is the convolution, both taken from −∞ to ∞.

Separability
• If a receptive field is separable in space and time, then we can decompose it into a spatial receptive field and a temporal receptive field.
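The STRF double integral above can be sketched directly as a discrete sum, with toy dimensions invented for illustration. The loop makes explicit that space enters as a correlation (a plain sum over x) while time enters as a convolution (the stimulus is evaluated at t − τ):

```python
import numpy as np

# Discretized STRF response:
#   r(t) = r0 + sum over x and tau of D(x, tau) * I(x, t - tau)
rng = np.random.default_rng(1)
nx, ntau, nt = 8, 10, 200
D = rng.standard_normal((nx, ntau))    # spatio-temporal kernel D(x, tau)
I = rng.standard_normal((nx, nt))      # stimulus I(x, t)
r0 = 2.0

r = np.full(nt, r0)
for t in range(nt):
    for tau in range(ntau):
        if t >= tau:                   # causal: only past stimulus contributes
            # correlation over space (dot product over x),
            # convolution over time (stimulus sampled at t - tau)
            r[t] += D[:, tau] @ I[:, t - tau]
```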
• [Figure: a separable STRF D(x, τ) factors into a spatial profile G_S(x) times a temporal profile D_T(τ); an inseparable STRF, whose ON (+) and OFF (−) subregions shift over time, cannot be factored this way.]
• For a separable receptive field,

    D(x, τ) = G_S(x) D_T(τ),

  so the response becomes a spatial correlation followed by a temporal convolution:

    S(t) = ∫ dx G_S(x) I(x, t)           (correlation)
    r(t) = r₀ + ∫ dτ D_T(τ) S(t − τ)     (convolution)

Representing stimulus and receptive fields in space and time
• Suppose our stimulus is a bar of light extending from x = 2 to x = 4 that is turned on for times from t = 1 to t = 6. We represent it on a space-time plot, with space on the vertical axis and time on the horizontal axis. [Figure: the bar appears as a rectangle spanning x ∈ [2, 4] and t ∈ [1, 6]; the resulting firing rate is plotted below.]
• Now consider a receptive field D(t, x) with spatial as well as temporal structure, but 'space-time separable': D(t, x) = D(t) D(x). [Figure: space-time plot with the receptive field overlaid, and the resulting firing rate below.]

Response to a moving bar of light
• Now suppose our stimulus is moving. [Figure: the moving bar traces a tilted band on the space-time plot; the firing rate is shown below.]
• Now suppose our stimulus is moving in the opposite direction. [Figure: space-time plot and firing rate for motion in the opposite direction.]

Spike-triggered average
• For each spike at time tᵢ, take the stimulus S(t) preceding that spike; sum over the n spikes and average over trials:

    K(τ) = (1/n) Σᵢ₌₁ⁿ S(tᵢ − τ)

Measuring STRFs in the retina
Marcus Meister. Slides pp. 25–31 © Marcus Meister. All rights reserved.
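The spike-triggered average formula above can be sketched in a few lines. The stimulus and spike times below are simulated purely to make the bookkeeping concrete; none of the numbers come from the lecture.

```python
import numpy as np

# Spike-triggered average K(tau) = (1/n) * sum over spikes of S(t_i - tau):
# average the stimulus segment preceding each spike.
rng = np.random.default_rng(2)
nt, ntau = 10_000, 50
S = rng.standard_normal(nt)                      # stimulus S(t), one sample per bin
spike_t = np.flatnonzero(rng.random(nt) < 0.01)  # fake spike times (bin indices)
spike_t = spike_t[spike_t >= ntau]               # need a full window before a spike

K = np.zeros(ntau)
for ti in spike_t:
    K += S[ti - ntau + 1 : ti + 1][::-1]         # element tau holds S(t_i - tau)
K /= len(spike_t)
```

Because the fake spikes here are unrelated to the stimulus, K just hovers near zero; with real spikes it converges to the neuron's preferred stimulus history.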
This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.

Simultaneous recording from retinal ganglion cells
[Figure: extracellular voltage (µV) versus time (s) for simultaneously recorded cells.]

Rabbit ganglion cells responding to a natural movie, "Trees swaying in the breeze"
[Figure: spike rasters over 60 trials and firing rates (Hz) versus time (s) for several cells.]

Measuring STRFs in the retina
Random flicker stimulus.

Reverse-correlation to a random flicker stimulus
[Figure: response, stimulus, and the resulting spike-triggered average.]

Spatio-temporal receptive fields (STRF)
See the Lecture 9 video to view the above clips.
Marcus Meister. Slides pp. 25–31 © Marcus Meister. All rights reserved.
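A toy version of the reverse-correlation experiment described above: simulate a linear-nonlinear neuron with a known temporal kernel, drive it with Gaussian white-noise flicker, and recover the kernel as the spike-triggered average. The kernel shape, rate constants, and spiking model are invented for this sketch; this is not the Meister lab's analysis code.

```python
import numpy as np

rng = np.random.default_rng(3)
nt, ntau = 200_000, 40
tau = np.arange(ntau)
D_true = np.sin(2 * np.pi * tau / 40.0) * np.exp(-tau / 10.0)  # biphasic kernel

S = rng.standard_normal(nt)                   # random flicker stimulus
drive = np.convolve(S, D_true)[:nt]           # drive(t) = sum_tau D(tau) S(t - tau)
p_spike = np.clip(0.01 * np.maximum(0.0, drive), 0.0, 1.0)
spikes = rng.random(nt) < p_spike             # rectified rate -> Bernoulli spikes

idx = np.flatnonzero(spikes)
idx = idx[idx >= ntau]                        # keep spikes with a full history
sta = np.mean([S[i - ntau + 1 : i + 1][::-1] for i in idx], axis=0)

similarity = np.corrcoef(sta, D_true)[0, 1]
```

Because the flicker is Gaussian white noise, the spike-triggered average comes out proportional to the neuron's linear kernel, which is why this stimulus is the standard choice for measuring STRFs.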
Recommended publications
  • MITOCW | 9: Receptive Fields - Intro to Neural Computation
    MICHALE FEE: So today, we're going to introduce a new topic, which is related to the idea of tuning curves, and that is the notion of receptive fields. So most of you have probably been, at least those of you who've taken 9.01 or 9.00 maybe, have been exposed to the idea of what a receptive field is. The idea is basically that in sensory systems neurons receive input from the sensory periphery, and neurons generally have some kind of sensory stimulus that causes them to spike. And so one of the classic examples of how to find receptive fields comes from the work of Hubel and Wiesel. So I'll show you some movies made from early experiments of Hubel and Wiesel where they are recording in the visual cortex of the cat. So they place a fine metal electrode into primary visual cortex, and they present. So then they anesthetize the cat so the cat can't move. They open the eye, and the cat's now looking at a screen that looks like this, where they play a visual stimulus. And they actually did this with essentially a slide projector that they could put a card in front of that had a little hole in it, for example, that allowed a spot of light to project onto the screen. And then they can move that spot of light around while they record from neurons in visual cortex and present different visual stimuli to the retina.
  • The Roles and Functions of Cutaneous Mechanoreceptors Kenneth O Johnson
    The roles and functions of cutaneous mechanoreceptors
    Kenneth O Johnson

    Combined psychophysical and neurophysiological research has resulted in a relatively complete picture of the neural mechanisms of tactile perception. The results support the idea that each of the four mechanoreceptive afferent systems innervating the hand serves a distinctly different perceptual function, and that tactile perception can be understood as the sum of these functions. Furthermore, the receptors in each of those systems seem to be specialized for their assigned perceptual function.

    Addresses: Zanvyl Krieger Mind/Brain Institute, 338 Krieger Hall, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218-2689, USA; e-mail: [email protected]
    Current Opinion in Neurobiology 2001, 11:455–461. 0959-4388/01/$ — see front matter © 2001 Elsevier Science Ltd. All rights reserved.

    …nerve ending that is sensitive to deformation in the nanometer range. The layers function as a series of mechanical filters to protect the extremely sensitive receptor from the very large, low-frequency stresses and strains of ordinary manual labor. The Ruffini corpuscle, which is located in the connective tissue of the dermis, is a relatively large spindle-shaped structure tied into the local collagen matrix. It is, in this way, similar to the Golgi tendon organ in muscle. Its association with connective tissue makes it selectively sensitive to skin stretch. Each of these receptor types and its role in perception is discussed below. During three decades of neurophysiological and combined psychophysical and neurophysiological studies, evidence has accumulated that links each of these afferent types to a distinctly different perceptual function and, furthermore, that shows that the receptors innervated by these afferents are specialized for their assigned functions.
  • Center Surround Receptive Field Structure of Cone Bipolar Cells in Primate Retina
    Vision Research 40 (2000) 1801–1811, www.elsevier.com/locate/visres
    Center surround receptive field structure of cone bipolar cells in primate retina
    Dennis Dacey a,*, Orin S. Packer a, Lisa Diller a, David Brainard b, Beth Peterson a, Barry Lee c
    a Department of Biological Structure, University of Washington, Box 357420, Seattle, WA 98195-7420, USA
    b Department of Psychology, University of California Santa Barbara, Santa Barbara, CA, USA
    c Max Planck Institute for Biophysical Chemistry, Gottingen, Germany
    Received 28 September 1999; received in revised form 5 January 2000

    Abstract: In non-mammalian vertebrates, retinal bipolar cells show center-surround receptive field organization. In mammals, recordings from bipolar cells are rare and have not revealed a clear surround. Here we report center-surround receptive fields of identified cone bipolar cells in the macaque monkey retina. In the peripheral retina, cone bipolar cell nuclei were labeled in vitro with diamidino-phenylindole (DAPI), targeted for recording under microscopic control, and anatomically identified by intracellular staining. Identified cells included ‘diffuse’ bipolar cells, which contact multiple cones, and ‘midget’ bipolar cells, which contact a single cone. Responses to flickering spots and annuli revealed a clear surround: both hyperpolarizing (OFF) and depolarizing (ON) cells responded with reversed polarity to annular stimuli. Center and surround dimensions were calculated for 12 bipolar cells from the spatial frequency response to drifting, sinusoidal luminance modulated gratings. The frequency response was bandpass and well fit by a difference of Gaussians receptive field model. Center diameters were all two to three times larger than known dendritic tree diameters for both diffuse and midget bipolar cells in the retinal periphery.
  • The Visual System: Higher Visual Processing
    The Visual System: Higher Visual Processing Primary visual cortex The primary visual cortex is located in the occipital cortex. It receives visual information exclusively from the contralateral hemifield, which is topographically represented and wherein the fovea is granted an extended representation. Like most cortical areas, primary visual cortex consists of six layers. It also contains, however, a prominent stripe of white matter in its layer 4 - the stripe of Gennari - consisting of the myelinated axons of the lateral geniculate nucleus neurons. For this reason, the primary visual cortex is also referred to as the striate cortex. The LGN projections to the primary visual cortex are segregated. The axons of the cells from the magnocellular layers terminate principally within sublamina 4Ca, and those from the parvocellular layers terminate principally within sublamina 4Cb. Ocular dominance columns The inputs from the two eyes also are segregated within layer 4 of primary visual cortex and form alternating ocular dominance columns. Alternating ocular dominance columns can be visualized with autoradiography after injecting radiolabeled amino acids into one eye that are transported transynaptically from the retina. Although the neurons in layer 4 are monocular, neurons in the other layers of the same column combine signals from the two eyes, but their activation has the same ocular preference. Bringing together the inputs from the two eyes at the level of the striate cortex provide a basis for stereopsis, the sensation of depth perception provided by binocular disparity, i.e., when an image falls on non-corresponding parts of the two retinas. Some neurons respond to disparities beyond the plane of fixation (far cells), while others respond to disparities in front of the plane of the fixation (near cells).
  • Seeing in Three Dimensions: the Neurophysiology of Stereopsis Gregory C. DeAngelis
    Seeing in three dimensions: the neurophysiology of stereopsis
    Gregory C. DeAngelis

    From the pair of 2-D images formed on the retinas, the brain is capable of synthesizing a rich 3-D representation of our visual surroundings. The horizontal separation of the two eyes gives rise to small positional differences, called binocular disparities, between corresponding features in the two retinal images. These disparities provide a powerful source of information about 3-D scene structure, and alone are sufficient for depth perception. How do visual cortical areas of the brain extract and process these small retinal disparities, and how is this information transformed into non-retinal coordinates useful for guiding action? Although neurons selective for binocular disparity have been found in several visual areas, the brain circuits that give rise to stereoscopic vision are not very well understood. I review recent electrophysiological studies that address four issues: the encoding of disparity at the first stages of binocular processing, the organization of disparity-selective neurons into topographic maps, the contributions of specific visual areas to different stereoscopic tasks, and the integration of binocular disparity and viewing-distance information to yield egocentric distance. Some of these studies combine traditional electrophysiology with psychophysical and computational approaches, and this convergence promises substantial future gains in our understanding of stereoscopic vision.

    We perceive our surroundings vividly in three dimensions, even though the image formed on the retina of each … …lished the first reports of disparity-selective neurons in the primary visual cortex (V1, or area 17) of anesthetized cats [5,6].
  • COGS 101A: Sensation and Perception
    COGS 101A: Sensation and Perception
    Virginia R. de Sa, Department of Cognitive Science, UCSD
    Lecture 4: Coding Concepts – Chapter 2

    Course Information
    • Class web page: http://cogsci.ucsd.edu/ desa/101a/index.html
    • Professor: Virginia de Sa. I'm usually in Chemistry Research Building (CRB) 214 (also office in CSB 164). Office Hours: Monday 5-6pm. Email: desa at ucsd. Research: Perception and Learning in Humans and Machines.

    For your Assistance
    TAs:
    • Jelena Jovanovic, OH: Wed 2-3pm, CSB 225
    • Katherine DeLong, OH: Thurs noon-1pm, CSB 131
    IAs:
    • Jennifer Becker, OH: Fri 10-11am, CSB 114
    • Lydia Wood, OH: Mon 12-1pm, CSB 114

    Course Goals
    • To appreciate the difficulty of sensory perception
    • To learn about sensory perception at several levels of analysis
    • To see similarities across the sensory modalities
    • To become more attuned to multi-sensory interactions

    Grading Information
    • 25% each for 2 midterms
    • 32% comprehensive final
    • 3% each for 6 lab reports - due at the end of the lab
    • Bonus for participating in a psych or cogsci experiment AND writing a paragraph description of the study
    You are responsible for knowing the lecture material and the assigned readings. Read the readings before class and ask questions in class.

    Academic Dishonesty
    The University policy is linked off the course web page. You will all have to sign a form in section. For this class:
    • Labs are done in small groups but writeups must be in your own words
    • There is no collaboration on midterms and final exam

    Last Class: We learned about the cells in the Retina.
    This Class: Coding Concepts. Everything after transduction (following Chris Johnson's notes). Remember: the cells in the retina are neurons. Neurons are specialized cells that transmit electrical/chemical information to other cells. Neurons generally receive a signal at their dendrites and transmit it electrically to their soma and axon.
  • Understanding the Effective Receptive Field in Deep Convolutional Neural Networks
    Understanding the Effective Receptive Field in Deep Convolutional Neural Networks Wenjie Luo∗ Yujia Li∗ Raquel Urtasun Richard Zemel Department of Computer Science University of Toronto {wenjie, yujiali, urtasun, zemel}@cs.toronto.edu Abstract We study characteristics of receptive fields of units in deep convolutional networks. The receptive field size is a crucial issue in many visual tasks, as the output must respond to large enough areas in the image to capture information about large objects. We introduce the notion of an effective receptive field, and show that it both has a Gaussian distribution and only occupies a fraction of the full theoretical receptive field. We analyze the effective receptive field in several architecture designs, and the effect of nonlinear activations, dropout, sub-sampling and skip connections on it. This leads to suggestions for ways to address its tendency to be too small. 1 Introduction Deep convolutional neural networks (CNNs) have achieved great success in a wide range of problems in the last few years. In this paper we focus on their application to computer vision: where they are the driving force behind the significant improvement of the state-of-the-art for many tasks recently, including image recognition [10, 8], object detection [17, 2], semantic segmentation [12, 1], image captioning [20], and many more. One of the basic concepts in deep CNNs is the receptive field, or field of view, of a unit in a certain layer in the network. Unlike in fully connected networks, where the value of each unit depends on the entire input to the network, a unit in convolutional networks only depends on a region of the input.
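The theoretical receptive field that the abstract contrasts with the "effective" one grows layer by layer in a simple way. A minimal sketch of that bookkeeping (standard arithmetic, not code from the paper): with kernel size k and stride s per layer, each layer adds (k − 1) times the product of all earlier strides.

```python
# Theoretical receptive-field size of a stack of convolution layers.
def theoretical_rf(kernel_sizes, strides):
    rf, jump = 1, 1                 # jump = spacing of the layer's outputs in input pixels
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump        # each layer widens the field by (k-1) * jump
        jump *= s
    return rf

rf_10x3 = theoretical_rf([3] * 10, [1] * 10)  # ten 3x3, stride-1 layers -> 21
```

The paper's point is that the effective field is much smaller than this number and roughly Gaussian, so stacking stride-1 layers grows the usable field more slowly than this linear count suggests.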
  • Do We Know What the Early Visual System Does?
    The Journal of Neuroscience, November 16, 2005 • 25(46):10577–10597 • 10577 Mini-Symposium Do We Know What the Early Visual System Does? Matteo Carandini,1 Jonathan B. Demb,2 Valerio Mante,1 David J. Tolhurst,3 Yang Dan,4 Bruno A. Olshausen,6 Jack L. Gallant,5,6 and Nicole C. Rust7 1Smith-Kettlewell Eye Research Institute, San Francisco, California 94115, 2Departments of Ophthalmology and Visual Sciences, and Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, Michigan 48105, 3Department of Physiology, University of Cambridge, Cambridge CB2 1TN, United Kingdom, Departments of 4Molecular and Cellular Biology and 5Psychology and 6Helen Wills Neuroscience Institute and School of Optometry, University of California, Berkeley, Berkeley, California 94720, and 7Center for Neural Science, New York University, New York, New York 10003 We can claim that we know what the visual system does once we can predict neural responses to arbitrary stimuli, including those seen in nature. In the early visual system, models based on one or more linear receptive fields hold promise to achieve this goal as long as the models include nonlinear mechanisms that control responsiveness, based on stimulus context and history, and take into account the nonlinearity of spike generation. These linear and nonlinear mechanisms might be the only essential determinants of the response, or alternatively, there may be additional fundamental determinants yet to be identified. Research is progressing with the goals of defining a single “standard model” for each stage of the visual pathway and testing the predictive power of these models on the responses to movies of natural scenes.
  • Introduction to the Retina
    Introduction to the Retina
    Andrew Stockman, NEUR 0017 Visual Neuroscience

    Optics: An image of an object is focused by the cornea and lens onto the rear surface of the eye: the retina. The image is inverted. Accommodation: focus on near objects by contracting the ciliary muscle and changing the shape of the lens.

    Retinal cells: Cajal's (1909-1911) neural architecture drawings, based on the Golgi method. [Figure labels: foveal pit; optic tectum (superior colliculus).]
    • Ramon y Cajal noted that neurons have anatomical polarity.
    • There is no myelination in the retina; axons are myelinated in the optic nerve.

    Retina: a light-sensitive part of the CNS. [Figure: schematic and light-microscopic vertical sections showing the outer limiting membrane, outer nuclear layer, outer plexiform layer, inner nuclear layer, inner plexiform layer, ganglion cell layer, and inner limiting membrane. "Plexiform": weblike or complex.]

    Electrophysiological recording methods: extracellular recordings (single-neurone (unit) spikes with a microelectrode; field potential recordings) and whole-cell (patch-clamp) recordings of isolated whole-cell membrane currents. Patch clamping can use: (1) the voltage clamp technique, in which the voltage across the cell membrane is controlled by the experimenter and the resulting currents are recorded; (2) the current clamp technique, in which the current
  • Spatial Properties and Functional Organization of Small Bistratified Ganglion Cells in Primate Retina
    The Journal of Neuroscience, November 28, 2007 • 27(48):13261–13272
    Behavioral/Systems/Cognitive
    Spatial Properties and Functional Organization of Small Bistratified Ganglion Cells in Primate Retina
    Greg D. Field,1* Alexander Sher,2* Jeffrey L. Gauthier,1* Martin Greschner,1 Jonathon Shlens,1 Alan M. Litke,2 and E. J. Chichilnisky1
    1Salk Institute for Biological Studies, La Jolla, California 92037, and 2Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, California 95064

    The primate visual system consists of parallel pathways initiated by distinct cell types in the retina that encode different features of the visual scene. Small bistratified cells (SBCs), which form a major projection to the thalamus, exhibit blue-ON/yellow-OFF [S-ON/(L+M)-OFF] light responses thought to be important for high-acuity color vision. However, the spatial processing properties of individual SBCs and their spatial arrangement across the visual field are poorly understood. The present study of peripheral primate retina reveals that contrary to previous suggestions, SBCs exhibit center-surround spatial structure, with the (L+M)-OFF component of the receptive field approximately 50% larger in diameter than the S-ON component. Analysis of response kinetics shows that the (L+M)-OFF response in SBCs is slower than the S-ON response and significantly less transient than that of simultaneously recorded OFF-parasol cells. The (L+M)-OFF response in SBCs was eliminated by bath application of the metabotropic glutamate receptor agonist L-APB. These observations indicate that the (L+M)-OFF response of SBCs is not formed by OFF-bipolar cell input as has been suspected and suggest that it arises from horizontal cell feedback. Finally, the receptive fields of SBCs form orderly mosaics, with overlap and regularity similar to those of ON-parasol cells.
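The center-surround structure described above is conventionally modeled as a difference of Gaussians, the same model named in the Dacey et al. excerpt earlier in this list. A minimal sketch; the amplitudes and widths below are invented for illustration, not fits from the paper.

```python
import numpy as np

# Difference-of-Gaussians (DoG) center-surround receptive field:
# a narrow excitatory center minus a broader, weaker inhibitory surround.
def dog(x, a_center=1.0, s_center=0.5, a_surround=0.4, s_surround=1.5):
    center = a_center * np.exp(-x**2 / (2 * s_center**2))
    surround = a_surround * np.exp(-x**2 / (2 * s_surround**2))
    return center - surround

x = np.linspace(-5.0, 5.0, 201)
rf = dog(x)   # positive at the center, negative in the flanks
```

In the frequency domain this profile is bandpass, which is why a DoG fits the spatial frequency responses described in these abstracts.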
  • The Neural Architecture of Binocular Vision
    THE NEURAL ARCHITECTURE OF BINOCULAR VISION
    V. A. CASAGRANDE and J. D. BOYD
    Nashville, Tennessee

    The brain is remarkable in its ability to match images from the left and right eyes to produce a seamless single image of the world. Not only are the slightly different monocular views of the world fused to produce a single image, but also, in many frontal-eyed mammalian species, this single image provides additional information about stereoscopic depth. What components of neural architecture are responsible for these percepts? In this review we briefly outline current views of the neural hardware that have been proposed to be responsible for binocular single vision. We will focus primarily on the visual system of primates but mention other mammalian species where relevant. Within this framework we consider three issues which remain controversial. First, do all classes of retinal ganglion cells contribute signals that are important to stereoscopic depth perception or only one class? Second…

    …right eyes, but also of inputs from functionally distinct classes of retinal ganglion cells that exist in each retina. Thus, in primates there are two magnocellular (M) layers, one for each eye; two or more pairs of parvocellular (P) layers; and variably located sets of very small cells, which we will refer to here as the koniocellular (K) layers. The LGN also receives visual input from a variety of other sources including areas of the cortex, midbrain and brain stem. Of these sources four provide retinotopically organised binocular input. The richest source of this input comes from V1. The binocular cells in layer VI of V1 send a heavy excitatory (aspartate) projection back to all layers of the LGN (see Fig. 1). In the cat LGN, synapses from this pathway, which ends on both excitatory relay cells and inhibitory (GABAergic) interneurons,
  • Early Visual Processing: Receptive Fields & Retinal Processing (Chapter 2, Part 2)
    Early Visual Processing: Receptive Fields & Retinal Processing (Chapter 2, part 2)
    Lecture 5, Jonathan Pillow
    Sensation & Perception (PSY 345 / NEU 325), Princeton University, Spring 2019

    What does the retina do?
    1. Transduction: conversion of energy from one form to another (i.e., "light" into "electrical energy"). This is a major, important concept.
    2. Processing: amplification of very weak signals (1-2 photons can be detected!) and compression of the image into a more compact form so that information can be efficiently sent to the brain. The optic nerve is a "bottleneck"; analogy: jpeg compression of images.

    [Figure: basic anatomy, photomicrograph of the retina: outer and inner retina, retinal ganglion cell, bipolar cell, cone, optic disc (blind spot), optic nerve.]

    What's crazy about this is that the light has to pass through all the other junk in our eye before getting to photoreceptors! Cephalopods (squid, octopus) did it right: photoreceptors are in the innermost layer, and there is no blind spot. Debate: 1. accident of evolution? OR 2. better to have photoreceptors near blood supply?

    [Figure: retinal layers including the RPE (retinal pigment epithelium); blind spot demo.]

    Phototransduction: converting light to electrical signals.
    • Rods respond in low light ("scotopic"); there is only one kind, so they don't process color; 90M in humans.
    • Cones respond in daylight ("photopic"); there are 3 different kinds, responsible for color processing; 4-5M in humans.
    Outer segments are packed with