Vision Sciences Society


Vision Sciences Society
Third Annual Meeting, May 9-14, 2003, Sarasota, Florida

Organizers: Ken Nakayama, Tom Sanocki
Web Services: Jennifer Kaltreider, Shauney Wilson
Program Committee: Randolph Blake, David Knill, Ken Nakayama, Michael Paradiso, Tatiana Pasternak, Tom Sanocki
Program Review Board: Tom Albright, Marlene Behrmann, Irv Biederman, Randolph Blake, David Brainard, Angela Brown, Heinrich Buelthoff, Patrick Cavanagh, Marvin Chun, Heiner Deubel, Jim Enns, Isabel Gauthier, Charles Gilbert, Mel Goodale, John Henderson, Nancy Kanwisher, Phil Kellman, Daniel Kersten, Dave Knill, Eileen Kowler, Jack Loomis, Jitendra Malik, Cathleen Moore, Tony Movshon, Tony Norcia, Alice O'Toole, Michael Paradiso, Tatiana Pasternak, Mary Peterson, Brian Rogers, Jeff Schall, Allison Sekuler, Daniel Simons, Mike Tarr, Christopher Tyler, Bill Warren, Michael Webster, Sophie Wuerger
Covers by Shinki Ando; Program by Shauney Wilson

Table of Contents
  Program Summary ... 1
  Talk Abstracts ... 7
    Saturday ... 7
    Sunday ... 20
    Monday ... 32
    Tuesday ... 43
    Wednesday ... 57
  Poster Abstracts ... 65
    Friday ... 67
    Saturday ... 101
    Sunday ... 137
    Monday ... 171
    Tuesday ... 205
  Index ... 239
  Daily Poster Topic Index ... 249
PROGRAM SUMMARY

Friday

Poster Session – Municipal Auditorium, "Something For Everyone"
Authors Present: 5:00 – 8:00 pm
Topics: Texture; Motion: Temporal Factors; Spatial Vision: Orientation; Clinical; Space Perception; Search; Scene Perception; Perceptual Organization; Perceptual Learning; Perception and Action; Object Recognition; Motion 1: Integration & Disorders; Lightness/Shading; Face Perception; Eye Movement Cognitive; Color; Binocular Vision: Stereo; Binocular Vision: Rivalry; Attention 1

Saturday

Attending to Scenes – North Hall
9:00 Parkhurst, Niebur – What could over 1000 Internet users tell us about visual attention in natural…
9:15 Hamker – A dynamic computational model of goal-directed visual perception
9:30 Oliva, Torralba, Castelhano, Henderson – Top-down control of visual attention in real world scenes
9:45 Özgen, Sowden, Schyns – I WILL use the channel I want: flexible spatial scale processing
10:00 Maljkovic, Martini – Different rates of memory formation for scenes with positive and negative…
10:15 Vessel, Biederman, Cohen – How opiate activity may determine spontaneous visual selection

Spatial Vision – South Hall
9:00 Dao, Lu, Dosher – Adaptation to sine-wave gratings selectively reduces the sensory gain of the…
9:15 Kontsevich, Tyler – Origins of the nonlinearity near threshold
9:30 Taylor, Bennett, Sekuler – Noise detection: bandwidth uncertainty and adjustable channels
9:45 Klein, Levi – External noise yields a surprise: What template?
10:00 Dijkstra, Liu, Oomes – Perception of ellipse orientation: data and Bayesian model
10:15 Balas, Sinha – STICKS: Image-representation via non-local comparisons

Space Perception – North Hall
10:45 Bridgeman, Dassonville, Bala, Thiem – What is stored in the sensorimotor visual system: map or egocentric…
11:00 Stankiewicz, McCabe, Kelly, Legge – Lost in virtual space: estimating state uncertainty
11:15 Witt, Proffitt, Epstein – The role of effort and intention in distance perception
11:30 Creem-Regehr, Willemsen, Gooch, Thompson – The effects of restricted viewing conditions on egocentric distance judgments
11:45 Ni, Braunstein, Andersen – Interactions of motion parallax and ground contact in specifying distance in…
12:00 Thompson, Gooch, Willemsen, Creem-Regehr, Loomis, Beall – Compression of distance judgments when viewing virtual environments…

Early Visual Processing – South Hall
10:45 Bonin, Mante, Carandini – Origins of size tuning in LGN neurons
11:00 Repucci, Mechler, Victor – Linear and nonlinear orientation dynamics of receptive fields in cat area 17
11:15 Gur, Kagan, Snodderly – Orientation selectivity in V1 of alert monkeys
11:30 Westover, Anderson, DeAngelis – A new quantitative analysis of simple cell space-time receptive fields
11:45 Supèr, Spekreijse, Lamme – Transformation of perceptual activity into saccade-related activity in the…
12:00 Peirce, Solomon, Forte, Krauskopf, Lennie – Chromatic tuning of binocular neurons in early visual cortex

Visual STM – North Hall
1:30 Alvarez, Cavanagh – Visual short-term memory capacity for orientations is lower for oriented…
1:45 Wright, Alston – Limitations of visual memory in spatial frequency discrimination
2:00 Xu, Nakayama – Placing objects at different depths increases visual short-term memory…
2:15 Droll, Hayhoe, Triesch, Sullivan – Task relevance of object features modulates the content of visual working…
2:30 Gajewski, Henderson – Eye Movements are Cheaper than Memory: Evidence from a Scene…
2:45 Williams, Henderson, Zacks – Incidental memory in visual search: both targets and rejected distractors…

Multisensory Integration – South Hall
1:30 Banks, Ernst – A biologically plausible model of cue combination
1:45 Ernst, Jäkel – Learning to fuse unrelated cues
2:00 Shams, Tanaka, Rees, Iwaki, Shimojo, Inui – Visual cortex as a site of cross-modal integration
2:15 Fujisaki, Shimojo, Kashino, Nishida – Recalibration of audiovisual simultaneity by adaptation to a constant time lag
2:30 Somers, McNally – Kinesthetic Visual Capture Induced by Apparent Motion
2:45 Sun, Campos, Chan, Zhang, Lee – Multisensory integration in self-motion

Attention Mechanisms – North Hall
3:15 Fallah, Stoner, Reynolds – Competitive selection of superimposed stimuli moving through space
3:30 Mirabella, Samengo, Bertini, Kilavik, Frilli, Fanini, Chelazzi – Macaque area V4 neurons translate the attended features of a visual…
3:45 Gottlieb – The monkey's lateral intraparietal area: parallel representations and…
4:00 Luck, Vogel, Woodman, Hyun – Toward an embedded process metatheory of selective attention
4:15 Burr, Verghese, Morrone, Baldassi – Search for motion direction: pop-out and set-size dependencies explained…
4:30 Vallines, Bodis-Wollner, Oezyurt, Rutschmann, Greenlee – Perisaccadic V1 activity is not due to shifting visuo-spatial attention

Natural Images – South Hall
3:15 Fowlkes, Martin, Malik – Ecological Statistics of Grouping by Similarity
3:30 Dastjerdi, Dong – Independent component analysis of natural time-varying images under the…
3:45 Dong, Simpson, Weyand – No suppression, only dynamic decorrelation: saccadic effects on the visual…
4:00 Zetzsche, Nuding – Extra-classical receptive field properties: Relation to natural scene…
4:15 Olman, Ugurbil, Kersten – Effects of image structure on perceived contrast and cortical activity in…
4:30 Adelson – Textural statistics and surface perception

Poster Session – Municipal Auditorium
Authors Present: 5:00 – 7:00 pm
Topics: Spatial Vision: Context, Contrast, Detection & Models, Receptive Fields; Perception and Action; Motion 2: Speed, Heading, Physiology; Lightness/Shading; Eye Movement Mechanisms; Color; Binocular Vision; Attention 2

Sunday

Binocular Vision – North Hall
9:00 Aslin, Jacobs, Battaglia – Depth-dependent contrast gain-control
9:15 Blake, Sobel – Motion prolongs perceptual dominance during binocular rivalry
9:30 Lee, Blake, Heeger – Traveling waves of activity in V1 correlate with perceptual dominance during…
9:45 Wilson – A dynamical hierarchy of rivalry stages in vision
10:00 White, Gao, Zhou – Fractal statistics of perceptual switching time series
10:15 Kim, Grabowecky, Suzuki – Stochastic resonance in bistable binocular rivalry

Lighting and Shading – South Hall
9:00 Chien, Bronson-Castain – Lightness constancy in 4-month-old infants: The effect of a local luminance…
9:15 Rudd, Zemach – The highest luminance anchoring rule in lightness perception: A…
9:30 Cornelissen, Wade, Dougherty, Wandell – fMRI of brightness perception
9:45 Anderson, Winawer – Layered image representations and the perception of lightness
10:00 Hartung, Kersten – How does the perception of shape interact with the perception of shiny…
10:15 Landy, Chubb, Econopouly – Blackshot: An unexpected dimension of human sensitivity to contrast

Stereo – North Hall
10:45 Cumming, Read – Sensitivity to interocular delay in binocular V1 neurons
11:00 Tanabe, Fujita – Reduced binocular disparity selectivity of V4 neurons to anti-correlated…
11:15 McKee, Farell, Verghese – The cost of resolving stereo ambiguity
11:30 Foulkes, Parker – The effect of absolute and relative disparity noise on stereoacuity
11:45 Petrov, Glennerster – Disparity gradient between the target and its surroundings defines depth…
12:00 Watt, Akeley, Banks – Focus cues to display distance affect perceived depth from disparity

Eye Movements – Cognitive – South Hall
10:45 Castelhano, Henderson – Flashing scenes and moving windows: An effect of initial scene gist on eye…
11:00 Geisler, Perry, Najemnik – Visual search: Gaze contingent displays and optimal search strategies
11:15 Caspi, Beutter, Eckstein – The accumulation of visual information driving the 1st saccade during visual…
11:30 Körner, Gilchrist – Target tagging in visual search
11:45 Gersch, Kowler, Dosher – Dynamic allocation of visual attention during the execution of sequences of…
12:00 Beintema, Van Loon, Hooge, Van den Berg – Saccadic decision-rate distributions reveal competition process

Poster Session – Municipal Auditorium
Authors Present: 1:30 – 3:30 pm
Topics: Surfaces; Space Perception; Object Recognition; Natural Images; Motion 3: Low-level, Time, Biological, Locomotion; Attention 3; Adaptation/Aftereffects

Shape and Depth – North Hall
3:30 Fleming, Torralba, Dror, Adelson – How image statistics drive shape-from-texture and shape-from-specularity
Can we see the shape of a mirror?

Biological Motion – South Hall
3:30 Grossman, Kim, Blake – Brain activity reflects perceptual learning of point-light biological motion
Biological motion perception is impaired
Recommended publications
  • Keith A. Schneider orcid.org/0000-0001-7120-3380 [email protected] | (302) 300–7043 77 E
    Keith A. Schneider
    orcid.org/0000-0001-7120-3380 | [email protected] | (302) 300–7043
    77 E. Delaware Ave., Room 233, Newark, DE 19716

    Education
    PhD, Brain & Cognitive Sciences, University of Rochester, Rochester, NY, 2002
    MA, Brain & Cognitive Sciences, University of Rochester, Rochester, NY, 2000
    MA, Astronomy, Boston University, Boston, MA, 1996
    BS, Physics, California Institute of Technology, Pasadena, CA, 1994

    Positions
    2020–      Professor, Department of Psychological & Brain Sciences, University of Delaware, Newark, DE
    2016–      Director, Center for Biomedical & Brain Imaging, University of Delaware, Newark, DE
    2016–2020  Associate Professor, Department of Psychological & Brain Sciences, University of Delaware, Newark, DE
    2010–2016  Director, York MRI Facility, York University, Toronto, ON, Canada
    2010–2016  Associate Professor, Department of Biology and Centre for Vision Research, York University, Toronto, ON, Canada
    2009–2010  Assistant Professor, Department of Psychological Sciences, University of Missouri, Columbia, MO
    2005–2008  Assistant Professor (Research), Rochester Center for Brain Imaging and Center for Visual Science, University of Rochester, Rochester, NY
    2002–2005  Postdoctoral Fellow, Princeton University, Princeton, NJ
    2002       Postdoctoral Fellow, University of Rochester, Rochester, NY

    Funding
    NIH/NEI 1R01EY028266, 2018–2023, "Directly testing the magnocellular hypothesis of dyslexia", $1,912,669, PI
    NIH COBRE 2P20GM103653-06, 2017–2022, "Renewal of Delaware Center for Neuroscience Research", $3,372,118, Co-PI (Core Director)
    NVIDIA GPU Grant, 2016, Titan X Pascal hardware, $1,200, PI
    NSERC Discovery Grant (Canada), 2012–2016, "Structural and functional imaging of the human thalamus", $135,000, PI
    NSERC CREATE (Canada), 2011–2016, "Vision Science and Applications", $1,650,000, one of nine co-PIs
  • Prediction and Postdiction
    BEHAVIORAL AND BRAIN SCIENCES (2008) 31, 179–239 Printed in the United States of America doi: 10.1017/S0140525X08003804 Visual prediction: Psychophysics and neurophysiology of compensation for time delays Romi Nijhawan Department of Psychology, University of Sussex, Falmer, East Sussex, BN1 9QH, United Kingdom [email protected] http://www.sussex.ac.uk/psychology/profile116415.html Abstract: A necessary consequence of the nature of neural transmission systems is that as change in the physical state of a time-varying event takes place, delays produce error between the instantaneous registered state and the external state. Another source of delay is the transmission of internal motor commands to muscles and the inertia of the musculoskeletal system. How does the central nervous system compensate for these pervasive delays? Although it has been argued that delay compensation occurs late in the motor planning stages, even the earliest visual processes, such as phototransduction, contribute significantly to delays. I argue that compensation is not an exclusive property of the motor system, but rather, is a pervasive feature of the central nervous system (CNS) organization. Although the motor planning system may contain a highly flexible compensation mechanism, accounting not just for delays but also variability in delays (e.g., those resulting from variations in luminance contrast, internal body temperature, muscle fatigue, etc.), visual mechanisms also contribute to compensation. Previous suggestions of this notion of “visual prediction” led to a lively debate producing re-examination of previous arguments, new analyses, and review of the experiments presented here. Understanding visual prediction will inform our theories of sensory processes and visual perception, and will impact our notion of visual awareness.
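The delay-compensation arithmetic described above can be made concrete with a minimal sketch; the numbers (a 100 ms visual latency, a 10 deg/s target) are illustrative, not from the paper:

```python
# If registration is delayed by `delay`, the registered position lags the true
# position by v * delay; extrapolating by the same amount cancels the error.

def registered_position(x0, v, t, delay):
    """Position registered at time t: the world state `delay` seconds earlier."""
    return x0 + v * (t - delay)

def extrapolated_position(x0, v, t, delay):
    """Registered position advanced by v * delay (constant-velocity prediction)."""
    return registered_position(x0, v, t, delay) + v * delay

v = 10.0      # target speed, deg/s (illustrative)
delay = 0.1   # visual latency, s (illustrative)
t = 1.0       # time of the judgment, s
true_pos = v * t

lag = true_pos - registered_position(0.0, v, t, delay)
print(lag)                                                 # 1.0 deg of uncompensated error
print(true_pos - extrapolated_position(0.0, v, t, delay))  # 0.0: prediction cancels the lag
```

The same arithmetic applies to motor delays; only the value of `delay` changes.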
  • The Flash-Lag Effect and Related Mislocalizations: Findings, Properties, and Theories
    Psychological Bulletin © 2013 American Psychological Association 2014, Vol. 140, No. 1, 308–338 0033-2909/14/$12.00 DOI: 10.1037/a0032899 The Flash-Lag Effect and Related Mislocalizations: Findings, Properties, and Theories Timothy L. Hubbard Texas Christian University If an observer sees a flashed (briefly presented) object that is aligned with a moving target, the perceived position of the flashed object usually lags the perceived position of the moving target. This has been referred to as the flash-lag effect, and the flash-lag effect has been suggested to reflect how an observer compensates for delays in perception that are due to neural processing times and is thus able to interact with dynamic stimuli in real time. Characteristics of the stimulus and of the observer that influence the flash-lag effect are reviewed, and the sensitivity or robustness of the flash-lag effect to numerous variables is discussed. Properties of the flash-lag effect and how the flash-lag effect might be related to several other perceptual and cognitive processes and phenomena are considered. Unresolved empirical issues are noted. Theories of the flash-lag effect are reviewed, and evidence inconsistent with each theory is noted. The flash-lag effect appears to involve low-level perceptual processes and high-level cognitive processes, reflects the operation of multiple mechanisms, occurs in numerous stimulus dimensions, and occurs within and across multiple modalities. It is suggested that the flash-lag effect derives from more basic mislocalizations of the moving target or flashed object and that understanding and analysis of the flash-lag effect should focus on these more basic mislocalizations rather than on the relationship between the moving target and the flashed object.
  • Shimojo's C.V
    Curriculum Vitae of Shinsuke Shimojo
    October 2, 2020

    Name: Shinsuke Shimojo, Ph.D.
    Address: 837 San Rafael Terrace, Pasadena, CA 91105
    Phone: (626) 403-7417 (home), (626) 395-3324 (office)
    Fax: (626) 792-8583 (office)
    E-Mail: [email protected]
    Birthplace: Tokyo
    Birthdate: April 1, 1955
    Nationality: Japanese

    Education:
    B.A., University of Tokyo, 1978
    M.A., Experimental Psychology, University of Tokyo, 1980
    Ph.D., Experimental Psychology, Massachusetts Institute of Technology, 1985

    Professional Experience:
    1981-1982 Visiting Scholar, Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA
    1982-1983 Research Affiliate, Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA
    1983-1985 Teaching Assistant, Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA
    1986-1987 Postdoctoral Fellow, Department of Ophthalmology, Nagoya University, Nagoya, Japan
    1986-1989 Postdoctoral Fellow, Smith-Kettlewell Eye Research Institute, San Francisco, CA
    1989-1997 Associate Professor, Department of Psychology / Department of Life Sciences (Psychology), Graduate School of Arts & Sciences, University of Tokyo, Tokyo, Japan
    1989-1993 Fellow, Department of Psychology, Harvard University, Cambridge, MA
    1993-1994 Visiting Scientist, Dept. of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA
    1997-1998 Associate Professor, Division of Biology / Computation & Neural Systems, California
  • Poster Session 1
    Part IX
    Poster Session 1

    SCALING DEPRESSION WITH PSYCHOPHYSICAL SCALING: A COMPARISON BETWEEN THE BORG CR SCALE® (CR100, CENTIMAX®) AND PHQ-9 ON A NON-CLINICAL SAMPLE
    Elisabet Borg and Jessica Sundell
    Department of Psychology, Stockholm University, SE-10691 Stockholm, Sweden <[email protected]>

    Abstract: A non-clinical sample (n = 71) answered an online survey containing the Patient Health Questionnaire-9 (PHQ-9), which rates the frequency of symptoms of depression (DSM-IV). The questions were also adapted for the Borg CR Scale® (CR100, centiMax®) (0–100), a general intensity scale with verbal anchors, from a minimal to a maximal intensity, placed in agreement with a numerical scale to give ratio data. The PHQ-9 cut-off score of ≥10 was found to correspond to ≥29 cM. Cronbach's alpha for both scales was high (0.87), and the correlation between the scales was r = 0.78 (rs = 0.69). Despite restriction of range, the cM scale works well for scaling depression, with added possibilities for valuable data analysis.

    According to the World Health Organisation (2016), depression is a growing health problem around the globe. Since it is one of the most prevalent emotional disorders, can cause considerable impairment in an individual's capacity to deal with his or her regular duties, and at its worst is a major risk factor for suicide attempt and suicide, correct and early diagnosis and treatment is very important (e.g., APA, 2013; Ferrari et al., 2013; Richards, 2011). Major depressive disorder (MDD) is characterized by distinct episodes of at least two weeks' duration involving changes in affect, cognition, and neurovegetative functions, and is described as being characterized by sadness, loss of pleasure or interest, fatigue, low self-esteem and self-blame, disturbed appetite and sleep, in addition to poor concentration (DSM-V, APA, 2013; WHO, 2016).
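The two statistics reported in the abstract, Cronbach's alpha and the inter-scale correlation, follow standard formulas. A sketch on synthetic data (the latent-factor setup and all numbers are invented for illustration, not the survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# synthetic respondents: 9 items driven by one latent severity factor plus noise
rng = np.random.default_rng(0)
severity = rng.normal(size=(200, 1))
items = severity + 0.7 * rng.normal(size=(200, 9))

alpha = cronbach_alpha(items)  # internal consistency of the 9 items
r = np.corrcoef(items.sum(axis=1), severity.ravel())[0, 1]  # scale vs. criterion
print(round(alpha, 2), round(r, 2))
```

With strongly intercorrelated items, alpha lands near the high values the abstract reports; weakening the factor loading drives both statistics down.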
  • Motion Extrapolation in Visual Processing: Lessons from 25 Years of Flash-Lag Debate
    5698 • The Journal of Neuroscience, July 22, 2020 • 40(30):5698–5705 Viewpoints Motion Extrapolation in Visual Processing: Lessons from 25 Years of Flash-Lag Debate Hinze Hogendoorn Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, 3010 Victoria, Australia Because of the delays inherent in neural transmission, the brain needs time to process incoming visual information. If these delays were not somehow compensated, we would consistently mislocalize moving objects behind their physical positions. Twenty-five years ago, Nijhawan used a perceptual illusion he called the flash-lag effect (FLE) to argue that the brain’s visual system solves this computational challenge by extrapolating the position of moving objects (Nijhawan, 1994). Although motion extrapolation had been proposed a decade earlier (e.g., Finke et al., 1986), the proposal that it caused the FLE and functioned to compensate for computational delays was hotly debated in the years that followed, with several alternative interpretations put forth to explain the effect. Here, I argue, 25 years later, that evidence from behavioral, computational, and particularly recent functional neuroimaging studies converges to support the existence of motion extrapolation mecha- nisms in the visual system, as well as their causal involvement in the FLE. First, findings that were initially argued to chal- lenge the motion extrapolation model of the FLE have since been explained, and those explanations have been tested and corroborated by more recent findings. Second, motion extrapolation explains the spatial shifts observed in several FLE condi- tions that cannot be explained by alternative (temporal) models of the FLE. Finally, neural mechanisms that actually perform motion extrapolation have been identified at multiple levels of the visual system, in multiple species, and with multiple dif- ferent methods.
  • 54 Motion Perception: Human Psychophysics
    54 Motion Perception: Human Psychophysics
    David Burr

    Perceiving motion is a fundamental skill of any visual system: to analyze the form and velocity of moving objects; to avoid collision with moving masses; to navigate through our environment; to analyze the three-dimensional structure of the world we move through; and much more. Much progress has been made over the past few decades to learn how humans and other animals analyze visual signals of objects in motion. This chapter concentrates primarily on advances in human psychophysics. For an excellent review of imaging studies of human and nonhuman primates—and the homologies between them—the interested reader is referred to the excellent chapter (chapter 55) by Orban and Jastorff. Over the last few decades many important conceptual and empirical advances have enormously expanded our understanding of the principles behind motion perception.

    …are combined by "quadrature pairing," a technique that produces direction selectivity (C). The outputs of the two classes of filters (sine and cosine) are then squared and summed together (C), to yield a smooth response, selective for direction and also weakly selective for speed. Figure 54.1D shows the resulting frequency response, clearly selective to a specific range of velocities. This model describes perception of real motion and also accounts for many other phenomena, including apparent or sampled motion, previously thought to reflect separate processes (e.g., Kolers, 1972): The integration in space and in time causes the discrete motion sequence to become continuous. It also explains some motion illusions, including the "fluted square wave" illusion (Adelson & Bergen, 1985) and the reverse-phi illusion of Anstis (1970).
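The squaring-and-summing of quadrature (sine/cosine) filter outputs described in the excerpt can be sketched for a 1-D drifting grating. The spatiotemporal frequencies and the stimulus below are arbitrary illustrative choices, not the chapter's actual filters:

```python
import numpy as np

def quadrature_energy(stim, fx, ft):
    """Motion energy of a (time x space) patch for one direction of drift.

    A cosine-phase and a sine-phase space-time filter form a quadrature pair;
    their responses are squared and summed, as in the Adelson-Bergen scheme.
    """
    T, X = stim.shape
    t = np.arange(T)[:, None]
    x = np.arange(X)[None, :]
    phase = 2 * np.pi * (fx * x - ft * t)
    even, odd = np.cos(phase), np.sin(phase)
    return np.sum(stim * even) ** 2 + np.sum(stim * odd) ** 2

# rightward-drifting grating, matched in frequency to the filters
T, X = 64, 64
t = np.arange(T)[:, None]
x = np.arange(X)[None, :]
stim = np.cos(2 * np.pi * (0.125 * x - 0.125 * t))

e_right = quadrature_energy(stim, fx=0.125, ft=0.125)   # matched direction
e_left = quadrature_energy(stim, fx=0.125, ft=-0.125)   # opposite direction
print(e_right > e_left)  # True: the energy output is direction selective
```

The quadrature pair is what makes the squared-and-summed output smooth: either filter alone would oscillate with the grating's phase.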
  • When Predictions Fail: Correction for Extrapolation in the Flash-Grab Effect
    Journal of Vision (2019) 19(2):3, 1–11

    When predictions fail: Correction for extrapolation in the flash-grab effect
    Tessel Blom, Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
    Qianchen Liang, Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
    Hinze Hogendoorn, Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia, and Helmholtz Institute, Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands

    Abstract: Motion-induced position shifts constitute a broad class of visual illusions in which motion and position signals interact in the human visual pathway. In such illusions, the presence of visual motion distorts the perceived positions of objects in nearby space. Predictive mechanisms, which could contribute to compensating for processing delays due to neural transmission, have been given as an explanation. However, such mechanisms have struggled to explain why we do not usually perceive objects extrapolated beyond the end of their trajectory. Advocates of this interpretation have proposed a "correction-for-extrapolation" mechanism to explain this: When the object motion ends abruptly, this mechanism corrects the overextrapolation by shifting…

    Introduction: A broad class of visual illusions demonstrate that motion and position signals interact in the human visual pathway. In such illusions, the presence of visual motion distorts the perceived positions of objects in nearby space, causing motion-induced position shifts. The most-studied illusion in this category is probably the flash-lag effect (Nijhawan, 1994), in which a stimulus is flashed next to (and aligned with) a moving object. The result is that the flash appears to lag behind the position of the moving object.
  • Predictive Feedback to the Primary Visual Cortex During Saccades
    Edwards, Grace (2014). Predictive feedback to the primary visual cortex during saccades. PhD thesis. http://theses.gla.ac.uk/5861/

    Copyright and moral rights for this thesis are retained by the author. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given. Glasgow Theses Service, http://theses.gla.ac.uk/, [email protected]

    Predictive feedback to the primary visual cortex during saccades
    Grace Edwards
    Thesis submitted to the University of Glasgow for the degree of Doctor of Philosophy, October 2014. Contains studies performed in the Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, Glasgow, G12 8QB. © Grace Edwards 2014

    Abstract: Perception of our sensory environment is actively constructed from sensory input and prior expectations. These expectations are created from knowledge of the world through semantic memories, spatial and temporal contexts, and learning. Multiple frameworks have been created to conceptualise this active perception; these frameworks will be referred to here as inference models. Three elements of inference models have prevailed in these frameworks: first, internal generative models of the visual environment; second, feedback connections that project the model's prediction signals to lower cortical processing areas, where they interact with sensory input; and third, prediction errors, which are produced when the sensory input is not predicted by the feedback signals.
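The three elements listed in the abstract (a generative model, feedback predictions, and prediction errors) can be illustrated with a minimal linear predictive-coding loop. This is a generic textbook-style sketch with invented weights, not a model from the thesis:

```python
import numpy as np

# Generative model: sensory input x is predicted as W @ r, where r is the
# internal representation. Feedback carries the prediction W @ r downward;
# the prediction error (x - W @ r) is fed back up to update r until the
# input is "explained away".
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))          # generative weights (assumed known here)
r_true = np.array([1.0, -0.5, 2.0])  # hidden causes that produced the input
x = W @ r_true                       # noiseless sensory input

r = np.zeros(3)                      # initial guess at the causes
for _ in range(2000):
    error = x - W @ r                # prediction error at the lower level
    r += 0.02 * W.T @ error          # error-driven update of the estimate

print(r)  # converges toward r_true; the prediction error shrinks to ~0
```

The update is gradient descent on the squared prediction error; in hierarchical versions the same error/prediction exchange is stacked across cortical levels.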
  • Neuronal Latencies and the Position of Moving Objects
    Review, TRENDS in Neurosciences, Vol. 24, No. 6, June 2001

    Neuronal latencies and the position of moving objects
    Bart Krekelberg and Markus Lappe

    Neuronal latencies delay the registration of the visual signal from a moving object. By the time the visual input reaches brain structures that encode its position, the object has already moved on. Do we perceive the position of a moving object with a delay because of neuronal latencies? Or is there a brain mechanism that compensates for latencies such that we perceive the true position of a moving object in real time? This question has been intensely debated in the context of the flash-lag illusion: a moving object and an object flashed in alignment with it appear to occupy different positions. The moving object is seen ahead of the flash. Does this show that the visual system extrapolates the position of moving objects into the future to compensate for neuronal latencies? Alternative accounts propose that it simply shows that moving and flashed objects are processed with different delays, or that it reflects temporal integration in brain areas that encode position and motion.

    …flashed object, either because it does not move, or because its duration is too short to initiate extrapolation mechanisms. Hence, the flash is seen at its true position, but the moving object is seen at an extrapolated position (Fig. 2b). This extrapolated position is the position the moving object occupies 'now', not the position it occupied when the flash hit the retina: hence, the flash will appear to lag behind. If motion extrapolation compensates for neuronal latencies, the magnitude of the lag-effect should depend on the actual latency of the moving object. Hence, a crucial test manipulates the latency of the…
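The "crucial test" described in the excerpt turns on a quantitative contrast: the two accounts predict opposite effects of increasing the moving object's latency (e.g., by lowering its luminance). A sketch with invented latency values:

```python
# Predicted flash-lag magnitude under two accounts (all latencies invented).

def lag_extrapolation(v, latency_moving):
    """Extrapolation account: the object is extrapolated by its own latency,
    so the lag grows as the moving object's latency grows."""
    return v * latency_moving

def lag_differential(v, latency_flash, latency_moving):
    """Differential-latency account: the lag reflects only how much faster
    the moving object is processed than the flash."""
    return v * (latency_flash - latency_moving)

v = 10.0  # deg/s
for latency_moving in (0.06, 0.08, 0.10):  # e.g., manipulated via luminance
    print(lag_extrapolation(v, latency_moving),
          lag_differential(v, latency_flash=0.12, latency_moving=latency_moving))
```

Slowing the moving object's latency increases the predicted lag under extrapolation but decreases it under differential latency, which is what makes the manipulation diagnostic.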
  • Smooth Anticipatory Eye Movements Alter the Memorized Position of Flashed Targets
    Journal of Vision (2003) 3, 761–770, http://journalofvision.org/3/11/10/

    Smooth anticipatory eye movements alter the memorized position of flashed targets
    Gunnar Blohm, Center for Systems Engineering and Applied Mechanics and Laboratory of Neurophysiology, Université catholique de Louvain, Louvain-la-Neuve, Belgium
    Marcus Missal, Laboratory of Neurophysiology, Université catholique de Louvain, Brussels, Belgium
    Philippe Lefèvre, Center for Systems Engineering and Applied Mechanics and Laboratory of Neurophysiology, Université catholique de Louvain, Louvain-la-Neuve, Belgium

    Briefly flashed visual stimuli presented during smooth object- or self-motion are systematically mislocalized. This phenomenon is called the "flash-lag effect" (Nijhawan, 1994). All previous studies had one common characteristic, the subject's sense of motion. Here we asked whether motion perception is a necessary condition for the flash-lag effect to occur. In our first experiment, we briefly flashed a target during smooth anticipatory eye movements in darkness, and subjects had to orient their gaze toward the perceived flash position. Subjects reported having no sense of eye motion during anticipatory movements. In our second experiment, subjects had to adjust a cursor to the perceived position of the flash. As a result, we show that gaze orientation reflects the actual perceived flash position. Furthermore, a flash-lag effect is present despite the absence of motion perception. Moreover, the time course of gaze orientation shows that the flash-lag effect appeared immediately after the egocentric to allocentric reference frame transformation.
    Keywords: anticipation, smooth pursuit, saccades, localization, flash perception, flash-lag effect

    Introduction: One condition in which localization errors happen is during smooth object- or self-motion.