The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory

Total Pages: 16

File Type: PDF, Size: 1020 KB

Journal of Experimental Psychology: Learning, Memory, and Cognition, 2005, Vol. 31, No. 3, 396–411. Copyright 2005 by the American Psychological Association. 0278-7393/05/$12.00 DOI: 10.1037/0278-7393.31.3.396

The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory

Andrew Hollingworth
The University of Iowa

In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or replacement by another object from the same basic-level category. Change detection during online scene viewing was compared with change detection after a delay of 1 trial (Experiments 2A and 2B), a delay until the end of the study session (Experiment 1), or a delay of 24 hr (Experiment 3). There was little or no decline in change detection performance from online viewing to a delay of 1 trial or until the end of the session, and change detection remained well above chance after 24 hr. These results demonstrate that long-term memory for visual detail in a scene is robust.

Human beings spend their lives within complex environments—offices, parks, living rooms—that typically contain scores of individual objects. All of the visual detail within a scene cannot be perceived in a single glance, as high acuity vision is limited to a relatively small, foveal region of the visual field (Riggs, 1965), so the eyes and attention are oriented serially to local scene regions to obtain high resolution, foveal information from individual objects (see Henderson & Hollingworth, 1998, for a review). Visual scene perception is therefore extended over time and space as the eyes and attention are oriented from object to object. To construct a visual representation of a scene as a whole, visual information from previously fixated and attended objects must be retained in memory and integrated with information from subsequently attended objects.

Once constructed during viewing, how robustly are visual representations of scenes retained in memory? Early evidence from the picture memory literature suggested that visual memory for scenes exhibits exceedingly large capacity and is highly robust (Nickerson, 1965, 1968; Shepard, 1967; Standing, 1973; Standing, Conezio, & Haber, 1970). Shepard (1967) presented participants with 612 photographs of varying subject matter for 6 s each. Immediately after the study session, mean performance on a two-alternative recognition test was essentially perfect (98% correct). Even more impressive capacity was observed by Standing (1973), who presented each participant with 10,000 different photographs for 5 s each. On a two-alternative recognition test, mean percentage correct was 83%, suggesting the retention of approximately 6,600 pictures. In addition to such demonstrations of capacity, robust retention of pictorial stimuli has been observed over long durations, with above-chance picture recognition after a delay of almost 1 year (Nickerson, 1968).

These picture memory studies show that some form of representation is retained robustly from many individual pictures after a single trial of study, and this information is sufficient to discriminate studied pictures from pictures that were not studied. The distractors used in these experiments, however, were typically chosen to be highly different from studied images, making it difficult to identify the type of information supporting such feats of memory. Participants might have remembered studied pictures by maintaining visual representations (coding visual properties such as shape, color, orientation, texture, and so on), by maintaining conceptual representations of picture identity (i.e., scene gist, such as bedroom), or by maintaining verbal descriptions of picture content. It is possible that these latter types of nonvisual representation, and not visual memory, accounted for observations of high-capacity, robust picture memory (Chun, 2003; Simons, 1996; Simons & Levin, 1997).

Long-term memory (LTM) for the specific visual properties of objects in scenes was observed in a set of experiments examining the influence of scene contextual structure on object memory (Friedman, 1979; Mandler & Johnson, 1976; Mandler & Parker, 1976; Mandler & Ritchey, 1977; Parker, 1978). Participants reliably remembered token-specific details of individual objects within line drawings of natural scenes. However, the scenes were very simple, typically consisting of a handful of discrete objects; scene items were repeated many times; and verbal encoding was not controlled. Thus, memory for visual object detail could easily have been based on verbal descriptions generated for a small number of objects during the many repetitions of each scene.

Subsequent work has questioned whether LTM reliably retains the visual details of complex stimuli. In a frequently cited study, Nickerson and Adams (1979) tested memory for the visual form of a common object, the penny. Participants were shown a line drawing of the “head” side of a correct penny among 14 alternatives that differed from the correct penny in the surface details. Forty-two percent of participants chose the correct alternative, leading Nickerson and Adams to conclude that memory for pennies is largely schematic, retaining little visual detail. This conclusion must be qualified in two respects, however. First, memory encoding was not controlled; the authors assumed that pennies were familiar from everyday experience. Although pennies might be handled quite often, most people rarely attend to the surface details of a penny. Second, although fewer than half the participants (42%) chose the correct penny on the recognition test, the correct alternative was chosen at a rate well above the chance rate of 6.7%, and participants were more than twice as likely to choose the correct penny than to choose any particular distractor. Further, the distractors were highly similar to the correct alternative, often differing in only a single surface feature, such as the position of a text element. Thus, one could just as validly interpret the Nickerson and Adams results as demonstrating relatively accurate memory for the perceptual details of a common object.

More recently, the phenomenon of change blindness has been used to argue that scene representations in memory are primarily schematic and gist-based, retaining little, if any, specific visual information (Becker & Pashler, 2002; Irwin & Andrews, 1996; O’Regan, 1992; O’Regan & Noë, 2001; O’Regan, Rensink, & Clark, 1999; Rensink, 2000; Rensink, O’Regan, & Clark, 1997; Simons, 1996; Simons & Levin, 1997; Wolfe, 1999). In change blindness studies, a change is introduced into a scene during some form of visual disruption, such as a saccadic eye movement (e.g., Grimes, 1996; Henderson & Hollingworth, 1999, 2003b) or a brief interstimulus interval (ISI; e.g., Rensink et al., 1997). Poor change detection performance in these paradigms has led researchers to propose that a memory representation of the visual detail in a scene is either nonexistent (O’Regan, 1992; O’Regan & Noë, 2001), limited to the currently attended object (Rensink, 2000, 2002), or limited to the small number of objects (three or four) that can be maintained in visual […]

[…]ment. Theories claiming that memory for visual detail is either nonexistent (O’Regan, 1992; O’Regan & Noë, 2001) or limited to the currently attended object (Rensink, 2000, 2002) predict that such changes should be undetectable. Yet, participants detected these subtle changes to previously attended objects at a rate significantly above the false-alarm rate, demonstrating that visual representations do not necessarily disintegrate upon the withdrawal of attention (see also Hollingworth, Williams, & Henderson, 2001). Subsequent work has demonstrated that verbal encoding could not account for accurate memory for the visual details of previously attended objects, as change detection performance remained accurate in the presence of a verbal working memory load and articulatory suppression (Hollingworth, 2003, 2004).

In addition to testing object memory during online scene viewing, Hollingworth and Henderson (2002) examined longer-term retention of visual object representations. For scenes in which the target was not changed during initial viewing, a delayed, two-alternative forced-choice test was administered after all scenes had been viewed, introducing a delay of approximately 15–20 min, on average, between scene viewing and test. Participants were surprisingly accurate on this delayed test, performing above 80% correct both when discriminating targets from rotated distractors and when discriminating targets from different token distractors. These data provide preliminary evidence that memory for the visual details of individual objects in scenes is robust not only during the online viewing of a scene but also over time scales directly implicating LTM, as tested in the picture memory literature.

To account for these and concomitant results, Hollingworth and […]

Author note: This research was supported by National Institute of Health Grant R03 MH65456. Correspondence concerning this article should be addressed to Andrew Hollingworth, Department of Psychology, The University of Iowa, 11 Seashore Hall E, Iowa City, IA 52242-1407. E-mail: andrew-hollingworth@uiowa.edu
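The capacity and chance figures quoted in this excerpt (roughly 6,600 pictures retained from 83% correct, and a 6.7% chance rate with 14 distractors) follow from standard forced-choice arithmetic. The sketch below reproduces them; the correction-for-guessing formula used here is a common convention and is not stated in the article itself:

```python
# Correction for guessing on an n-alternative forced-choice test (a standard
# convention, assumed here; the article excerpt only quotes the results).

def retained_items(n_studied: int, p_correct: float, n_alternatives: int = 2) -> float:
    """Estimate items truly retained, assuming unretained items are
    guessed at chance (1 / n_alternatives)."""
    chance = 1.0 / n_alternatives
    return n_studied * (p_correct - chance) / (1.0 - chance)

# Standing (1973): 10,000 pictures, 83% correct on a two-alternative test
print(round(retained_items(10_000, 0.83)))  # -> 6600 pictures, as cited

# Nickerson & Adams (1979): chance of picking the correct penny among
# 15 alternatives (1 correct drawing + 14 distractors)
print(round(100 / 15, 1))  # -> 6.7 (%), the chance rate cited
```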
Recommended publications
  • Functions of Memory Across Saccadic Eye Movements
    Functions of Memory Across Saccadic Eye Movements
    David Aagten-Murphy and Paul M. Bays, University of Cambridge, Cambridge, UK. E-mail: [email protected]
    © Springer Nature Switzerland AG 2018. Curr Topics Behav Neurosci. DOI 10.1007/7854_2018_66

    Contents: 1 Introduction; 2 Transsaccadic Memory (2.1 Other Memory Representations; 2.2 Role of VWM Across Eye Movements); 3 Identifying Changes in the Environment (3.1 Detection of Displacements; 3.2 Object Continuity; 3.3 Visual Landmarks; 3.4 Conclusion); 4 Integrating Information Across Saccades (4.1 Transsaccadic Fusion; 4.2 Transsaccadic Comparison and Preview Effects; 4.3 Transsaccadic Integration; 4.4 Conclusion); 5 Correcting Eye Movement Errors (5.1 Corrective Saccades; 5.2 Saccadic Adaptation; 5.3 Conclusion); 6 Discussion (6.1 Optimality; 6.2 Object Correspondence; 6.3 Memory Limitations); References

    Abstract: Several times per second, humans make rapid eye movements called saccades which redirect their gaze to sample new regions of external space. Saccades present unique challenges to both perceptual and motor systems. During the movement, the visual input is smeared across the retina and severely degraded. Once completed, the projection of the world onto the retina has undergone a large-scale spatial transformation. The vector of this transformation, and the new orientation of the eye in the external world, is uncertain. Memory for the pre-saccadic visual input is thought to play a central role in compensating for the disruption caused by saccades. Here, we review evidence that memory contributes to (1) detecting and identifying changes in the world that occur during a saccade, (2) bridging the gap in input so that visual processing does not have to start anew, and (3) correcting saccade errors and recalibrating the oculomotor system to ensure accuracy of future saccades.
  • Binding of Visual Features in Human Perception and Memory
    BINDING OF VISUAL FEATURES IN HUMAN PERCEPTION AND MEMORY. Snehlata Jaswal. Thesis presented for the degree of Doctor of Philosophy, The University of Edinburgh, 2009. Dedicated to my parents.

    DECLARATION: I declare that this thesis is my own composition, and that the material contained in it describes my own work. It has not been submitted for any other degree or professional qualification. All quotations have been distinguished by quotation marks and the sources of information acknowledged. Snehlata Jaswal, 31 August 2009.

    ACKNOWLEDGEMENTS:
    Adamant queries are a hazy recollection / Remnants are clarity, insight, appreciation / A deep admiration, love, infinite gratitude / For revered Bob, John, and Jim
    One a cherished guide, ideal teacher / The other a personal idol, surrogate mother / Both inspiring, lovingly instilled certitude / Jagat Sir and Ritu Ma’am
    A sister sought new worlds to conquer / A daughter left, enticed by enchanter / Yet showered me with blessings multitude / My family, Mum and Dad
    So many more, whispers the breeze / Without whom, this would not be / Mere mention is an inane platitude / For treasured friends forever

    CONTENTS: Abstract; List of Figures; List of Abbreviations
  • Visual Scene Perception
    Visual Scene Perception
    Helene Intraub, University of Delaware, Newark, Delaware, USA

    CONTENTS: Introduction; Scene comprehension and its influence on object detection; Boundary extension; Transsaccadic memory; Perception of ‘gist’ versus details

    When studying a scene, viewers can make as many as three or four eye fixations per second. A single fixation is typically enough to allow comprehension of a scene. What is remembered, however, is not a photographic replica, but a more abstract mental representation that captures the ‘gist’ and general layout of the scene along with a limited amount of detail.

    INTRODUCTION: The visual world exists all around us, but physiological constraints prevent us from seeing it all at once. Eye movements (called saccades) shift the position of the eye as quickly as three or four times per second. Vision is suppressed during […] a single glimpse without any bias from prior context. Immediately after viewing a rapid sequence like this, viewers participate in a recognition test. The pictures they just viewed are mixed with new pictures and are slowly presented one at a time. The viewers’ task is to indicate which pictures they remember seeing before. Following rapid presentation, memory for the previously seen pictures is rather poor. For example, when 16 pictures were presented in this way, moments later, viewers were only able to recognize about 40 to 50 per cent in the memory test. One explanation is that they were only able to perceive about 40 to 50 per cent of the scenes that had flashed by and those were the scenes they remembered.
  • Retinotopic Memory Is More Precise Than Spatiotopic Memory
    Retinotopic memory is more precise than spatiotopic memory
    Julie D. Golomb and Nancy Kanwisher, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139. Edited by Tony Movshon, New York University, New York, NY, and approved December 14, 2011 (received for review August 10, 2011)

    Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic […]

    […] possible that stability could be achieved on the basis of retinotopic-plus-updating alone. Most current theories of visual stability favor some form of retinotopic-plus-updating, but they vary widely in how quickly, automatically, or successfully this updating might occur. For example, several groups have demonstrated that locations can be rapidly “remapped,” sometimes even in anticipation of an eye movement (20–22), but a recent set of studies has argued that updating requires not only remapping to the new location, but also extinguishing the representation at the previous location, and this latter process may occur on a slower time scale (23, 24). Moreover, it is an open question just how accurate we are at spatiotopic perception.
  • Copyright 2013 Jeewon Ahn
    THE CONTRIBUTION OF VISUAL WORKING MEMORY TO PRIMING OF POP-OUT. By JeeWon Ahn. Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Psychology in the Graduate College of the University of Illinois at Urbana-Champaign, 2013. Urbana, Illinois. Copyright 2013 JeeWon Ahn. Doctoral Committee: Associate Professor Alejandro Lleras, Chair; Professor John E. Hummel; Professor Arthur F. Kramer; Professor Daniel J. Simons; Associate Professor Diane M. Beck.

    ABSTRACT: Priming of pop-out (PoP) refers to the facilitation in performance that occurs when a target-defining feature is repeated across consecutive trials in a pop-out singleton search task. While the underlying mechanism of PoP has been at the center of debate, a recent finding (Lee, Mozer, & Vecera, 2009) has suggested that PoP relies on the change of feature gain modulation, essentially eliminating the role of memory representation as an explanation for the underlying mechanism of PoP. The current study aimed to test this proposition to determine whether PoP is truly independent of guidance based on visual working memory (VWM) by adopting a dual-task paradigm composed of a variety of both pop-out search and VWM tasks. First, Experiment 1 tested whether the type of information represented in VWM mattered in the interaction between PoP and the VWM task. Experiment 1A aimed to replicate the previous finding, adopting a design almost identical to that of Lee et al., including a VWM task to memorize non-spatial features. Experiment 1B tested a different type of VWM task involving remembering spatial locations instead of non-spatial colors.
  • Scene and Position Specificity in Visual Memory for Objects
    Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006, Vol. 32, No. 1, 58–69. Copyright 2006 by the American Psychological Association. 0278-7393/06/$12.00 DOI: 10.1037/0278-7393.32.1.58

    Scene and Position Specificity in Visual Memory for Objects
    Andrew Hollingworth, University of Iowa

    This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position.

    Keywords: visual memory, scene perception, context effects, object recognition

    Humans spend most of their lives within complex visual environments, yet relatively little is known about how natural scenes are visually represented in the brain. One of the central issues in […] that work on scene perception and memory often assumes the existence of scene-level representations (e.g., Hollingworth & Henderson, 2002).
  • Visual Memory Capacity in Transsaccadic Integration
    Exp Brain Res (2007) 180:609–628. DOI 10.1007/s00221-007-0885-4. RESEARCH ARTICLE

    Visual memory capacity in transsaccadic integration
    Steven L. Prime · Lia Tsotsos · Gerald P. Keith · J. Douglas Crawford
    Received: 14 June 2006 / Accepted: 9 January 2007 / Published online: 16 February 2007. © Springer-Verlag 2007

    Abstract: How we perceive the visual world as stable and unified suggests the existence of transsaccadic integration that retains and integrates visual information from one eye fixation to another eye fixation across saccadic eye movements. However, the capacity of transsaccadic integration is still a subject of controversy. We tested our subjects’ memory capacity of two basic visual features, i.e. luminance (Experiment 1) and orientation (Experiment 2), both within a single fixation (i.e. visual working memory) and between separate fixations (i.e. transsaccadic memory). Experiment 2 was repeated, but attention allocation was manipulated using attentional cues at either the target or distracter (Experiment 3). Subjects were able to retain 3–4 objects in transsaccadic memory for luminance and orientation; errors generally increased as […] by inputting a noisy extra-retinal signal into an eye-centered feature map. Our results suggest that transsaccadic memory has a similar capacity for storing simple visual features as basic visual memory, but this capacity is dependent both on the metrics of the saccade and allocation of attention.

    Keywords: Visual perception · Saccades · Visual working memory

    Introduction: A central question in cognitive neuroscience is how we perceive a unified and continuous visual world despite viewing it in a disjointed and discontinuous manner.
  • SINCLAIR-THESIS-2019.Pdf
    COGNITIVE MECHANISMS OF TRANSSACCADIC PERCEPTION. A Thesis Submitted to the College of Graduate and Postdoctoral Studies in Partial Fulfillment of the Requirements for the Degree of Master of Arts in the Department of Psychology, University of Saskatchewan, Saskatoon. By Amanda Sinclair. © Copyright Amanda Jane Sinclair, September 2019. All rights reserved.

    PERMISSION TO USE: In presenting this thesis/dissertation in partial fulfillment of the requirements for a Postgraduate degree from the University of Saskatchewan, I agree that the Libraries of this University may make it freely available for inspection. I further agree that permission for copying of this thesis/dissertation in any manner, in whole or in part, for scholarly purposes may be granted by the professor or professors who supervised my thesis/dissertation work or, in their absence, by the Head of the Department or the Dean of the College in which my thesis work was done. It is understood that any copying or publication or use of this thesis/dissertation or parts thereof for financial gain shall not be allowed without my written permission. It is also understood that due recognition shall be given to me and to the University of Saskatchewan in any scholarly use which may be made of any material in my thesis/dissertation.

    DISCLAIMER: Reference in this thesis/dissertation to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement, recommendation, or favoring by the University of Saskatchewan. The views and opinions of the author expressed herein do not state or reflect those of the University of Saskatchewan, and shall not be used for advertising or product endorsement purposes.
  • Transsaccadic Memory: Building a Stable World from Glance to Glance
    Chapter 10. TRANSSACCADIC MEMORY: BUILDING A STABLE WORLD FROM GLANCE TO GLANCE
    David Melcher, Oxford Brookes University, UK, and University of Trento, Italy; Concetta Morrone, San Raffaele University and Institute of Neuroscience of the National Research Council, Italy
    In: Eye Movements: A Window on Mind and Brain, edited by R. P. G. van Gompel, M. H. Fischer, W. S. Murray and R. L. Hill. Copyright © 2007 by Elsevier Ltd. All rights reserved.

    Abstract: During natural viewing, the eye samples the visual environment using a series of jerking, saccadic eye movements, separated by periods of fixation. This raises the fundamental question of how information from separate fixations is integrated into a single, coherent percept. We discuss two mechanisms that may be involved in generating our stable and continuous perception of the world. First, information about attended objects may be integrated across separate glances. To evaluate this possibility, we present and discuss data showing the transsaccadic temporal integration of motion and form. We also discuss the potential role of the re-mapping of receptive fields around the time of saccades in transsaccadic integration and in the phenomenon of saccadic mislocalization. Second, information about multiple objects in a natural scene is built up across separate glances into a coherent representation of the environment.
  • Understanding the Function of Visual Short-Term Memory: Transsaccadic Memory, Object Correspondence, and Gaze Correction
    Journal of Experimental Psychology: General, 2008, Vol. 137, No. 1, 163–181. Copyright 2008 by the American Psychological Association. 0096-3445/08/$12.00 DOI: 10.1037/0096-3445.137.1.163

    Understanding the Function of Visual Short-Term Memory: Transsaccadic Memory, Object Correspondence, and Gaze Correction
    Andrew Hollingworth and Ashleigh M. Richard, University of Iowa; Steven J. Luck, University of California, Davis

    Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here, the authors demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. The authors hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a task faced by the visual system thousands of times each day. In 4 experiments, memory-based gaze correction was accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load interfered with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system.

    Keywords: visual short-term memory, visual working memory, eye movements, saccades, gaze correction

    Human vision is active and selective.
  • Transsaccadic Integration Relies on a Limited Memory Resource
    Journal of Vision (2021) 21(5):24, 1–12

    Transsaccadic integration relies on a limited memory resource
    Garry Kong*, Lisa M. Kroell*, Sebastian Schneegans, David Aagten-Murphy, and Paul M. Bays, Department of Psychology, University of Cambridge, UK

    Saccadic eye movements cause large-scale transformations of the image falling on the retina. Rather than starting visual processing anew after each saccade, the visual system combines post-saccadic information with visual input from before the saccade. Crucially, the relative contribution of each source of information is weighted according to its precision, consistent with principles of optimal integration. We reasoned that, if pre-saccadic input is maintained in a resource-limited store, such as visual working memory, its precision will depend on the number of items stored, as well as their attentional priority. Observers estimated the color of stimuli that changed imperceptibly during a saccade, and we examined where reports fell on the […]

    […] necessarily means withdrawing it from others. To support detailed and stable scene perception across eye movement-induced displacements, it has been proposed that information from previous fixations can be used to supplement current foveal input in a process known as transsaccadic integration (Irwin & Andrews, 1996). Because transsaccadic integration relies on information from the recent past to facilitate performance in the present, an intuitive hypothesis is that visual working memory contributes to the process (Aagten-Murphy & Bays, 2019; Irwin, 1991; Prime, Vesia, & Crawford, 2011).
  • Thesis Document (1.403Mb)
    TRANSSACCADIC MEMORY OF MULTI-FEATURE OBJECTS, by Jerrold Jeyachandra. A thesis submitted to the Centre for Neuroscience Studies in conformity with the requirements for the degree of Master of Science, Queen’s University, Kingston, Ontario, Canada (September, 2017). Copyright © Jerrold Jeyachandra, 2017.

    Abstract: Our visual world is observed as a complete and continuous percept. However, the nature of eye movements (saccades) precludes this illusion at the sensory level of the retina. Current theories suggest that visual short-term memory (VSTM) may allow for this perceptual illusion through spatial updating of object locations. While spatial updating has been demonstrated to impose a cost on the precision of spatial memory, it is unknown whether saccades also influence feature memory. This thesis investigated whether there is a cost of spatial updating on VSTM of non-spatial features. To this end, participants performed comparisons of features (location, orientation, size) between two bars presented sequentially with or without an intervening saccade. In addition, depending on the block condition, they had to compare either one of the features or all three features, to test for memory load effects. Saccades imposed a cost on the precision of location memory of the first bar in addition to a direction-specific bias; this effect held with greater memory load. Orientation memory became less precise with saccades, and with greater memory load resulted in a remembered rotation of the first bar opposite to the direction of the saccade. Finally, after a saccade, participants consistently underestimated the size of the first bar in addition to being less precise; the precision effect did not hold with greater memory load.
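Several of the entries above appeal to "optimal integration" of pre- and post-saccadic information weighted by precision. The sketch below illustrates the generic precision-weighted averaging rule that phrase usually denotes; the function name and the numbers are illustrative assumptions, not an analysis from any of the listed studies:

```python
# Generic precision-weighted (inverse-variance-weighted) cue combination.
# Each estimate is weighted by its precision (1 / variance), so the more
# reliable source dominates and the combined variance shrinks.
# All values are illustrative, not taken from the studies listed above.

def integrate(mu_pre: float, sd_pre: float, mu_post: float, sd_post: float):
    """Combine two independent Gaussian estimates of the same feature,
    e.g. pre- and post-saccadic estimates of a stimulus property."""
    prec_pre, prec_post = sd_pre ** -2, sd_post ** -2
    w_pre = prec_pre / (prec_pre + prec_post)        # weight on pre-saccadic cue
    mu = w_pre * mu_pre + (1 - w_pre) * mu_post      # combined estimate
    var = 1.0 / (prec_pre + prec_post)               # combined variance (reduced)
    return mu, var

# Precise pre-saccadic estimate (sd = 1) vs. noisier post-saccadic one (sd = 2):
mu, var = integrate(mu_pre=10.0, sd_pre=1.0, mu_post=20.0, sd_post=2.0)
print(round(mu, 2), round(var, 2))  # -> 12.0 0.8
```

The combined estimate lands nearer the more precise cue, and its variance (0.8) is smaller than either input variance (1.0 and 4.0), which is the sense in which the integration is "optimal."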