TRANSSACCADIC MEMORY OF MULTI-FEATURE OBJECTS

by

Jerrold Jeyachandra

A thesis submitted to the Centre for Neuroscience Studies

In conformity with the requirements for

the degree of Master of Science

Queen’s University

Kingston, Ontario, Canada

(September, 2017)

Copyright © Jerrold Jeyachandra, 2017

Abstract

Our visual world is observed as a complete and continuous percept. However, the nature of eye movements, specifically saccades, precludes this illusion at the sensory level of the retina. Current theories suggest that visual short-term memory (VSTM) may allow for this perceptual illusion through spatial updating of object locations. While spatial updating has been demonstrated to impose a cost on the precision of spatial memory, it is unknown whether saccades also influence feature memory. This thesis investigated whether there is a cost of spatial updating on VSTM of non-spatial features. To this end, participants performed comparisons of features (location, orientation, size) between two bars presented sequentially with or without an intervening saccade. In addition, depending on the block condition, they compared either one of the features or all three features, to test for memory load effects. Saccades imposed a cost on the precision of location memory of the first bar in addition to a direction-specific bias; this effect held with greater memory load. Orientation memory became less precise with saccades, and with greater memory load resulted in a remembered rotation of the first bar opposite to the direction of the saccade. Finally, after a saccade, participants consistently underestimated the size of the first bar in addition to being less precise; the precision effect did not hold with greater memory load. Together, the current findings implicate a cost on feature memory with saccades, suggesting that the non-spatial features of an object are updated in memory along with its spatial location.


Co-Authorship

The research described in this thesis was conducted by Jerrold Jeyachandra under the supervision of Dr. Aarlenne Khan and Dr. Gunnar Blohm. Jerrold Jeyachandra finalized the design of this study based on a previously conducted pilot study. He then collected the data with the help of a research assistant, marked and analyzed the data, generated the results and figures, interpreted the data, and wrote the first draft of the manuscript. Dr. Khan assisted with statistical testing, and both Dr. Khan and Dr. Blohm helped with interpretation of the data and with finalizing the manuscript for submission to the Journal of Vision.


Acknowledgements

First, I’d like to thank my supervisor Dr. Gunnar Blohm for his invaluable guidance and support for my four-year foray into the world of research. Gunnar has provided me with opportunities to extend both my technical and research skills into multiple domains that I had never thought possible. I am also honored to have had the opportunity to participate within the international research community through

CoSMo and SfN, both of which I could have never imagined being part of prior to joining the lab. Thanks to his efforts, I have learned of new passions toward the technical aspects of working with data – inspiring my future pursuits.

I’d also like to thank Dr. Aarlenne Khan for the opportunity to take on the project that became this thesis and for entertaining my need to perform all sorts of analyses on this dataset we’ve collected.

During my time in the lab I have met some amazing individuals who have shaped my experience as a graduate student. Odelia Lee and lab foreigner Kevin Cho, my comrades during the earliest part of my research experience. Scott Murdison, a wonderful mentor and friend who has sparked my not-so-secret love for MATLAB. Sisi Xu, Alex Goettker, Brandon Caie, Jonathan Coutinho, and Parisa Khoozani, the lab team who’ve been a constant source of support and scientific discourse.

I also want to extend my appreciation toward some of my favourite people within the CNS. Matt Smorenburg, for the much-needed late-night coffee runs, for impromptu bar escapades, and for nailing every prediction in Game of Thrones. Joshua Moskowitz, for discussions about pretty much everything and listening to me complain about every piece of software I’ve worked with during lunch. Cindy Xiao, for being the one to goof off and design homes with, for long discussions about social issues, and for increasing my sushi intake tenfold.

Finally, I’d like to thank my family for their unwavering support despite the convoluted paths that I tend to, and will probably continue to, take in my life.


Table of Contents

Abstract ...... ii
Co-Authorship ...... iii
Acknowledgements ...... iv
List of Figures ...... vii
List of Abbreviations ...... viii
Chapter 1 Introduction ...... 1
1.1 Completeness of Visual Perception ...... 1
Chapter 2 Literature Review ...... 3
2.1 Visual Short-term Memory ...... 3
2.1.1 Storage Mechanism of Visual Working Memory ...... 3
2.1.2 Object Representation in Visual Short-term Memory ...... 7
2.1.3 Interaction between Attention and Visual Short-Term Memory ...... 9
2.2 Integrating Information Across Saccades ...... 10
2.2.1 Visual Remapping – Neurophysiological Evidence ...... 13
2.2.2 Visual Remapping - Behaviour ...... 17
2.3 Thesis Objectives and Predictions ...... 19
Chapter 3 Transsaccadic Memory of Multiple Spatially Variant and Invariant Object Features ...... 21
3.1 Abstract ...... 21
3.2 Introduction ...... 22
3.3 Methods ...... 24
3.3.1 Participants ...... 24
3.3.2 Apparatus ...... 25
3.3.3 Procedure ...... 25
3.3.4 Data Analysis ...... 28
3.4 Results ...... 29
3.4.1 Location Feature Memory is Influenced by Saccades ...... 29
3.4.2 Orientation Feature Memory is Influenced by Saccades ...... 32
3.4.3 Size Feature Memory is Influenced by Saccades ...... 34
3.4.4 Feature Memory Precision is Reduced but not Biased by Increasing Memory Load ...... 37
3.4.5 Order of Questions ...... 38
3.5 Discussion ...... 40
3.5.1 Saccades Induce Biases in the Remembered Location of an Object ...... 40
3.5.2 Non-spatial Features of the Object are also Influenced by the Intervening Saccade ...... 40
3.5.3 Specific Biases on Spatially-Invariant Features ...... 42
3.5.4 Saccades Increase Memory Load ...... 43
3.5.5 Eye-centered Representation of Remembered Object Location ...... 44
3.5.6 Underlying Neurological Mechanisms ...... 45
3.6 Conclusions ...... 46
Chapter 4 General Discussion ...... 47
4.1 Summary of Results and Interpretation ...... 47
4.2 Implications for Trans-saccadic Perception ...... 48
4.3 Future Directions ...... 50
4.3.1 Quantifying the Contributions of Saccades and Memory Load to Uncertainty ...... 50
4.3.2 Cross-Feature Binding Across Saccades ...... 51
4.4 Conclusion ...... 52
References ...... 53


List of Figures

Figure 1. Models of working memory ...... 4
Figure 2. Models of trans-saccadic perception ...... 11
Figure 3. Visual remapping ...... 17
Figure 4. Task stimuli and sequence for the single feature location experiment ...... 27
Figure 5. Location feature psychometric functions ...... 31
Figure 6. Orientation feature psychometric functions ...... 33
Figure 7. Size feature psychometric functions ...... 35
Figure 8. Question order ...... 39
Figure 9. Proposed model of spatial updating ...... 48


List of Abbreviations

TSM Trans-saccadic Memory

VSTM Visual Short-term Memory

FIT Feature Integration Theory

ERP Event Related Potential

WM Working Memory

CDA Contralateral Delay Activity

MD Mediodorsal Thalamus

LIP Lateral Intraparietal Area

FEF Frontal Eye Fields

SC Superior Colliculus

PSE Point of Subjective Equivalence

JND Just-Noticeable Difference

ANOVA Analysis of Variance

PEF Parietal Eye Fields

TMS Transcranial Magnetic Stimulation


Chapter 1

Introduction

1.1 Completeness of Visual Perception

The perception of a complete and continuous visual scene is a remarkable feat given the limitations presented by the eye. Our eyes are structured such that approximately 1-2° of the visual scene projected onto the retina is encoded with high acuity (Polyak, 1942; Wandell, 1995). Because of this, we do not acquire much information from a single fixation. Saccades, rapid darting eye movements, allow us to foveate at multiple locations to extract additional visual information from the environment (Mackay, 1973). This simple and frequent behaviour presents a difficult problem for the brain to solve at multiple levels of visual processing. Given the nature of saccades we should expect to observe discontinuous snapshots of the visual scene, contrary to what we experience. In addition, the projection of an object onto the retina shifts during a saccade, implying motion from a purely retinal point of view (Helmholtz, 1867). Despite this, we perceive the world as stable and unmoving; thus the brain must be able to distinguish retinal shifts due to external motion from motion incurred by saccades. Together, this suggests that the brain maintains an internal representation of the environment that is updated and integrated across saccades (Deubel, Schneider, & Bridgeman, 2002; Hayhoe, Lachter, & Feldman, 1991; Henriques, Klier, Smith, Lowy, & Crawford, 1998; Higgins & Rayner, 2015). This updated representation is termed trans-saccadic memory (TSM) (Deubel et al., 2002).

Much of the research into TSM has focused on how spatial information is maintained and updated across saccades (Prime et al., 2007; Hayhoe et al., 1991; Golomb & Kanwisher, 2012). This type of research has established that we can accurately point to a remembered object’s location after making a saccade. Neurophysiological evidence also supports the idea of a spatial updating process within higher level visual and motor planning centres (Duhamel, Colby, & Goldberg, 1992; Medendorp, Goltz, Vilis, & Crawford, 2003; Sommer & Wurtz, 2006; Umeno & Goldberg, 2001). However, object identities are not exclusively tied to their spatial coordinates; non-spatial features such as orientation, color, and shape are also integral to object identity. Prime et al. (2007) observed that low-level (luminance and orientation) features can be maintained across saccades accurately with capacities matching that of visual short-term memory (VSTM). Thus, it is apparent that we can extract feature information and maintain it in memory across saccades. What remains unclear is whether object features are independent from the updating process that saccades induce. Here, we will discuss the current work on VSTM of objects and how TSM extends this system with spatial updating.


Chapter 2

Literature Review

2.1 Visual Short-term Memory

Properties of VSTM have been investigated for quite some time prior to the emergence of trans-saccadic research. Phillips (1974) determined that VSTM was a distinct mechanism from fast decaying, yet precise, iconic memory. Following this original distinction, Baddeley & Logie (1999) proposed a multicomponent model of working memory (WM) of which VSTM is a subsystem. In this model, memory is separated across multiple domains: the visuospatial sketchpad (of which VSTM is part), episodic buffer, and verbal memory. An important property of this model is that memory domains are independent and thus do not interfere with each other. Consistent with this idea, Cocchini et al. (2002) observed that performing both a visual and verbal memory task did not reduce recall performance compared to performing either task alone. Furthermore, adding a verbal articulation task on top of verbal memory load impaired performance. This detrimental effect of within-modality interference and the absence of cross-modal interference supports the multicomponent model of memory (Allen, Hitch, & Baddeley, 2009; Baddeley & Hitch, 1974; Cocchini et al., 2002; Fougnie & Marois, 2011). Neuroimaging studies also show differential brain area activation across memory domains (Awh et al., 1996; Jonides et al., 1993; Smith et al., 1995). With VSTM established as its own system of representing information, studies have attempted to characterize its capacity and properties.

2.1.1 Storage Mechanism of Visual Working Memory

The mechanism by which information is stored in VSTM provides an interpretive framework for understanding how stored information may be influenced by saccades (Fig. 1).


Figure 1. Models of working memory. Taken from Bays et al., (2014). (a) Slot model of working memory where 3-4 items take up a slot of memory with equal precision (Cowan, 2001; Miller, 1956). The distribution shown below indicates equal precision with increasing set size with guessing occurring past 3-4 items. (b) Shared resource allocation, items utilize an equal amount of memory resources resulting in reduced precision with growing set sizes (Bays & Husain, 2008; Palmer, 1990). (c) Fixed-resolution slot model, slots containing a fixed resolution are allocated for up to 3-4 items (Zhang & Luck, 2008). Items encoded within multiple slots have greater precision. (d) Flexible resource allocation model, memory is modelled as a pool of resources that can be flexibly allocated to remembered items (Bays, Catalao, & Husain, 2009).

Historically, slot models were used to describe memory (Cowan, 2001; Miller, 1956;

Saaty & Ozdemir, 2003). This model posits that VSTM is organized as having a fixed number of slots that can be allocated to storing information. Critically, even when one item is stored, it receives only a slot of memory; thus the precision of one item is equivalent to the precision of two items. Evidence for a discrete, limited-capacity VSTM stems from early work using categorical stimuli (Luck & Vogel, 1997; Pashler, 1988; Vogel, Woodman, & Luck, 2001). People are found to remember up to 3-4 items (sometimes more, dependent on task) and fail to encode additional items. This rather simple framework of memory has been challenged due to an implicit assumption it carries (Macmillan & Creelman, 1991). Slot models assume perfect resolution for each item stored in memory and thus ignore the internal state of the memory that is held. This is largely because most of the earlier studies used a categorical change detection paradigm, so the effect of set size on precision could not be measured. For example, slot models would not predict false positives even if a distractor target is extremely similar to the remembered item when under maximum capacity. To test this, Wilken & Ma (2004) performed 9 experiments examining the precision with which color, orientation, and spatial frequency can be recalled from memory with increasing set sizes. They found that increasing set sizes decreased the precision of recall across all remembered features. Using a signal detection theory model of visual memory, they found that the data can be best explained when increasing set sizes was associated with increasing noise in the representation of the item in memory, thus decreasing precision. Their model did not need to incorporate an upper limit to memory, instead positing that noise contributes to a failure to recall items rather than a limited number of slots. This has led to a resource allocation model of memory in which a limited pool of resources is shared between each item in memory (Bays & Husain,

2008; Palmer, 1990). The resource allocation model of memory has been successfully applied to multiple visual features (van den Berg, Shin, Chou, George, & Ma, 2012; Zokaei, Gorgoraptis,

Bahrami, Bays, & Husain, 2011).

While the precision of memory provides a view into the internal state of remembered items, it does not give information about where errors originate from. Studying error distributions during memory tasks has provided additional insights into the representation of memory in the brain. In line with this idea, Zhang & Luck (2008) investigated the error distribution of a color memory task. Participants viewed a set of colored squares and had to recall the color of the item at the location indicated by a probe. Cuing with 70% reliability prior to stimulus presentation was also used; a resource allocation model predicts additional resource allocation to the cued item. The error distribution for color report was found to be a mixture of a Gaussian distribution with a uniform distribution component. The Gaussian error distribution indicates that the participant remembered the color, but was imprecise. The classical slot model predicts that precision remains the same across set sizes of 1-4 items. Inconsistent with this, precision decreased as set size increased from 1-4 items and remained constant from 4-6 items. Alternatively, if resource allocation was flexible, then colors that were cued should show a narrower Gaussian component; more allocated resources should result in better precision. Contrary to this, the width of the

Gaussian component was the same for both cued and un-cued conditions; indicating inflexible resource allocation. Furthermore, the uniform distribution was interpreted to have been due to guessing when set sizes exceeded capacity — thus, a total failure to encode an item into memory.

This was interpreted to suggest that an upper limit does exist, thus an extension to the slot model was proposed — a fixed-resolution slot model. In this model, each slot contains a fixed resolution that can be flexibly allocated. Thus, when one item is stored in memory, all slots can be used to represent the item; conferring additional precision in recall. This model could explain both the increasing width of the error distribution, the apparent capping of this growth, and the growing uniform distribution with larger set sizes. Fixed-resolution slot models are supported by neural models of visual working memory with reverberant excitatory networks where weak allocation cannot be supported due to competitive all-or-none attractor dynamics (Wang, 2001). Bays et al.

(2009) disputed the interpretation of the former study, suggesting that a resource model does in fact allow for the uniform component of the observed mixture distribution. They were able to reproduce the mixture distribution observed by Zhang & Luck (2008) but posited that, in addition to the memory load imposed by the colors, identifying which of the remembered colors to report based on a spatial cue imposed an additional location memory load as well. This was confirmed by an analysis of the error distribution for targets that were not probed. Participants had a tendency to report un-probed targets, suggesting that the apparent uniform distribution was a result of mis-binding between location and color features, rather than uninformed guessing. Resource-type models can be implemented by population coding and allocation of a finite pool of neurons to the representation of memorized information (Dayan & Abbott, 2001). More recent neurally inspired models take advantage of higher-order structured representations (Brady & Tenenbaum, 2013), hybrid slot/resource implementations (Swan & Wyble, 2014) and interference (Oberauer, Lewandowsky, Farrell, Jarrold, & Greaves, 2012). A detailed discussion of these models is outside the scope of this thesis.
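To make the mixture-model analyses described above concrete, the following is a minimal sketch, in Python, of how a memory-plus-guessing mixture can be fit to recall errors from a circular (color-wheel) report task by maximum likelihood. This is illustrative only, not the code used in any of the cited studies; the parameter names kappa (memory precision) and p_mem (probability that the item was held in memory) are our own.

# Minimal sketch (not from the cited studies): fit a von Mises + uniform
# mixture to circular recall errors by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0  # modified Bessel function of order zero

def negative_log_likelihood(params, errors):
    # params = (kappa, p_mem): memory precision and probability of recall from memory
    kappa, p_mem = params
    von_mises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    return -np.sum(np.log(p_mem * von_mises + (1.0 - p_mem) * uniform))

def fit_mixture(errors):
    # errors: recall errors in radians, wrapped to (-pi, pi]
    result = minimize(negative_log_likelihood, x0=[5.0, 0.8], args=(errors,),
                      bounds=[(0.01, 200.0), (0.001, 0.999)])
    return result.x  # (kappa_hat, p_mem_hat)

# Example with simulated data: 70% noisy recalls, 30% uniform guesses.
rng = np.random.default_rng(0)
simulated = np.concatenate([rng.vonmises(0.0, 10.0, 700),
                            rng.uniform(-np.pi, np.pi, 300)])
kappa_hat, p_mem_hat = fit_mixture(simulated)
print(f"kappa = {kappa_hat:.2f}, p_mem = {p_mem_hat:.2f}")

In this framing, a narrower fitted circular-Gaussian component (larger kappa) corresponds to more precise memory, while a larger uniform weight (smaller p_mem) corresponds to more guessing or, as Bays et al. (2009) argue, to mis-binding errors.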

2.1.2 Object Representation in Visual Short-term Memory

It has long been established that short-term memory has a capacity of approximately 3-4 items with some variance contingent on the task being performed (for an extensive review:

Cowan, 2001). Note that many of the studies described below assume discrete capacities for memory in their interpretation and analysis of VSTM; we will come back to this point later. In the original study of visual WM capacity, Sperling (1960) had participants remember alphanumeric characters and extracted a capacity of 4-5 items. However, alphanumeric objects provide an opportunity for both visual and verbal memory systems to encode information. To better characterize visual WM, Vogel et al. (2001) had participants perform a series of 15 experiments involving sequential comparisons of object identities. Participants remembered 3-4 objects, irrespective of the number of features defining object identity. This has been used to support the idea that visual information is stored as integrated objects (Luck & Vogel, 1997). Further evidence for an integrated object hypothesis has been provided by electrophysiological studies. Contralateral delay activity (CDA) over PO7/PO8 electrodes appears during the retention period of a VSTM task and correlates in amplitude with individual memory capacity of visual objects (Jolicœur, Brisson, & Robitaille, 2008; Klaver, Talsma, Wijers, Heinze, & Mulder, 1999;

Luria, Balaban, Awh, & Vogel, 2016; Luria, Sessa, Gotler, Jolicœur, & Dell’Acqua, 2010;

McCollough, Machizawa, & Vogel, 2007). With this measure, Luria & Vogel (2011) examined

CDA amplitude correlations with the number of objects and with object complexity. Consistent with integrated object representations, they found CDA only correlated with the number of objects remembered rather than the number of features that encoded an object. However, the idea of integrated objects in VSTM has been contested; for example, Alvarez & Cavanagh (2004) observed that visual information load (the complexity of the stimulus) determined the upper limit for visual memory capacity. Furthermore, Wheeler & Treisman (2002) observed that VSTM capacity decreased with an increasing number of features along the same dimension (i.e., multiple colors) and was independent of the number of features along different dimensions (i.e., color and orientation). This finding has been replicated and extended to other experimental contexts

(Delvenne & Bruyer, 2004; Hardman & Cowan, 2015; Oberauer & Eichenberger, 2013; Olson &

Jiang, 2002; Xu, 2002).

The use of capacity as a measure for memory makes the implicit assumption that memory is allocated as slots. As discussed in Section 2.1.1, precision of a memory may provide additional insight into how memory resources are allocated; whether it is continuous or discrete. More recent studies suggest that precision decreases with an increasing number of features to encode

(Fougnie, Asplund, & Marois, 2010; Marshall & Bays, 2013). For example, Bays et al. (2011) observed that when presented with a large number of feature dimensions and set sizes, precision decreased with no fixed upper limits. Furthermore, the absolute error in color and orientation was not correlated; failure to encode one property of an object does not imply failure to encode the other. This finding is inconsistent with an integrated object representation, which predicts correlated errors since remembering an object implies remembering its constituent features.

Therefore, features of different dimensions may not compete for the same resource pool in the way that features along the same dimension do (Fougnie & Alvarez, 2011; Fougnie, Cormiea, &

Alvarez, 2013; Ma, Husain, & Bays, 2014).

Together, these more recent studies emphasize the importance of both the measures used to study visual WM and the task structure. It may be possible that the classical metric of visual WM capacity, which presumes a discrete, accuracy-based capacity, is insufficient for probing visual WM. Thus, it is imperative that studies carefully examine sources of memory errors, as demonstrated by Bays et al. (2009).

2.1.3 Interaction between Attention and Visual Short-Term Memory

Attempting to encode all items in the visual scene is a fruitless task; we may either fail to store important items or not allocate enough resources to store any single item with appreciable precision. Visuospatial attention is a mechanism by which we can selectively prioritize items in the visual scene. Attention has also been found to facilitate the consolidation of perceived objects into VSTM (Bundesen, 1990; Duncan & Humphreys, 1989; Wheeler & Treisman, 2002).

Schmidt et al. (2002) had participants perform a task where an array of colored squares was presented, followed after a delay interval by a probe at one of the locations. Performance was high when cued and probed locations were congruent compared to when they were incongruent. Furthermore, performance on incongruent trials decreased with increasing distance between the cued and probed locations. This suggests that, in addition to attention’s role in facilitating storage in VSTM, the probability of encoding falls as you move away from the centre of the attentional locus.

An intuitive extension of this idea is that attention acts as a gatekeeper for information that is encoded into VSTM (Awh, Vogel, & Oh, 2006). The phenomenon of the attentional blink, characterized as a failure to remember the second of two items presented in rapid serial succession, has been used to support this idea (Broadbent & Broadbent, 1987; Chun & Potter, 1995; Duncan, Ward, &

Shapiro, 1994; Raymond, Shapiro, & Arnell, 1992; Reeves & Sperling, 1986; Weichselgartner &

Sperling, 1987). This has been attributed to slow attentional processes that have switching times on the order of hundreds of milliseconds (Duncan et al., 1994). Electrophysiological findings agree; the inferior temporal cortex, associated with higher level visual processing, has been observed to suppress activation for ignored items on similar timescales (Chelazzi, Miller,

Duncan, & Desimone, 1993; Moran & Desimone, 1985). Further evidence for a top-down gating mechanism is provided by preferential encoding of the second item if it is especially salient to the participant, their own name for example (Shapiro, Caldwell, & Sorensen, 1997). This observation is especially remarkable as it suggests that information is processed first in the early and mid-level visual system prior to being selectively facilitated into VSTM by attention (Luck, Vogel,

& Shapiro, 1996; Maki, Frigen, & Paulson, 1997; Shapiro et al., 1997; Vogel, Luck, & Shapiro,

1998). Electrophysiological evidence in the form of a large N400 peak, an indicator of semantic context mismatch, occurs during attentional blinking even though the participant cannot report the second word (Luck et al., 1996). Neuroimaging approaches also support the idea of higher level processing in the medial temporal area despite attentional blinking (Marois, Yi, & Chun,

2004). The above research is founded on the assumption that attention is a top-down control mechanism for selecting information. Recent work suggests that even when a cue does not reliably predict an upcoming probe, reaction times are faster when cue and probe happen to be spatially congruent (Theeuwes, Belopolsky, & Olivers, 2009). Thus, it is likely that both endogenous and exogenous systems mediate attentional facilitation of visual processing. In summary, attention is a mechanism by which information is selected and encoded into VSTM.

This process is a critical component of how saccades influence stored representations of the environment.

2.2 Integrating Information Across Saccades

Early work investigating how the brain produces a perception of a complete environment theorized that the visual system supports an integrative visual buffer that accumulates information with each fixation (Rayner & McConkie, 1976). This intuitive idea posits that individual snapshots (fixations) are ‘spliced’ together over saccades to create a detailed spatiotopic map of the visual scene (Fig. 2). An initial study by Jonides et al. (1982) seemed to support this hypothesis. When the presentations of two dot-array patterns were separated by an intervening saccade, participants could report the spatial summation of the two arrays, providing evidence for accurate integration of spatial arrangements. However, subsequent work revealed that the integrative effect was attributable to the slow phosphor decay of the CRTs used in the original experiment (Bridgeman &

Mayer, 1983; Irwin, Yantis, & Jonides, 1983; Jonides et al., 1982; O’Regan & Lévy-Schoen,

1983; Rayner & Pollatsek, 1983). A separate set of experiments used a change detection paradigm where participants were presented with a change in the scene during a saccade or visual mask. Participants could not detect changes in the scene, arguing against the existence of a fine spatial map (Grimes, 1996; McConkie & Currie, 1996; Rensink, O’Regan, & Clark, 1997).


Figure 2. Models of trans-saccadic perception. Left image shows pre- and post-saccadic fixation points, yellow dashed circle shows where spatial attention is allocated. The visual integrative buffer model posits that all information is stored across saccades, thus a complete representation is maintained across memory. The VSTM model suggests only items that are attended to are remembered. Thus, a limited number of items is held across saccades.

Should we then conclude that no information is retained across saccades, and that the visual slate is wiped clean with each new fixation (O’Regan, 1992; O’Regan & Noë, 2001)? Not necessarily; evidence from parafoveal preview studies suggests that pre-saccadic information is encoded and facilitates post-saccadic processing (Henderson, Pollatsek, & Rayner, 1987;

Henderson & Siefert, 2001; Pollatsek, Rayner, & Collins, 1984; Schotter, Angele, & Rayner,

2012). Furthermore, Hayhoe et al. (1991) observed that spatial relationships between points are preserved across saccades with slightly reduced performance compared to the fixation condition.

How is this any different from the integrative visual buffer that was rejected earlier? A major difference stems from the finding that the representations carried through saccades are rather imprecise and limited. For example, ChANgINg CaSE of a parafoveally previewed word across saccades is not noticed yet it still facilitates foveal processing; this suggests that letter identity is maintained across saccades whereas precise low-level features are not preserved (Henderson,

1994; McConkie & Zola, 1979; Slattery, Angele, & Rayner, 2011). Together these findings paint a picture that more abstract information is maintained across saccades rather than the precise sensory information that is implied through the integrative visual buffer hypothesis. This alternative theory of visual constancy stems from early work done by Irwin (1991) where he observed that participants could make comparisons between two random dot patterns presented with an intervening saccade. The ability to accurately compare (same/different) the patterns decayed over time, decreased with increasing stimulus complexity (defined as number of items to be encoded), and was not exclusively tied to retinotopic coordinates. These characteristics were interpreted to suggest that information can be maintained across saccades but was mediated by an imprecise slow-decay mechanism, implicating the role of VSTM in trans-saccadic perception.

Furthermore, approximately 3-4 items can be held across saccades, consistent with capacity limits for VSTM (Irwin, 1992; Prime et al., 2007).

As discussed earlier, attention is a mechanism by which information is selected to be encoded into VSTM. Parallels between VSTM and TSM are seen in tasks where cuing prior to a peri-saccadic change in scene results in better than chance performance

(Irwin, 1998; Rensink et al., 1997). A critical aspect of saccades is that they elicit involuntary shifts of attention toward the saccade target (Kowler, Anderson, Dosher, & Blaser, 1995). Given the role of visual attention in facilitating consolidation into VSTM, we should expect saccade targets to be preferentially encoded in VSTM. Consistent with this idea, memory performance is better for saccade targets and items within their spatial locus (Bays & Husain, 2008; Henderson &

Hollingworth, 1999, 2003; Shao et al., 2010).

2.2.1 Visual Remapping – Neurophysiological Evidence

At the earliest stages of visual processing, information is exclusively represented in retinotopic coordinates. Since information stored in VSTM is not exclusively bound to retinotopic coordinates, a simple hypothesis is that the brain maintains, in higher level visual areas, a spatiotopic representation of objects that is invariant to saccades. Despite the intuitive appeal of this hypothesis, there is mounting evidence suggesting that both the early and late visual systems are functionally organized in retinotopic coordinate space (Gardner, Merriam, Movshon, & Heeger,

2008; Golomb, Chun, & Mazer, 2008; Golomb, Nguyen-Phuc, Mazer, McCarthy, & Chun, 2010;

Golomb & Kanwisher, 2012a, 2012b). However, there is some evidence for spatiotopic coding during inhibition of return (Mathôt & Theeuwes, 2010; Pertzov, Dong, Peich, & Husain, 2012), and when attention is diverted from the fovea (Crespi et al., 2011; d’Avossa et al., 2007). Golomb

& Kanwisher (2012b) reconcile some of these differences, suggesting that there is a native retinotopic coordinate system that can be converted into spatiotopic coordinates, with a cost in precision, when needed. Given that VSTM is not bound to retinotopic coordinates, a spatial updating process is needed to maintain a seemingly spatiotopic memory of remembered item locations. Theories about how saccades interact with this representation are informed by early studies on how the brain maintains visual stability despite sudden shifts in the retinal image.

In 1950, Roger Sperry proposed a theory suggesting that the brain accounts for shifts in the retinal image through a corollary discharge signal (Sperry, 1950). This theory posits that an efference copy of the saccade motor command is fed to higher level perceptual centres (Bridgeman, Van der Heijden, & Velichkovsky, 1994; Helmholtz, 1867). An alternative theory is that proprioceptive information can simply be used to determine the displacement of the eye across fixations (Sherrington, 1918). Early support for the former theory is provided by

Skavenski et al. (1972), who paralyzed the eye via weights attached to contact lenses and tested the motor outflow vs. sensory inflow hypothesis. They found that when load was applied to the eye, participants tended to report a stationary image being shifted in the direction opposite to the load, disrupting the perception of visual stability. These findings are consistent with the motor outflow hypothesis, because reliance on proprioception should not produce any perceived shift of the image when the eye is not rotated. Disruptions of perceptual stability are also seen via simple mechanical perturbations to the eye, such as light tapping (Bridgeman & Stark, 1991; Ilg,

Bridgeman, & Hoffmann, 1989). This is consistent with a theory of motor outflow since passive load or mechanical perturbation would not generate the motor signals needed to compensate for the shift in eye position; creating a feeling of movement. Although these findings provide evidence for the usefulness of corollary discharge, there is an important caveat to interpreting these results: compensation for a passive load is not equivalent to a planned saccade. Inactivation studies provide evidence for the neural implementation of saccadic corollary discharge and its origins. Sommer & Wurtz (2002) hypothesized that corollary discharge may originate from the mediodorsal thalamus (MD) due to afferent connections from oculomotor areas and ascending projections to frontal saccade planning regions. Inactivation of MD in monkeys results in failure to account for the updated eye position from the first saccade in a double-step paradigm. More specifically, when making a saccade to the second target, monkeys did not accurately update the motor plan or positional memory of the second target after making the first saccade. This implicates the role of the thalamus in relaying a corollary discharge signal to executive control areas of the brain. Inactivation of the thalamus also reduced the monkeys’ ability to detect variations in saccade amplitudes across trials (Sommer & Wurtz, 2004). Furthermore, patients


with thalamic lesions cannot accurately perform memory-guided saccades if any form of eye movement (smooth pursuit, suppressed vestibulo-ocular reflex, saccade) occurs during the retention interval (Gaymard, Rivaud, & Pierrot-Deseilligny, 1994). It is important to note that these studies link corollary discharge to action planning rather than perception per se. In a clever study by Cavanaugh et al. (2016), monkeys were trained to report peri-saccadic target displacements. They found that under MD inactivation the monkeys’ perception of their saccadic vector was biased, despite the invariance of the saccade amplitude to inactivation. This is the first finding of its kind that directly links corollary discharge with perception. What might the neural basis for this interaction be? Recent investigations into frontal regions suggest that corollary discharge may allow for a neural implementation of spatial updating (for review see:

Wurtz, 2008).

Perhaps the most convincing evidence for a spatial updating process underlying visual stability is the apparent shifting of receptive fields just prior to a saccade, called visual remapping

(Fig. 3) (Duhamel, Colby, & Goldberg, 1992). Duhamel et al. (1992) observed that spiking in the lateral intraparietal area (LIP), encoding a stimulus location in retinotopic space, shifts to the receptive field location that the object would occupy in post-saccadic retinotopic space. These findings have also been extended to the frontal eye fields (FEF) (Sommer & Wurtz, 2006; Umeno &

Goldberg, 2001) and superior colliculus (SC) (Dunn, Hall, & Colby, 2010; Walker, Fitzgibbon, &

Goldberg, 1995). Remapping in human cortex has been found in parietal eye fields (PEF) analogous to monkey LIP using functional imaging (Medendorp et al., 2003; Merriam, Genovese,

& Colby, 2003). However, displacement of activity does not necessarily indicate that receptive fields of neurons are transferred. An alternative idea proposed by Cavanagh et al. (2010) is that this neural behaviour is a transfer of activity in the form of location pointers rather than shifting receptive fields. An important implication of this hypothesis is that updating is a purely spatial process, with location pointers being updated whereas object representations are abstracted away.


Irrespective of the computation underlying visual remapping, it provides a neural mechanism for maintaining spatial information across saccades. Visual remapping is also a limited process; only items that are attended to are strongly represented in regions involved in remapping (Goldberg,

Gottlieb, & Kusunoki, 1998; Shen & Pare, 2014). Furthermore, the number of cells that display remapping activity decreases as you move to earlier visual areas (Nakamura & Colby, 2002). This supports the idea that remapping influences higher level representations such as VSTM rather than the low level feature information implied by the visual integrative buffer theory. Building on this idea, shifting activation in FEF is observed when a stimulus is flashed some time before a saccade, in cells whose post-saccadic receptive fields would have contained the now-absent stimulus (Umeno

& Goldberg, 2001). Together, this suggests that items that are attended are likely to be maintained across saccades whereas irrelevant items are weakly represented and thus lost – consistent with

VSTM. However, it is important to note that these studies do not necessarily link this type of activity, although mechanistically convenient, to perception. Transcranial magnetic stimulation

(TMS) application near the time of saccade over right PEF impairs TSM such that one item, instead of four, is stored across a saccade (Prime, Vesia, & Crawford, 2008); these findings have also been extended to FEF (Prime, Vesia, & Crawford, 2010). In summary, visual remapping is a mechanism by which information can be spatially updated, presumably using extra-retinal signals such as corollary discharge (Cavanaugh et al., 2016). Finally, we will close this review by examining how these neural mechanisms might inform behavioural studies of TSM.



Figure 3. Visual remapping. Taken from Sommer & Wurtz (2004). (A) Monkeys are probed during fixation at the receptive field (RF) of an FEF neuron. Just prior to a saccade a probe is flashed in the future retinotopic RF of the neuron. (B) Example FEF neuron, probing native RF elicits a response during fixation. Just prior to a saccade, probing the future RF now elicits a response whereas the current RF does not.

2.2.2 Visual Remapping - Behaviour

Early evidence for spatial updating was originally observed in Hayhoe et al. (1991), where the spatial relationship between items was maintained across saccades. However, this only informs us that information is maintained across saccades; it does not tell us about the specific properties underlying the spatial updating process. We draw on neurophysiological evidence to derive a high-level mechanistic explanation of TSM. With a primarily retinotopic coding scheme, this would imply that with each saccade, positional information is updated using extra-retinal signals to compensate for the saccade. Prime et al. (2006) tested this hypothesis by quantifying the effect of saccade metrics on remembered locations. They observed that, with horizontal saccades, pointing errors to remembered locations increased with saccade amplitude. Increasing error with larger saccade amplitude suggests a mechanism of signal-dependent noise in the generation or integration of corollary discharge. Consistent with this, the threshold for detection of peri-saccadic target shifts increases with saccade amplitude (Bansal, Jayet Bray, Peterson, &

Joiner, 2015; Jayet Bray, Bansal, & Joiner, 2016; Niemeier, Crawford, & Tweed, 2003). Together these findings suggest that information needs to be spatially updated across saccades and this presents a signal-dependent cost in performance in the form of reduced precision. These results could also be explained by uncertainty at the time of encoding; targets requiring larger saccade amplitudes are further in the periphery (Hess & Field, 1993). To our knowledge, no studies to date have delineated the origins of saccade amplitude-dependent uncertainty. Irrespective of this, these findings seem to preclude a spatiotopic representation underlying spatial memory. To further test this, Golomb & Kanwisher (2012) tested spatial memory performance for a spatially locked item and a retinotopically locked item. If spatiotopic representations underlie spatial memory, spatiotopic performance should not decline with an increasing number of saccades since the coordinates should be invariant to saccades. Contrary to this, performance for the spatially locked item declined as the number of intervening saccades increased. This supports the idea that items are coded in retinotopic space with a cost associated with spatial updating. Together the above literature suggests that spatial memory undergoes a costly spatial updating process with saccades, consistent with visual remapping found in higher level perceptual centres (Duhamel et al., 1992; Dunn et al., 2010; Sommer & Wurtz, 2006; Umeno & Goldberg, 2001; Walker et al.,

1995). So far, we have provided evidence for how spatial memory is influenced by saccades; however, object identities are also an important part of VSTM. As discussed in the feature binding section, object features and identities are bound to their location. If the representation of object features is tightly bound to the spatial locus, we should expect spatial updating to influence feature memory as well. Alternatively, if object feature information is loosely bound, we should not expect saccades to influence feature memory. Thus, we ask: is the spatial update process exclusively spatial?
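As an illustration of the signal-dependent updating account sketched above, the following toy simulation (ours, with made-up parameter values) updates a remembered eye-centered target location by a noisy estimate of the saccade vector whose noise grows with saccade amplitude, reproducing the qualitative pattern of larger post-saccadic localization errors for larger saccades (cf. Prime et al., 2006).

# Toy simulation (illustrative only): spatial updating with noise that
# scales with saccade amplitude. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([5.0, 0.0])            # remembered target, degrees, eye-centered
noise_per_degree = 0.05                  # SD of updating noise per degree of saccade
n_trials = 10000

for amplitude in (4.0, 8.0, 16.0):       # horizontal saccade amplitudes (degrees)
    saccade = np.array([amplitude, 0.0])
    # Corollary-discharge estimate of the saccade, corrupted by amplitude-scaled noise
    noisy_saccade = saccade + rng.normal(0.0, noise_per_degree * amplitude, (n_trials, 2))
    remembered_after = target - noisy_saccade     # updated eye-centered estimate
    true_after = target - saccade                 # correct post-saccadic location
    error = np.linalg.norm(remembered_after - true_after, axis=1)
    print(f"{amplitude:4.1f} deg saccade -> mean updating error {error.mean():.2f} deg")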

2.3 Thesis Objectives and Predictions

Work examining how saccades influence feature representations in VSTM is sparse. To our knowledge, only one study has examined the influence of saccades on object features (Prime et al., 2007). As expected, capacity limits matched those of VSTM; in addition, accuracy decreased as a function of saccadic amplitude. Unlike spatial memory studies, a model of target eccentricity

(peripheral encoding) could not explain the amplitude-error relationship in feature memory.

Finally, consistent with noisy spatial updating and tight binding between spatial and feature memory, a model of noisy signal-dependent extra-retinal signal updating could explain the data.

However, some gaps remain with respect to how saccades influence feature memory and its relationship to spatial identity; we address two of these in our thesis. Most items in our natural environment are described by a conjunction of multiple features, rather than a single feature. How might saccades influence feature memory if multiple features are bound together at a single spatial locus? In addition, precision of memory is an important measure that may provide insight into the properties of spatial updating across saccades; no studies to date have quantified how saccades influence feature precision. We aim to address these two points within this thesis.

To this end, we implemented a feature memory comparison task using objects with multiple feature conjunctions. We investigated how saccades influence feature memory by comparing psychometric performance on feature comparisons with and without intervening saccades. In addition, as saccades induce involuntary allocation of memory resources toward the task-irrelevant target (Shao et al., 2010), we tease out memory effects with single feature and feature conjunction memory conditions. With this design, we aim to make inferences about how saccades influence memory representations to gain insight into how object features are bound to their spatial locus. With our design, we make two predictions:

1. If a saccade influences feature memory across all features, this would imply that features hold a shared representation in VSTM, in addition to being tightly bound to an object’s spatial memory.

2. Alternatively, if a saccade only influences spatial memory, this would imply that feature memory is abstracted away from an object’s spatial representation in memory.


Chapter 3

Transsaccadic Memory of Multiple Spatially Variant and Invariant Object Features

3.1 Abstract

Trans-saccadic memory is a process by which remembered object information is updated across a saccade. To date, studies on trans-saccadic memory have used simple stimuli, i.e. a single dot or feature of an object. It remains unknown how trans-saccadic memory occurs for more realistic, complex objects with multiple features. An object’s location is a central feature for trans-saccadic updating as it is spatially variant, but other features such as an object’s size are spatially-invariant. How these spatially variant and invariant features of an object are remembered and updated across saccades is not well understood. Here we tested how well 14 participants remembered either three different features together (location, orientation, and size) or a single feature at a time, of a bar either while fixating or with an intervening saccade.

We found that the intervening saccade influenced memory of all three features, with consistent biases of the remembered location, orientation and size that were dependent on the direction of the saccade. These biases were similar whether participants remembered either a single feature or multiple features and were not observed with increased memory load (single vs. multiple features during fixation trials), confirming that these effects were specific to the saccade updating mechanisms.

We conclude that multiple features of an object are updated together across eye movements, supporting the notion that spatially invariant features of an object are bound to their location in memory.


3.2 Introduction

When we look around, we experience the visual scene as continuous. However, we receive a limited amount of information from a single fixation, necessitating multiple saccades to gather information from the scene. Given the disjointed nature of saccades we should expect to experience snapshots of the visual scene, in contrast to our actual experience (Hollingworth & Henderson,

1998; Mackay, 1973). Visual constancy is thought to be achieved through the formation of an internal representation that is preserved across saccades through mechanisms involving visual memory (Higgins & Rayner, 2015; Prime et al., 2007). To build an internal representation, information about viewed objects at each fixation in the visual scene must be represented in memory then updated across each saccade, referred to as trans-saccadic memory (Germeys, De

Graef, & Verfaillie, 2002; Henderson & Hollingworth, 1999; Henderson & Siefert, 1999; Irwin,

1991, 1992; Irwin & Andrews, 1996; Medendorp, 2011; Melcher & Colby, 2008; Palmer & Ames,

1992; Pollatsek, Rayner, & Collins, 1984; Prime, Niemeier, & Crawford, 2006; Prime et al., 2007).

Studies have demonstrated that trans-saccadic memory is similar to visual short-term memory (VSTM) and may involve the same mechanisms (Alvarez & Cavanagh, 2004; Bays &

Husain, 2008; Cowan, 2011; Cowan & Rouder, 2009; Donkin, Nosofsky, Gold, & Shiffrin, 2013;

Luck & Vogel, 1997; Pashler, 1988; Zhang & Luck, 2008). For example, Prime et al. (2007) demonstrated that the limit on the number of objects remembered across saccades was similar to that of visual short-term memory during fixation. Presumably, remembering multiple features of the same object rather than multiple objects should also show similar limits for trans-saccadic memory, as has been shown for VSTM (Bays et al., 2011; Fougnie et al., 2010; Oberauer &

Eichenberger, 2013; Wheeler & Treisman, 2002), but to our knowledge, no study has yet measured memory recall for multiple features of the same object across saccades. It has been suggested that, in VSTM, different features of an object are stored in independent memory stores

(Allen, Baddeley, & Hitch, 2006; Bays et al., 2011; Pasternak & Greenlee, 2005; Robertson, 2003; Treisman, 1998; Treisman & Schmidt, 1982; Wheeler & Treisman, 2002; Wolfe & Cave,

1999), but nevertheless may be bound by their location (Cave & Wolfe, 1990; McCollough,

1965; Reynolds & Desimone, 1999; Schneegans & Bays, 2017; Theeuwes, Kramer, & Irwin,

2011; Treisman & Gelade, 1980; Treisman, 1998; Wolfe & Cave, 1999).

Once a saccade is made, this remembered information must be updated to account for the saccade. It is important to note that updating is a spatial process and thus, the saccade requires updating the location of objects. Some studies have directly measured the updating of object location by measuring the remembered locations of objects across saccades. Hayhoe, Lachter, &

Feldman (1991) showed that we are able to integrate spatial arrangements across eye movements to determine shapes from dots. Prime, Niemeier & Crawford (2006) showed that we are just as good at determining the intersection point of two lines across saccades as we are during fixation.

Golomb & Kanwisher (2012) demonstrated that memory for the location of an object was better when the retinotopic location was recalled rather than the spatiotopic location, but in both cases, the remembered location of the object was relatively accurate and precise. Generally, these studies suggest that we are able to accurately update object (dots or even oriented lines) locations across saccades. Other studies have provided indirect evidence of updating in trans-saccadic memory by showing perceptual integration effects on spatially invariant features, e.g. color, at the same predefined spatial location across saccades (Irwin, 1992; Melcher, 2005; Melcher & Morrone,

2003; Wijdenes et al., 2015). Consistent with these findings, it has been hypothesized that trans-saccadic integration involves the remapping of not just object locations, but also object features

(Cavanagh, Hunt, Afraz, & Rolfs, 2010; Melcher & Colby, 2008; Prime et al., 2006; Prime, Vesia,

& Crawford, 2011).

While these studies on trans-saccadic integration have provided insight into the mechanisms of trans-saccadic memory and updating, they have tended to investigate only how single features of objects are remembered across saccades, e.g. just location or just orientation, but see (Prime et al., 2006). What remains unclear is whether and how complex objects with multiple feature conjunctions that include location are remembered and updated across saccades. Since objects in the world tend to have multiple spatially-invariant features (e.g. shape) besides location, if the object’s remembered location is updated during saccades, are the object’s other spatially-invariant remembered features also updated or influenced by the saccade? If the saccade influences the memory of all the features of the object, this would imply that the location is tightly bound to the other features in memory (Johnson, Verfaellie, & Dunlosky, 2008; Matthey, Bays, & Dayan,

2015; Melcher & Colby, 2008; Prime et al., 2011; Reynolds & Desimone, 1999; Schneegans &

Bays, 2016, 2017; Wheeler & Treisman, 2002). Alternatively, a saccade might only influence the remembered location. This would imply that the memories of the object’s spatially invariant features are loosely linked to, or even independent of, its location (Robertson, 2003).

To gain insight into this question, we used a psychometric testing paradigm to quantify how multiple features of an object (location, orientation and size) are remembered across saccades.

We tested how well participants remembered the conjunction of features or a single feature of an object with or without an intervening saccade. These data have been previously published in abstract form (Khan et al., 2013).

3.3 Methods

3.3.1 Participants

Fifteen participants took part in this experiment (8 male), of whom 11 were naïve to the purpose of the experiment. Their ages ranged from 21 to 36 years (mean = 23.6, SD = 3.8). All participants had normal or corrected-to-normal vision and provided written consent to take part in the experiment, which was pre-approved by the Health Sciences Research Ethics Board at

Queen’s University, Kingston, Canada.


3.3.2 Apparatus

Participants performed the experiment in a completely darkened room. A 20” Mitsubishi

DiamondPro 2070 CRT stood at a viewing distance of 30cm (16 by 12 inches, 2048 x 1536, refresh rate: 86Hz). Participants’ heads were stabilized using a head and chin rest as part of an

Eyelink tower mount set up (SR Research, Mississauga, ON, Canada). A setting of low contrast

(25%) and brightness (0%) values was used to prevent the participant from using visual cues from the monitor frame to perform the task; the monitor edge was not visible to the participants. Eye movements from the right eye were recorded using an Eyelink 1000 video-based recording system (SR Research, Mississauga, ON, Canada) at 1000 Hz. Stimuli were displayed using

Experiment Builder (SR Research, Mississauga, ON, Canada). Responses were keyed using an SR

Research Gamepad with two buttons (left and right). Prior to each block of trials, the eye tracker was calibrated by having the participant fixate a series of 9 positions on the display (8 surrounding the periphery of the display and the center).

3.3.3 Procedure

Each trial in the experiment consisted of two bars presented in sequential order. The participant’s task was to memorize the features of the first bar (location, orientation, and size) and to indicate how the second bar differed from the first. Figure 4 depicts the sequence of stimuli during a trial. Each trial began with a red dot (0.5° diameter) appearing at one of two locations

(9° right or 9° left from center screen, aligned horizontally) on a black background. After 750ms, a red bar appeared for 750ms (the first bar). The first bar’s features (location, orientation, and size) randomly varied from trial to trial, with a range of 1° left to 1° right in increments of 0.4° for location, 5° counter-clockwise to 5° clockwise in increments of 2° for orientation and 1.8° to

2.3° in increments of 0.1° for size. Next, the first bar was extinguished and the first dot remained on for a further 300ms. The second dot then appeared for 900ms at either the same location as the first dot (fixation trial) or at a second location, requiring an eye movement (saccade trial). There were an equal number of fixation and saccade trials and they were presented in randomized order.

A second bar (also red) then appeared for 750ms. The second bar’s features (location, orientation, and size) also randomly varied from trial to trial. A blank then appeared for 100ms and was followed by a question screen. Figure 4 depicts a trial in the single feature location condition, with the question ‘Was the second bar to the Left or Right of the first bar?’, which required the participant to indicate how the 2nd bar differed in location from the first, thus providing an estimation of the remembered features of the 1st bar. The question was presented in red text on a black background centred on the screen. The participant answered the question by pressing one of two buttons on a game controller (left and right corresponding to the 1st and 2nd options in the question). The question remained on the screen until the participant pressed the button. There was an inter-trial interval of 1000 ms before the next trial began, during which a blank screen was displayed.


Figure 4. Task stimuli and sequence for the single feature location experiment.

Each trial began with the presentation of a red fixation dot at one of two locations (9 degrees right or left from center) on a black background for 750 ms. Next, a bar appeared and remained illuminated for 750ms. After the bar was extinguished, the fixation dot remained illuminated for an additional 300 ms. A second fixation dot was then presented at either the same location as the first fixation (fixation trial – not shown) or at the second location requiring an eye movement (saccade trial - shown). After 900ms, the 2nd bar was presented for 750ms. The 1st and 2nd bars varied randomly in location (-1.6˚ to 1.6˚ from center screen in 0.4˚ intervals horizontally), orientation (8˚ counterclockwise to 8˚ clockwise from vertical in 2˚ intervals), and size (2.25˚-2.75˚ in length in 0.1˚ intervals). The fixation dot and 2nd bar were then replaced by a blank screen for 100ms followed by a question screen ‘Was the second bar to the Left or Right of the first bar?’ (Arial, font size 30, not shown to size). The question remained visible until the participant entered a response on the gamepad.

Besides the single feature location condition, participants performed a single feature orientation condition, with the following question (‘Was the second bar rotated Clockwise or

Counter-clockwise compared to the first bar?’), and a single feature size condition with the question (‘Was the second bar Shorter or Longer than the first bar?’). Participants also performed a multiple feature condition, in which they were asked to remember all three features together and then answered the three questions in sequence. Each new question appeared immediately after the participant had pressed the button to answer the previous one. The


presentation order of the questions was counterbalanced from block to block (6 blocks total) to equalize possible memory effects. Each participant performed 24 blocks of 96 trials each (6 location feature only, 6 orientation feature only, 6 size feature only, 6 multiple feature), for a total of 2304 trials. Participants performed all blocks in random order over the course of a few weeks, completing approximately 1-3 blocks per day.

3.3.4 Data Analysis

We collected a total of 34,560 trials. Saccade timing and position were automatically calculated offline using a saccade detection algorithm with a velocity criterion of 50°/s, and verified visually. Trials in which the tracker lost eye position, or in which participants made a saccade or a blink, or fixated incorrectly (>3˚ radius from the fixation dot) while the bars were displayed, were removed from the dataset, leaving 28,277 trials (82% of the total). For each feature, we calculated the difference between the 1st and 2nd bars, resulting in a range of -2° to 2° difference in location in 0.4° intervals, a range of -10° to 10° difference in orientation in 2° intervals, and a range of -0.5° to 0.5° difference in size in 0.1° intervals. Because of this difference calculation, the number of trials at the extreme ends of the ranges was very small, so we collapsed across the outer two intervals; e.g., for location, the -2° and the -1.6° intervals were collapsed to -1.8°.
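To make these offline analysis steps concrete, the velocity-criterion saccade detection and the feature-difference binning can be sketched roughly as follows. This is a minimal MATLAB sketch; the eye position variables (eyeX, eyeY) and bar location variables are assumed inputs, not the thesis code, and the visual verification step is omitted.

% Sketch 1: velocity-criterion saccade detection (50 deg/s) on 1000 Hz eye position data.
fs    = 1000;                               % sampling rate (Hz)
vx    = gradient(eyeX) * fs;                % horizontal eye velocity (deg/s), eyeX in deg
vy    = gradient(eyeY) * fs;                % vertical eye velocity (deg/s)
speed = hypot(vx, vy);                      % radial eye speed
isSac = speed > 50;                         % velocity criterion
sacOn  = find(diff([0; isSac(:)]) ==  1);   % saccade onset samples
sacOff = find(diff([isSac(:); 0]) == -1);   % saccade offset samples

% Sketch 2: signed feature difference between the two bars, collapsing the sparse
% outermost bins (shown for location; orientation and size follow the same pattern).
locDiff = bar2Loc - bar1Loc;                % -2 to 2 deg in 0.4 deg steps
locDiff(locDiff >  1.4) =  1.8;             % collapse the 1.6 and 2.0 deg bins
locDiff(locDiff < -1.4) = -1.8;             % collapse the -1.6 and -2.0 deg bins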

We fit each participant’s responses in each condition with psychometric functions in Matlab (Mathworks), using the psignifit 3.0 toolbox with the Bayesian inference fitting procedure (Frund, Haenel, & Wichmann, 2011) to estimate the parameters of the fit.

Psychometric functions were built with a Gaussian sigmoid, whose symmetry matched the expected properties of the task. No priors were imposed for either the mean or slope of the psychometric function. The priors on the upper and lower asymptotes were set to 0, as we did not expect lapses during the task; this was confirmed by the participants’ control data (e.g. Fig. 5A). We then extracted the point of subjective equality (PSE) and just noticeable difference (JND) from


each fit. The PSE was determined as the degree of stimulus change needed for the observer to have 0.5 choice probability. The JND is a measure of the degree of stimulus change required for an observer to reliably report the change in the correct direction; it was computed as half the difference between the stimulus changes needed to elicit 0.25 and 0.75 choice probabilities. We removed participant #2’s data from further analysis because they performed the task incorrectly for unknown reasons; their psychometric functions did not follow the same pattern as those of all other participants. We performed repeated measures ANOVAs and Bonferroni-Holm corrected paired t-tests on the PSEs and JNDs to compare saccade vs. fixation trials, left vs. right eye positions, and single vs. multiple feature conditions.
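For concreteness, the relationship between the fitted Gaussian sigmoid and these two summary measures can be written out explicitly. The following is a minimal MATLAB sketch in which mu and sigma stand for the fitted mean and slope parameters of the cumulative Gaussian, whatever the fitting toolbox names them.

% Sketch: PSE and JND from a cumulative-Gaussian psychometric function with lapse
% parameters fixed at 0 (mu and sigma are assumed fitted parameters).
psychFun = @(x) normcdf(x, mu, sigma);    % predicted choice probability
PSE = mu;                                 % stimulus difference giving 0.5 choice probability
x25 = norminv(0.25, mu, sigma);           % difference eliciting 0.25 choice probability
x75 = norminv(0.75, mu, sigma);           % difference eliciting 0.75 choice probability
JND = (x75 - x25) / 2;                    % equals norminv(0.75) * sigma (about 0.674 * sigma)

Similarly, the Bonferroni-Holm correction applied to the post-hoc paired t-tests is the standard step-down procedure, sketched here under the assumption that p is a vector of uncorrected p-values from one family of tests.

% Sketch: Bonferroni-Holm step-down correction for a family of post-hoc tests.
alpha = 0.05;
[pSorted, order] = sort(p(:));                  % smallest p first
m = numel(pSorted);
thresholds = alpha ./ (m - (1:m)' + 1);         % alpha/m, alpha/(m-1), ..., alpha
firstFail = find(pSorted > thresholds, 1);      % first test that fails its threshold
reject = false(m, 1);
if isempty(firstFail)
    reject(:) = true;                           % all tests survive correction
else
    reject(1:firstFail-1) = true;               % reject only tests before the first failure
end
reject(order) = reject;                         % map back to the original test order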

3.4 Results

3.4.1 Location Feature Memory is Influenced by Saccades We investigated the effects of an intervening saccade on feature memory. In order to quantitatively assess performance, participant data were fit with a psychometric curve. The psychometric functions of both the fixation and saccade condition for the left and right eye positions are shown for participant 1 (Fig. 5A) as well as across all participants (Fig. 5B) for the single feature location condition. The PSE value indicates when the location of the 2nd bar is perceived to be at the same remembered location of the first bar and thus reflects where the participant remembers the 1st bar to be. For example, a leftward shift in the PSE signifies that the

1st bar was remembered to be slightly to the left of where it was for this participant. The JND provides information on the degree to which an observer can distinguish between differences in stimuli; smaller numbers indicate a better ability to distinguish between changes in stimuli or less uncertainty. Thus, a larger JND means less precision in the memory of the 1st bar. We performed two-way repeated-measures ANOVAs with final eye position (left or right) and condition

(saccade or fixation) as factors, separately for PSE and JND. For saccade trials, the choice of final rather than initial eye position was arbitrary. For PSE, we found a significant

main effect of eye position (F(1,13)=5.2, p<0.05) and a significant interaction effect

(F(1,13)=12.3, p<0.01) but no significant condition effect (p>0.05). Bonferroni-Holm post-hoc paired t-tests showed no significant effect of eye position in the fixation condition (p>0.05); however, there was a significant difference for the saccade condition (t(13)=3.1, p<0.01).

Leftward saccades (to the left eye position) increased the tendency of participants to remember the 1st bar to be more right than it was (mean PSE = 0.66deg), whereas rightward saccades resulted in the opposite tendency (PSE = -0.9deg, Fig. 5B).


Figure 5. Location feature psychometric functions.

Psychometric functions are shown for the single feature location condition for a single participant (A) and across all participants (B), as well as for the multiple feature condition (C & D). Within each graph, the mean proportions (dots) and fitted psychometric functions (lines) are shown for each of the 4 trial types: left fixation (black solid line and circles), right fixation (gray solid line and circles), left saccade (black dashed line and open circles) and right saccade (gray dashed line and open circles).

For JND, there was a significant main effect of condition (F(1,13)=51, p<0.001) but no

significant main effect of eye position (p>0.05) nor any interaction effect (p>0.05). This analysis

shows that the JND was significantly larger in the saccade condition (mean JND = 1.014)

compared to the fixation condition (0.62) suggesting a degradation of memory of location with


saccades. Location memory was not worse with one eye position (left vs. right) compared to the other, which is not surprising as there is no reason to presume that object locations in one visual field are remembered less well than in the other for healthy participants. In summary, saccades induced biases in the remembered location of the object and degraded memory of its location.

We also tested whether saccades had the same effect on location under increased memory load (Fig. 5C – P1, Fig. 5D – all participants), i.e., when performing the multiple feature condition in which participants had to remember all three features within a given trial. Similar to the location only condition, there was a significant main effect of eye position on PSE (F(1,13)=13.9, p<0.01) and a significant interaction effect (F(1,13)=26, p<0.001) but no significant effect of condition (p>0.05). Post-hoc testing again revealed a significant difference between eye positions in the saccade condition (t(13)=4.4, p<0.001, mean left saccade PSE =

1.87deg, mean right saccade PSE = -1.74deg), but not in the fixation condition (p>0.05).

Similarly, there was a significant condition effect on JND (F(1,13)=49.5, p<0.001, mean fixation

JND = 0.889, mean saccade JND =1.695) but no eye position effect (p>0.05) nor any interaction effect (p>0.05). Thus, increasing memory load did not abolish the effect saccades had on remembering the location of the bar. Instead, saccades still resulted in a reduction of precision as well as a bias with respect to eye position, which was exaggerated with increased memory load.

3.4.2 Orientation Feature Memory is Influenced by Saccades

Next, we investigated the effect of saccades on spatially invariant features. We first examined whether orientation memory was affected by saccades in the single feature orientation condition (Fig. 6A – all participants).


Figure 6. Orientation feature psychometric functions.

Psychometric functions are shown for the single orientation (A) and multiple (B) feature conditions across all participants. Data are plotted in the same manner as Figure 5. The x-axis shows the relative orientation of the 1st and 2nd bars.

The intervening saccade between stimulus presentations did not significantly change the PSE

(p>0.05) of the psychometric curve compared to the fixation trials. However, there was a significant effect of eye position (F(1,13)=7.5, p<0.05) across both fixation and saccade trials, where participants remembered the bar to be oriented more clockwise at the (final) left eye


position (mean PSE = 0.47deg) and more counterclockwise for the (final) right eye position

(mean PSE = -0.16deg). There was no interaction effect (p>0.05).

There was a significant increase in JND during saccade trials (mean JND = 2.2) compared to the fixation trials (mean JND = 1.5, F(1,13)= 44, p<0.001), but no eye position effect (p>0.05), nor an interaction effect (p>0.05).

When orientation had to be recalled in conjunction with other features (Fig. 6B), we again found no main effect on PSE of saccades compared to fixation (p>0.05), nor a significant effect of eye position (p>0.05), however there was a significant interaction effect (F(1,13)=5.2, p<0.05). Post-hoc testing revealed significant differences in the PSE between left (mean PSE =

0.42deg) and right eye position (mean PSE = -0.31deg) for the saccade trials (t(13)=2.97, p<0.05) but not for fixation trials (p>0.05). Thus, like the feature only condition, there was a tendency to remember the bar to be oriented more clockwise for left eye position compared to the right eye position, but in the increased memory load condition, this effect only held for saccade trials.

For the JND, as in the single feature condition, we only found a significant main effect of condition, with larger JNDs for the saccade condition (mean JND = 2.85) compared to the fixation condition (mean JND = 2.35; F(1,13)=11.1, p<0.01).

To summarize, in both the single and multiple feature paradigms, saccades biased the remembered orientation of the 1st bar according to the direction of the saccade. This was also the case for the single feature fixation condition. In addition, saccades increased the uncertainty about the remembered orientation compared to fixation.

3.4.3 Size Feature Memory is Influenced by Saccades

We investigated the impact of saccades on memory of another spatially-invariant object feature, size (Fig. 7A). When participants were asked to remember just the size of the 1st bar, there was a significant effect of condition on PSE (F(1,13) = 37.2, p < 0.01), where the 1st bar was remembered to be smaller during saccade trials (-0.057°) than during fixation trials (0.05°).


There was no significant effect of eye position (p>0.05), but there was a significant interaction effect (F(1,13) = 4.8, p<0.05). Post-hoc testing revealed no difference between left and right fixation trials (p>0.05), whereas there was a significant but small difference between leftward (-0.12°) and rightward (0.01°) saccade trials (t(13)=2.3, p<0.05).

Figure 7. Size feature psychometric functions.

Psychometric functions are shown for the single size (A) and multiple (B) feature conditions across all participants. The x-axis shows the relative size of the 1st and 2nd bars.


For JND in the single feature size condition, there was a significant effect of condition (F(1,13)=5, p<0.05, fixation = 0.194°, saccade = 0.238°) as well as a significant eye position effect (F(1,13)=7.8, p<0.05, leftward = 0.23°, rightward = 0.19°) but no significant interaction effect (p>0.05).

When size had to be recalled in conjunction with other features, we again found an effect of condition for PSE (F(1,13)=6.7, p<0.05). Similar to the single feature condition, the size of the bar was remembered to be smaller during saccade trials (-0.032°) than during fixation trials (0.077°). Although the pattern in saccade trials in the multiple feature size condition was similar to that in the single feature size condition (Fig. 7A), we did not find a significant eye position effect (p>0.05) nor an interaction effect (p>0.05).

For JND in the multiple feature condition, there was no significant effect of condition (p>0.05), but there was a significant eye position effect (F(1,13)=4.8, p<0.05, leftward = 0.36°, rightward = 0.26°) and no interaction effect (p>0.05). For both the single and multiple feature conditions, the size of the 1st bar was remembered to be smaller during saccade trials than during fixation. In addition, in the single feature condition only, the bar was remembered to be smaller during leftward final fixations compared to rightward ones. We also found an increase in the uncertainty of the 1st bar's size, but only during the single feature condition. Finally, there was an increase in uncertainty for leftward compared to rightward eye positions in both single and multiple feature conditions.

To summarize the results so far, we found that saccades changed the PSEs for all three features, location, orientation, and size. In addition, these effects seem to be independent of increasing memory load (1 feature vs. 3 features). These results demonstrate that the bar’s remembered location as well as its spatially invariant features (orientation and size) were influenced across saccades.


We also found that saccades generally increased the memory uncertainty of each of the features of the 1st bar, in both the single and the multiple feature condition. It is possible that remembering information while performing an intervening saccade induces changes in memory in a similar manner as increasing memory load. In other words, the intervening saccade might simply increase memory load because it is an additional task. If so, it could be this increase in memory load, rather than saccade remapping mechanisms, that produces the observed changes in the PSEs.

To test this, we compared the PSEs and the JNDs of the single feature to the multiple feature conditions for fixation trials. If the saccade only increases memory load, then we should see similar patterns of changes when the memory load is increased without saccades as when a saccade is performed.

3.4.4 Feature Memory Precision is Reduced but not Biased by Increasing Memory Load

We used two-way repeated measures ANOVAs with number of features recalled (1 vs. 3, e.g. Fig. 5B vs. Fig. 5D) and eye position as factors, for fixation trials only.

We found that increasing memory load did not significantly change the PSE for any of the three features (all p>0.05), nor did eye position have an effect (p>0.05), nor were there any interaction effects (p>0.05).

For JND, we found only main effects for the number of features recalled for location

(F(1,13)=14.8, p<0.01, single = 0.62°, multiple = 0.89°) and orientation (F(1,13)=54.8, p<0.001, single = 1.5°, multiple = 2.4°) but no effect of eye position (p>0.05), nor any interaction effects

(p>0.05) for either feature. For size, we found no effect of number of features to be remembered

(p>0.05) and no interaction effect (p>0.05) but there was a small but significant eye position effect (F(1,13)=6.7, p<0.05, leftward = 0.27°, rightward = 0.21°) as was also found earlier.

The null effects on the PSEs with increased memory load suggest that the changes seen in the PSEs during saccade trials are due to updating mechanisms rather than memory load.


However, we found increases in the JND both during saccade trials and during increased memory load during fixation trials (at least for location and orientation). These findings suggest that in addition to updating effects, saccades also increase memory load.

3.4.5 Order of Recall Questions

Finally, we also tested whether the order of the questions had any effect on memory when all three features were to be remembered at the same time. We used one-way repeated-measures

ANOVAs separately for each feature with order as a factor (order 1 vs 2 vs 3) collapsed across condition (saccades vs fixation). We found no effect of order on PSE (Fig. 8A-C) for any of the features (p>0.05). For JND (Fig. 8D-F), we found a significant effect of order for location

(F(2,26)=4.8, p<0.05). Post-hoc analysis revealed that memory was significantly worse when the location feature question was asked last as opposed to first (p<0.05), but there were no other significant differences. There was no effect of order on either orientation (p>0.05) or size

(p>0.05). In summary, the order of questions had no effect on the PSE and had only a small effect of increasing the uncertainty of the recall of the bar’s location.



Figure 8. Question order. The PSEs (A-C) and JNDs (D-F) are shown for the multiple feature condition, separated by location (A&D), orientation (B&E) and size (C&F), separated by the order of the question, e.g. in A, 1 signifies that the location question was first, 2 signifies that the location question was second (with either size or orientation being first), etc. The bars depict the mean PSEs and JNDs and the error bars are the s.e.m. across participants. * = p<0.05


3.5 Discussion

Saccades biased the remembered location, orientation, and size of the bar depending on the direction of the saccade. Overall, these saccade-induced biases remained similar even when memory load was increased. We propose that saccadic updating mechanisms influence both spatially variant and invariant remembered features of an object, suggesting a link between object location and features in memory. In addition, saccades increased uncertainty in the perceptual report, possibly due to contributions from saccade-induced memory load and/or extra-retinal noise.

3.5.1 Saccades Induce Biases in the Remembered Location of an Object

Saccades were observed to induce biases in all three remembered features, both spatially variant and invariant. The largest effect of the intervening saccade was an overestimation of the remembered bar’s location away from the direction of the saccade. Similar overestimations of target location have also been observed during tasks requiring actions, such as pointing

(Henriques et al., 1998). Therefore, it is possible that the observed overshooting effects could be a result of inaccurate updating from an overcompensation of the saccade amplitude. However, this hypothesis is unlikely, as many studies have observed similar biases with perceptual reports and have attributed them to compression of visual space towards the fixation point at the time of encoding (Golomb & Kanwisher, 2012b; Sheth & Shimojo, 2001; van der Heijden, van der Geest, de Leeuw, Krikke, & Müsseler, 1999). In agreement with Golomb and Kanwisher (2012b), our results support the idea that remembered locations are updated accurately across saccades, with shifts in PSE being attributed to the encoding process rather than the update.

3.5.2 Non-spatial Features of the Object are also Influenced by the Intervening Saccade

We found that the intervening saccade also influenced both the orientation and the size of the object. The effect on remembered orientation was observed under the higher memory load condition, with the bar being remembered more clockwise for leftward saccades and more


counterclockwise for rightward saccades. In addition, the size of the bar was consistently remembered to be smaller when a saccade intervened before recall.

To confirm that the abovementioned effects were related to updating rather than attributable to increased memory load, we compared single-feature to multiple-feature trials during fixation. We were unable to replicate any of the biases observed during the saccade trials when memory load was increased for fixation trials. Based on this, we conclude that the effects observed were specific to the intervening saccade, implicating updating mechanisms.

These changes in the perceived orientation and size of the bar due to the saccade are remarkable because they imply that updating mechanisms distort the remembered features of the object even when they are spatially invariant. In a recent study on trans-saccadic memory, Prime et al. (2007) observed increased errors in remembering the luminance or orientation of targets as the size of the saccade increased. Our results thus point towards the idea that non-spatial features of an object are influenced by the intervening saccade and therefore must be bound or linked together with their location in some manner (Serences & Yantis, 2006; Treisman, 1996; Schneegans & Bays, 2017; Nissen, 1985; Kahneman et al., 1992), consistent with the hypothesis that trans-saccadic integration also involves the remapping of object features (Cavanagh et al., 2010; Melcher & Colby, 2008; Prime et al., 2006, 2011). It has been suggested that different object features are stored in independent memory stores in VSTM (Bays et al., 2011; Pasternak & Greenlee,

2005; Wheeler & Treisman, 2002), based on observations of mis-binding, where features of an object are remembered to be part of another object (Allen et al., 2006; Robertson, 2003;

Treisman, 1998; Treisman & Schmidt, 1982; Wheeler & Treisman, 2002; Wolfe & Cave, 1999).

Our results do not contradict these findings as we did not test multiple objects; however, they do imply a tight linkage in the memory of the different features of the same object with their location, since all features were biased by the eye movement. This tight linkage to location makes sense as binding is successfully performed the majority of the time, and mis-binding occurs only


under forced experimental conditions, and even then not all the time (Treisman, 1977), or only in the case of brain lesions (Cohen & Rafal, 1991; Humphreys, Cinel, Wolfe, Olson, & Klempen, 2000; Robertson, 2003). Our results are in line with studies proposing either that information from the same location is implicitly bound together, resulting in adaptation aftereffects specific to congruently presented features as in the McCollough effect (McCollough, 1965;

Wolfe & Cave, 1999) or that spatial attention is used to bind features of an object to its location

(Cave & Wolfe, 1990; Reynolds & Desimone, 1999; Theeuwes et al., 2011; Treisman & Gelade,

1980; Treisman, 1998). Even the notion of independent memory stores of individual features does not preclude that they are nevertheless linked through location. After all, the brain somehow needs to piece together feature information computed in different brain areas to know which visual objects have what specific features (see below). In agreement with this view, Schneegans

& Bays (2017) have recently suggested that non-spatial features are bound to the object’s location, yet are independent of one another. This is consistent with our finding that the intervening saccade influences non-spatial features because they are linked to the updated location of the object.

3.5.3 Specific Biases on Spatially-Invariant Features

We observed specific but small biases in the remembered orientation and size after saccades. For orientation, participants remembered the top of the bar to be oriented towards the final fixation position after the saccade. The size of the bar was remembered to be smaller during saccade trials than during fixation trials, as well as for leftward compared to rightward saccades. We confirmed that there were no differences in the other features that could explain the observed differences in each feature, e.g. that the average location of bars for leftward saccades differed from that for rightward saccades. These biases cannot be explained by noise induced by a saccade, as this would not cause directional biases but rather an increase in uncertainty (discussed below). An alternative explanation could be that the observed biases are related to the


remembered location of the object, in that it was remembered to be further away from fixation after a saccade. For example, it has been recently demonstrated that an object is perceived to be smaller the further away it is in the periphery (Baldwin, Burleigh, Pepperell, & Ruta, 2016).

Therefore, the observed bias in size reports could be explained by a mis-localization of the first object further toward the periphery. However, we do not believe this is the case because the overestimation of location was relatively equal in either direction (approx. 2°), whereas the biases for orientation and size differed for leftward vs. rightward saccades (e.g.

-0.12° leftward vs 0.01° rightward for size). Moreover, even if this was the case, it would still imply that object features were linked and updated across saccades, as the updated object location influenced the remembered size. In sum, regardless of the specific bias, the emergence of a bias specific to saccade trials supports the notion that the features of the object are linked and updated together across saccades.

3.5.4 Saccades Increase Memory Load

Apart from the biases induced by the saccade, we also found that saccades increased the JNDs, or uncertainty, in a similar manner as increasing memory load during fixation trials. It has been suggested that planning and executing saccades may increase memory load (Bays & Husain, 2008; Shao et al., 2010). For example, Tas et al. (2016) demonstrated that saccades resulted in reduced memory capacity for task-relevant objects, presumably due to automatic encoding of the task-irrelevant saccade target into memory. However, the increased uncertainty we observed may not necessarily or exclusively be due to increased memory load. If memory representations are stored retinotopically and updated during a saccade, a noisy integration of extra-retinal information may also contribute to uncertainty (Baker, 2003; Golomb & Kanwisher, 2012; Henriques et al., 1998;

Prime et al., 2007). Whether both these factors contribute to the observed increase in uncertainty may depend on whether the mode of storage utilized by VSTM in this context is continuous (shared) or discrete. VSTM has been proposed to comprise either a limited (generally 4) number of


discrete memory slots (Cowan, 2011; Luck & Vogel, 1997; Pashler, 1988), continuous but shared memory resources rather than discrete slots (Alvarez & Cavanagh, 2004; Bays &

Husain, 2008; Wilken & Ma, 2004), or a hybrid of the two (Cowan & Rouder, 2009; Donkin et al.,

2013; Zhang & Luck, 2008). With shared representations, both updating and memory load (Alvarez

& Cavanagh, 2004; Fougnie et al., 2010; Oberauer & Eichenberger, 2013; Wheeler & Treisman,

2002) may explain increased uncertainty, whereas with the slot hypothesis increases in uncertainty should be exclusively due to updating noise since memory is not saturated (Bays & Husain, 2008;

Prime et al., 2007).

Our findings from the fixation trials, however, support a continuous, shared model of memory resources; we found that JNDs increased when remembering three features compared to one feature. Previous studies have shown that increasing the number of remembered features of the same object shows similar limits as increasing the number of objects (Alvarez &

Cavanagh, 2004; Bays et al., 2011; Fougnie et al., 2010; Oberauer & Eichenberger, 2013;

Wheeler & Treisman, 2002). For example, two studies have shown that increasing memory load decreases performance for complex objects (Alvarez & Cavanagh, 2004; Olsson & Poom, 2005).

Along these lines, Bays, Wu and Husain (2011) suggested that like objects, the memory of multiple features within an object also degrades with increasing memory load for visual working memory. In contrast, supporters of the discrete slot hypothesis propose a feature cost-free integrated object account of memory, where remembering multiple features does not add memory cost (Luck & Vogel, 1997). Considering this, the observed effect of saccades on JND is likely due to a combination of both noisy updating and shared memory allocation.

3.5.5 Eye-centred Representation of Remembered Object Location

The overestimation of the target’s location after a saccade has been demonstrated to reflect an eye-centred representation of location (Golomb & Kanwisher, 2012; Henriques et al., 1998; Prime et al., 2007). An eye-centred reference frame for trans-saccadic memory has also

been demonstrated by others (Hayhoe et al., 1991; Prime et al., 2007; Verfaillie, 1997). We asked our participants to perform the task in complete darkness, ensuring that no allocentric references (such as the monitor frame) were available. We thereby biased the task such that updating would likely take place in eye-centred coordinates: complete darkness necessitated that eye position be used to estimate the location of the bar. This manipulation revealed biases in the remembered non-spatial features of the bar and supports the idea that object locations are updated for perception.

3.5.6 Underlying Neurological Mechanisms

The influence of the intervening saccade on both object location and spatially invariant features implies interactions between areas known to be involved in updating and those involved in memory and perception, or alternatively but not exclusively, brain areas that are involved in all of these processes. Updating processes have been demonstrated using single cell recording, imaging, lesion and stimulation studies in multiple brain areas including the parietal cortex, extra-striate and striate cortices (Colby, Duhamel, & Goldberg, 1995; Duhamel, Colby, &

Goldberg, 1992; Khan, Pisella, Rossetti, Vighetto, & Crawford, 2005; Medendorp, Goltz, Vilis,

& Crawford, 2003; Merriam, Genovese, & Colby, 2006; Merriam, Genovese, & Colby, 2003). In terms of trans-saccadic memory, multiple areas including the early visual cortex, parietal cortex and frontal cortex have been shown to be involved in remembering the location of objects across saccades (Malik, Dessing, & Crawford, 2015; Prime et al., 2008, 2010; Tanaka, Dessing, Malik,

Prime, & Crawford, 2014). Recently, the parietal cortex has also been implicated in the updating and memory of shape information across saccades (Subramanian & Colby, 2014). Other brain areas, such as the extra-striate cortex and particularly area V4, have been shown to be sensitive to visual features as well as location (De Beeck & Vogels, 2000), and to receive information from saccade-related areas (Burrows, Zirnsak, Akhlaghpour, Wang, & Moore, 2014; Han, Xian, &

Moore, 2009; Moore & Armstrong, 2003). Interestingly, a recent study showed the involvement


of multiple areas including the right inferior parietal cortex and extra-striate areas (including V4) during a task where participants were required to remember, update and report object orientation

(Dunkley, Baltaretu, & Crawford, 2016). Finally, these same areas also encompass the network of brain regions involved in working memory (Eriksson, Vogel, Lansner, Bergström, & Nyberg,

2015; Gazzaley, Rissman, & D’Esposito, 2004). In summary, the large overlap and interactions between brain areas involved in location and feature updating and those involved in memory support the idea of a linkage between updating and memory processes.

3.6 Conclusions

We tested how well participants remembered spatially variant and invariant features of an object across saccades, to determine how the features of an object are updated. We observed that saccades induced systematic biases and increased uncertainty when remembering the object’s location as well as its orientation and size. We conclude that all features of an object are linked and updated together across saccades.


Chapter 4

General Discussion

4.1 Summary of Results and Interpretation

The goal of this research was to investigate the properties underlying TSM of objects with multiple features. To this aim, we designed an experiment requiring participants to store and maintain object feature properties (either alone or in conjunction) across saccades. With this design, we could distinguish between memory load effects and saccade-related spatial updating effects (Shao et al., 2010). We observed that saccades induced uncertainty in addition to biases in the perceptual report of object features. Location biases could be explained by a foveal bias within the first fixation frame; however, non-spatial feature biases may be specifically attributable to the intervening saccade. These data provide evidence for our first hypothesis in Section 1.5, consistent with a representation of object features that is tightly linked to the object's spatial memory. In addition, through our feature conjunction condition, we gained insight into the properties of object feature storage in memory. We observed that feature conjunctions reduced the precision (increased the JNDs) of feature memory performance across all features, and that saccades induced additional uncertainty. Consistent with Bays et al. (2009), this supports a resource allocation model of VSTM. However, we caution that our results only apply to remembering single objects across saccades; encoding multiple objects prior to a saccade may reveal additional memory dynamics not apparent in this experimental design. Together, our data support a model of TSM in which object feature memory is corrupted by noisy spatial updating across saccades and is further burdened by increasing object complexity (Fig. 9). This is the first study of its kind to investigate feature memory across saccades whilst accounting for uncertainty in memory.


Figure 9: Proposed model of spatial updating. Objects are represented as non-spatial features bound to their spatial location. A saccade induces spatial updating via corollary discharge. Non-spatial feature memory is also influenced by this process resulting in both an increase in uncertainty (σ) and a bias in feature memory.
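A minimal generative sketch may help make the scheme in Figure 9 concrete. The MATLAB lines below are illustrative only: the parameter values and variable names are assumptions for illustration, not estimates from the data.

% Illustrative sketch of the Figure 9 scheme: the remembered feature equals the encoded
% value plus a saccade-direction-dependent bias and updating noise.
encoded    = 0;          % encoded feature value (e.g., orientation relative to vertical, deg)
sigmaFix   = 1.5;        % memory noise (deg) without an intervening saccade (assumed)
sigmaSac   = 2.2;        % memory noise (deg) with an intervening saccade (assumed)
biasPerDeg = 0.03;       % feature bias per degree of signed saccade amplitude (assumed)
saccadeAmp = -18;        % e.g., an 18 deg leftward saccade; 0 for fixation trials
sigma      = sigmaFix + (saccadeAmp ~= 0) * (sigmaSac - sigmaFix);
remembered = encoded + biasPerDeg * saccadeAmp + sigma * randn;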

4.2 Implications for Trans-saccadic Perception

Numerous studies have shown that feature perception upon foveation is a result of integrating pre-saccadic peripheral information and post-saccadic foveal information (Herwig,

2015; Oostwoud Wijdenes, Marshall, & Bays, 2015; Pollatsek et al., 1984; Schotter et al., 2012;

Wittenberg, Bremmer, & Wachtler, 2008). In addition, Demeyer et al. (2010) showed that with independently sampled pre-saccadic and post-saccadic orientations, the post-saccadic percept distribution was unimodal, with a mean somewhere in between the true orientation distributions.

Thus, these data argue that pre-saccadic percepts are used to facilitate processing across saccades. Our study adds an important caveat to this idea: the pre-saccadic percept after a saccade is not equivalent to what was encoded at the time of peripheral viewing. In addition to being noisier (less precise), orientation memory was rotated depending on saccade direction, and people consistently underestimated the size of the remembered object. With integration being dependent on TSM, this suggests that trans-saccadic feature integration is additionally burdened by the effect of saccades on feature memory (Prime et al., 2011). The effect of memory reliability has

already been characterized using a Bayesian framework (Ganmor, Landy, & Simoncelli, 2015; Wolf & Schütz, 2015). The findings in these studies suggest that the post-saccadic percept reported by humans was close to a Bayes optimal integration of pre-saccadic (peripheral) and post-saccadic (foveal) information. Wolf & Schütz (2015) observed that although human JNDs matched ideal observer JNDs, the peripheral weight computed from human data was consistently lower than that of an ideal observer. This was interpreted to indicate that humans deviate from optimality via a foveal bias. Integrating our findings with this study raises some key predictions (a sketch of the underlying reliability-weighted integration follows the list):

1. Humans may be closer to the ideal observer than implied by Ganmor et al. (2015)

and Wolf & Schütz (2015), since saccades induce additional uncertainty in

memory. This may explain why humans were found to weigh pre-saccadic peripheral

information less than what the ideal observer model predicted (Wolf & Schütz,

2015).

2. We observed differential biases and uncertainties induced by saccades on feature

memory, suggesting independent representations of feature information. If trans-

saccadic integration occurs independently for multiple features, we should expect

different integration weights for each feature; this would provide strong evidence

against an integrated object representation in memory.

3. Finally, Prime et al. (2007) observed that the proportion of error trials in a change

detection task increased with saccade amplitude. Our results indicate that saccades

increase JND; thus, we should expect peripheral weighting to decrease as a

function of increasing saccade amplitude. We note that this is difficult to test: when

the saccade target matches the remembered object, the effect of saccade amplitude is

confounded by larger retinal eccentricity during peripheral viewing. Thus, any

investigation examining the effect of saccade amplitude should carefully consider

retinal eccentricity in the design of their experiment.

Testing these predictions may provide additional insight into how the cost of spatial updating on TSM may directly influence trans-saccadic perception.
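For Gaussian noise, the reliability-weighted (Bayes optimal) integration referred to in these predictions reduces to inverse-variance weighting, as in the following minimal MATLAB sketch; the pre- and post-saccadic noise values are illustrative assumptions.

% Sketch: reliability-weighted integration of a pre-saccadic (peripheral) and a
% post-saccadic (foveal) estimate under Gaussian noise. Values are illustrative.
xPre  = 4.0;  sigmaPre  = 2.0;     % peripheral estimate and its noise (deg)
xPost = 2.5;  sigmaPost = 1.0;     % foveal estimate and its noise (deg)
wPre  = (1/sigmaPre^2) / (1/sigmaPre^2 + 1/sigmaPost^2);   % peripheral weight
xHat  = wPre * xPre + (1 - wPre) * xPost;                  % integrated percept
sigmaHat = sqrt(1 / (1/sigmaPre^2 + 1/sigmaPost^2));       % integrated (reduced) noise
% Any extra updating noise added to the peripheral memory lowers wPre, shifting the
% percept toward the foveal estimate (the pattern referred to in prediction 1 above).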

4.3 Future Directions

The insights gained in this study provide opportunities for several future investigations of VSTM and TSM, as outlined in the following sections.

4.3.1 Quantifying the Contributions of Saccades and Memory Load to Uncertainty

First, our finding that saccades induce uncertainty (increased JND) is consistent with automatic allocation of memory toward a saccade target (Bays & Husain, 2008; Shao et al., 2010). As mentioned in Section 3.5.4, the increase in JND under a resource allocation model may be a result of both extra-retinal signal noise and memory load. However, with the current design, we could not delineate the contributions of these sources to the observed increase in JND from an intervening saccade. We propose varying saccade amplitude as an additional experimental parameter to quantify the contribution of corollary discharge to uncertainty. Bansal et al.

(2016) performed a similar experiment to that of Cavanaugh et al. (2016) with human subjects and varying saccade amplitudes. Consistent with our findings and with Golomb & Kanwisher (2012), participants reported the target item as being closer to the initial fixation point. Critically, precision decreased with increasing saccade amplitude even when the saccade target corresponded to the stimulus to be remembered. Congruency between the saccade target and the remembered spatial location rules out memory load effects; thus, any increase in JND should be solely attributable to extra-retinal signals. This is consistent with a signal-dependent noise property of spatial updating (Bansal et al., 2015; Jayet Bray et al., 2016; Niemeier et al., 2003;

Prime et al., 2007). Consequently, if features are updated with extra-retinal signals, this saccade amplitude effect on JND should hold for features as well. Conversely, if feature JNDs are independent of saccade amplitude, this would argue against a common mechanism for location and feature updating (Cavanagh et al., 2010). Could the saccade-amplitude-related effects on JND in Bansal et al. (2016)

be related to targets requiring larger saccades being further in the periphery? This is likely, as retinal eccentricity is closely related to spatial and non-spatial uncertainty (Hess & Field, 1993; Michel & Geisler, 2011). To better isolate the effect of spatial updating, models of uncertainty arising from eccentricity effects should be considered. Nonetheless, we note that in a feature updating task the actual spatial memory of the target may be irrelevant, as long as the eccentricity of the remembered object is constant across varying saccade amplitudes.
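One way to formalize this proposed manipulation is to treat updating noise as signal-dependent, growing with saccade amplitude, and to ask how the feature JND should then scale. The following MATLAB lines are an illustrative sketch of that prediction under assumed parameter values, not a fitted model.

% Illustrative sketch: predicted JND as a function of saccade amplitude if updating
% noise is signal-dependent (proportional to amplitude); parameters are assumed.
sigmaEncode  = 1.5;                   % baseline memory noise (deg), amplitude-independent
k            = 0.05;                  % updating noise per degree of saccade amplitude
amplitudes   = 0:2:20;                % saccade amplitudes to test (deg)
predictedJND = norminv(0.75) * sqrt(sigmaEncode^2 + (k * amplitudes).^2);
% A flat JND across amplitudes would instead argue against common, amplitude-dependent
% updating of location and non-spatial features.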

4.3.2 Cross-Feature Binding Across Saccades

Second, multiple studies suggest that feature errors are independent of each other in VSTM (Schneegans & Bays, 2017; Bays & Husain, 2011; Fougnie & Alvarez, 2011). However, these studies required participants to report information while remaining in the same retinotopic coordinate space (no saccades). Performing memory tasks within the same coordinate space may allow for leveraging of recurrent dynamics in early visual areas (Wang, 2001). Harrison & Tong (2009) applied a linear classifier to functional data of areas V1, V2, V3 and V3A/V4 during a working memory experiment involving orientation memory. During the retention interval, classification performance using functional data of all areas exceeded 80%, indicating that residual activity after stimulus presentation contains information about the remembered stimulus. In addition, only features that are relevant to successful recall are encoded in V1 during the retention period, possibly implicating attentional selection for encoding into VSTM (Serences, Ester,

Vogel, & Awh, 2009). Representation of features in lower-level visual areas is thought to be separately coded (Dubner & Zeki, 1971; Friedman-Hill, Robertson, & Treisman, 1995;

Livingstone & Hubel, 1988). Together, these data support theories of re-entrant processing contributing to recall during visual memory tasks, possibly conferring independence of feature memory to VSTM (Di Lollo, 2012). Whether recurrent activation in early sensory areas is maintained across saccades has yet to be explored. If this process does continue across saccades it would suggest an extremely distributed organization of VSTM within the visuo-motor chain. An


analysis of feature dependencies within our experimental design may extend previous findings or indicate a special role for saccades in integrating object features. We note that this type of analysis requires many trials of pairwise combinations of size and orientation features in our experiment. Our feature sets during the task were generated to maximize the sampling of data from the centre of the psychometric curve. Thus, our current data is insufficient to statistically verify whether dependencies exist for memory across feature dimensions.

4.4 Conclusion

In summary, our data extend the current understanding of how saccades influence perception by implicating spatial updating of multiple features independently in memory. In addition, we uncovered biases in feature perception induced by saccades that had not been investigated prior to this study. Finally, the results of our study raise some key predictions for Bayesian models of trans-saccadic perception, in addition to predictions about the specific mechanism of feature updating across saccades. Testing these predictions will improve current models of TSM and aid in the understanding of neurophysiological phenomena associated with spatial updating.


References

Allen, R. J., Baddeley, A. D., & Hitch, G. J. (2006). Is the binding of visual features in

working memory resource-demanding? Journal of Experimental Psychology:

General, 135(2), 298–313.

Allen, R. J., Hitch, G. J., & Baddeley, A. D. (2009). Cross-modal binding and working

memory. Visual Cognition, 17(1–2), 83–102.

Alvarez, G. A., & Cavanagh, P. (2004). The capacity of visual short-term memory is set

both by visual information load and by number of objects. Psychological Science,

15(2), 106–111.

Awh, E., Jonides, J., Smith, E. E., Schumacher, E. H., Koeppe, R. A., & Katz, S. (1996).

Dissociation of Storage and Rehearsal in Verbal Working Memory: Evidence From

Positron Emission Tomography. Psychological Science, 7(1), 25–31.

Awh, E., Vogel, E. K., & Oh, S.-H. (2006). Interactions between attention and working

memory. Neuroscience, 139(1), 201–208.

Baddeley, A. D., & Hitch, G. J. (1974). Working Memory (pp. 47–89).

Baddeley, A. D., & Logie, R. H. (1999). Working Memory: The Multiple-Component

Model. In A. Miyake & P. Shah (Eds.), Models of Working Memory (pp. 28–61).

Cambridge: Cambridge University Press.

Baker, J. T. (2003). Spatial memory following shifts of gaze. I. Saccades to memorized

world-fixed and gaze-fixed targets. Journal of Neurophysiology, 89(5), 2564–2576.


Baldwin, J., Burleigh, A., Pepperell, R., & Ruta, N. (2016). The perceived size and shape

of objects in peripheral vision. I-Perception, 7(4), 1–23.

Bansal, S., Jayet Bray, L. C., Peterson, M. S., & Joiner, W. M. (2015). The effect of

saccade metrics on the corollary discharge contribution to perceived eye location. Journal of Neurophysiology, 113(9).

Bays, P. M., Catalao, R. F. G., & Husain, M. (2009). The precision of visual working

memory is set by allocation of a shared resource. Journal of Vision, 9(10), 7.1-11.

Bays, P. M., & Husain, M. (2008). Dynamic shifts of limited working memory resources

in human vision. Science, 321(August), 2006–2009.

Bays, P. M., Wu, E. Y., & Husain, M. (2011). Storage and binding of object features in

visual working memory. Neuropsychologia, 49(6), 1622–1631.

Brady, T. F., & Tenenbaum, J. B. (2013). A probabilistic model of visual working

memory: Incorporating higher order regularities into working memory capacity

estimates. Psychological Review, 120(1), 85–109.

Bridgeman, B., & Mayer, M. (1983). Failure to integrate visual information from

successive fixations. Bulletin of the Psychonomic Society, 21(4), 285–286.

Bridgeman, B., & Stark, L. (1991). Ocular proprioception and efference copy in

registering visual direction. Vision Research, 31(11), 1903–13.

Bridgeman, B., Van der Heijden, A. H. C., & Velichkovsky, B. M. (1994). A theory of

visual stability across saccadic eye movements. Behavioral and Brain Sciences,

17(2), 247.

Broadbent, D. E., & Broadbent, M. H. P. (1987). From detection to identification:

Response to multiple targets in rapid serial visual presentation. Perception &

Psychophysics, 42(2), 105–113.

Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97(4), 523–47.

Burrows, B. E., Zirnsak, M., Akhlaghpour, H., Wang, M., & Moore, T. (2014). Global

selection of saccadic target features by neurons in area V4. Journal of Neuroscience,

34(19), 6700–6706.

Cavanagh, P., Hunt, A. R., Afraz, A., & Rolfs, M. (2010). Visual stability based on

remapping of attention pointers. Trends in Cognitive Sciences, 14(4), 147–53.

Cavanaugh, J., Berman, R. A., Joiner, W. M., & Wurtz, R. H. (2016). Saccadic Corollary

Discharge Underlies Stable Visual Perception. The Journal of Neuroscience, 36(1),

31–42.

Cave, K. R., & Wolfe, J. M. (1990). Modeling the role of parallel processing in visual

search. Cognitive Psychology, 22(2), 225–271.

Chelazzi, L., Miller, E. K., Duncan, J., & Desimone, R. (1993). A neural basis for visual

search in inferior temporal cortex. Nature, 363(6427), 345–347.

Chun, M. M., & Potter, M. C. (1995). A two-stage model for multiple target detection in

rapid serial visual presentation. Journal of Experimental Psychology. Human

Perception and Performance, 21(1), 109–27.

Cocchini, G., Logie, R. H., Sala, S. Della, MacPherson, S. E., & Baddeley, A. D. (2002).

Concurrent performance of two memory tasks: Evidence for domain-specific

working memory systems. Memory & Cognition, 30(7), 1086–1095.

Cohen, A., & Rafal, R. D. (1991). Attention and feature integration: Illusory

conjunctions in a patient with a parietal lobe lesion. Psychological Science, 2(2),

106–110.

Colby, C. L., Duhamel, J. R., & Goldberg, M. E. (1995). Oculocentric spatial

representation in parietal cortex. Cerebral Cortex, 5(5), 470–81.

Cowan, N. (2001). The magical number 4 in short-term memory: a reconsideration of

mental storage capacity. The Behavioral and Brain Sciences, 24(1), 87–114.

Cowan, N. (2011). The focus of attention as observed in visual working memory tasks:

Making sense of competing claims. Neuropsychologia, 49(6), 1401–1406.

Cowan, N., & Rouder, J. N. (2009). Comment on “Dynamic shifts of limited working memory resources in human vision.” Science,

323(5916), 877.

Crespi, S., Biagi, L., d’Avossa, G., Burr, D. C., Tosetti, M., & Morrone, M. C. (2011).

Spatiotopic Coding of BOLD Signal in Human Visual Cortex Depends on Spatial

Attention. PLoS ONE, 6(7), e21661.

d’Avossa, G., Tosetti, M., Crespi, S., Biagi, L., Burr, D. C., & Morrone, M. C. (2007).

Spatiotopic selectivity of BOLD responses to visual motion in human area MT.

Nature Neuroscience, 10(2), 249–255.


Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience : computational and

mathematical modeling of neural systems. Massachusetts Institute of Technology

Press.

De Beeck, H. O., & Vogels, R. (2000). Spatial sensitivity of macaque inferior temporal

neurons. Journal of Comparative Neurology, 426(4), 505–518.

Delvenne, J., & Bruyer, R. (2004). Does visual short‐term memory store bound features?

Visual Cognition, 11(1), 1–27.

Demeyer, M., De Graef, P., Wagemans, J., & Verfaillie, K. (2010). Parametric

integration of visual form across saccades. Vision Research, 50(13), 1225–1234.

Deubel, H., Schneider, W. X., & Bridgeman, B. (2002). Transsaccadic memory of

position and form. In Progress in brain research (Vol. 140, pp. 165–180).

Di Lollo, V. (2012). The feature-binding problem is an ill-posed problem. Trends in

Cognitive Sciences, 16(6), 317–321.

Donkin, C., Nosofsky, R. M., Gold, J. M., & Shiffrin, R. M. (2013). Discrete-slots

models of visual working-memory response times. Psychological Review, 120(4),

873–902.

Dubner, R., & Zeki, S. M. (1971). Response properties and receptive fields of cells in an

anatomically defined region of the superior temporal sulcus in the monkey. Brain

Research, 35(2), 528–32.

Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the

representation of visual space in parietal cortex by intended eye movements.

Science, 255(5040), 90–2.

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity.

Psychological Review, 96(3), 433–58.

Duncan, J., Ward, R., & Shapiro, K. L. (1994). Direct measurement of attentional dwell

time in human vision. Nature, 369(6478), 313–315.

Dunkley, B. T., Baltaretu, B., & Crawford, J. D. (2016). Trans-saccadic interactions in

human parietal and occipital cortex during the retention and comparison of object

orientation. Cortex, 82, 263–276.

Dunn, C. A., Hall, N. J., & Colby, C. L. (2010). Spatial Updating in Monkey Superior

Colliculus in the Absence of the Forebrain Commissures: Dissociation Between

Superficial and Intermediate Layers. Journal of Neurophysiology, 104(3), 1267–

1285.

Eriksson, J., Vogel, E. K., Lansner, A., Bergström, F., & Nyberg, L. (2015).

Neurocognitive architecture of working memory. Neuron, 88(1), 33–46.

Fougnie, D., & Alvarez, G. A. (2011). Object features fail independently in visual

working memory: evidence for a probabilistic feature-store model. Journal of

Vision, 11(12).

Fougnie, D., Asplund, C. L., & Marois, R. (2010). What are the units of storage in visual

working memory? Journal of Vision, 10(12), 27–27.


Fougnie, D., Cormiea, S. M., & Alvarez, G. A. (2013). Object-based benefits without

object-based representations. Journal of Experimental Psychology: General, 142(3),

621–626.

Fougnie, D., & Marois, R. (2011). What limits working memory capacity? Evidence for

modality-specific sources to the simultaneous storage of visual and auditory arrays.

Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6),

1329–1341.

Friedman-Hill, S. R., Robertson, L. C., & Treisman, A. M. (1995). Parietal contributions

to visual feature binding: evidence from a patient with bilateral lesions. Science,

269(5225), 853–5.

Frund, I., Haenel, N. V., & Wichmann, F. A. (2011). Inference for psychometric

functions in the presence of nonstationary behavior. Journal of Vision, 11(6), 16–16.

Ganmor, E., Landy, M. S., & Simoncelli, E. P. (2015). Near-optimal integration of

orientation information across saccades. Journal of Vision, 15(16), 8.

Gardner, J. L., Merriam, E. P., Movshon, J. A., & Heeger, D. J. (2008). Maps of visual

space in human occipital cortex are retinotopic, not spatiotopic. The Journal of

Neuroscience, 28(15), 3988–99.

Gaymard, B., Rivaud, S., & Pierrot-Deseilligny, C. (1994). Impairment of extraretinal

eye position signals after central thalamic lesions in humans. Experimental Brain

Research, 102(1), 1–9.


Gazzaley, A., Rissman, J., & D’Esposito, M. (2004). Functional connectivity during

working memory maintenance. Cognitive, Affective & Behavioral Neuroscience,

4(4), 580–99.

Germeys, F., De Graef, P., & Verfaillie, K. (2002). Transsaccadic perception of saccade

target and flanker objects. Journal of Experimental Psychology: Human Perception

and Performance, 28(4), 868–883.

Goldberg, M. E., Gottlieb, J. P., & Kusunoki, M. (1998). The representation of visual

salience in monkey parietal cortex. Nature, 391(6666), 481–484.

Golomb, J. D., Chun, M. M., & Mazer, J. A. (2008). The native coordinate system of

spatial attention is retinotopic. The Journal of Neuroscience, 28(42), 10654–62.

Golomb, J. D., & Kanwisher, N. (2012a). Higher level visual cortex represents

retinotopic, not spatiotopic, object location. Cerebral Cortex, 22(12), 2794–2810.

Golomb, J. D., & Kanwisher, N. (2012b). Retinotopic memory is more precise than

spatiotopic memory. Proceedings of the National Academy of Sciences, 109(5),

1796–1801.

Golomb, J. D., Nguyen-Phuc, A. Y., Mazer, J. A., McCarthy, G., & Chun, M. M. (2010).

Attentional facilitation throughout human visual cortex lingers in retinotopic

coordinates after eye movements. The Journal of Neuroscience, 30(31), 10493–506.

Grimes, J. A. (1996). On the Failure to Detect Changes in Scene across Saccades. In E.

Atkins (Ed.), Perception (p. 339). Oxford University Press.


Han, X., Xian, S. X., & Moore, T. (2009). Dynamic sensitivity of area V4 neurons during

saccade preparation. Proceedings of the National Academy of Sciences, 106(31),

13046–13051.

Hardman, K. O., & Cowan, N. (2015). Remembering complex objects in visual working

memory: Do capacity limits restrict objects or features? Journal of Experimental

Psychology: Learning, Memory, and Cognition, 41(2), 325–347.

Harrison, S. A., & Tong, F. (2009). Decoding reveals the contents of visual working

memory in early visual areas. Nature, 458(7238), 632–5.

Hayhoe, M., Lachter, J., & Feldman, J. (1991). Integration of form across saccadic eye

movements. Perception, 20(3), 393–402.

Helmholtz, H. (1867). Handbuch der physiologischen Optik. Leipzig: Leopold Voss.

Henderson, J. M. (1994). Two representational systems in dynamic visual identification.

Journal of Experimental Psychology: General, 123(4), 410–426.

Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual

Review of Psychology, 50, 243–71.

Henderson, J. M., & Hollingworth, A. (2003). Eye movements and visual memory:

Detecting changes to saccade targets in scenes. Perception & Psychophysics, 65(1),

58–71.

Henderson, J. M., Pollatsek, A., & Rayner, K. (1987). Effects of foveal and

extrafoveal preview on object identification. Journal of Experimental Psychology:

Human Perception and Performance, 13(3), 449–63.

Henderson, J. M., & Siefert, A. B. C. (1999). The influence of enantiomorphic

transformation on transsaccadic object integration. Journal of Experimental

Psychology: Human Perception and Performance, 25(1), 243–255.

Henderson, J. M., & Siefert, A. B. C. (2001). Types and tokens in transsaccadic object

identification: effects of spatial position and left-right orientation. Psychonomic

Bulletin & Review, 8(4), 753–60.

Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-

centered remapping of remembered visual space in an open-loop pointing task. The

Journal of Neuroscience, 18(4), 1583–1594.

Herwig, A. (2015). Transsaccadic integration and perceptual continuity. Journal of

Vision, 15(16), 7.

Hess, R. F., & Field, D. (1993). Is the increased spatial uncertainty in the normal

periphery due to spatial undersampling or uncalibrated disarray? Vision Research,

33(18), 2663–2670.

Higgins, E., & Rayner, K. (2015). Transsaccadic processing: stability, integration, and

the potential role of remapping. Attention, Perception, & Psychophysics, 77(1), 3–

27.

Hollingworth, A., & Henderson, J. M. (1998, December). Does consistent scene context

facilitate object perception? Journal of Experimental Psychology: General.

Humphreys, G. W., Cinel, C., Wolfe, J. M., Olson, A., & Klempen, N. (2000).

Fractionating the binding process: neuropsychological evidence distinguishing

binding of form from binding of surface features. Vision Research, 40(10–12),

1569–96.

Ilg, U. J., Bridgeman, B., & Hoffmann, K.-P. (1989). Influence of mechanical

disturbance on oculomotor behavior. Vision Research, 29(5), 545–551.

Irwin, D. E. (1991). Information integration across saccadic eye movements. Cognitive

Psychology, 23(3).

Irwin, D. E. (1992). Memory for position and identity across eye movements. Journal of

Experimental Psychology: Learning, Memory, and Cognition.

Irwin, D. E. (1998). Eye Movements, Attention and Trans-saccadic Memory. Visual

Cognition, 5(1–2), 127–155.

Irwin, D. E., & Andrews, R. V. (1996). Integration and accumulation of information

across saccadic eye movements. In Attention and Performance XVI: Information

Integration in Perception and Communication (pp. 125–154). The MIT Press.

Irwin, D. E., Yantis, S., & Jonides, J. (1983). Evidence against visual integration across

saccadic eye movements. Perception & Psychophysics, 34(1), 49–57.

Jayet Bray, L. C., Bansal, S., & Joiner, W. M. (2016). Quantifying the spatial extent of

the corollary discharge benefit to transsaccadic visual perception. Journal of

Neurophysiology, 115(3).

Johnson, M. K., Verfaellie, M., & Dunlosky, J. (2008). Introduction to the special section

on integrative approaches to source memory. Journal of Experimental Psychology:

Learning, Memory, and Cognition, 34(4), 727–729.

Jolicœur, P., Brisson, B., & Robitaille, N. (2008). Dissociation of the N2pc and sustained

posterior contralateral negativity in a choice response task. Brain Research, 1215,

160–172.

Jonides, J., Irwin, D. E., & Yantis, S. (1982). Integrating visual information from

successive fixations. Science, 215(4529).

Jonides, J., Smith, E. E., Koeppe, R. A., Awh, E., Minoshima, S., & Mintun, M. A.

(1993). Spatial working memory in humans as revealed by PET. Nature, 363(6430),

623–625.

Khan, A. Z., Pisella, L., Rossetti, Y., Vighetto, A., & Crawford, J. D. (2005). Impairment

of gaze-centered updating of reach targets in bilateral parietal-occipital damaged

patients. Cerebral Cortex, 15(10), 1547–1560.

Klaver, P., Talsma, D., Wijers, A. A., Heinze, H. J., & Mulder, G. (1999). An event-

related brain potential correlate of visual short-term memory. Neuroreport, 10(10),

2001–5.

Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in the

programming of saccades. Vision Research, 35(13), 1897–1916.

Livingstone, M. S., & Hubel, D. H. (1988). Segregation of form, color, movement, and

depth: anatomy, physiology, and perception. Science, 240(4853), 740–9.

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features

and conjunctions. Nature, 390(6657), 279–281.

Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word meanings can be accessed but

not reported during the attentional blink. Nature, 383(6601), 616–618.

Luria, R., Balaban, H., Awh, E., & Vogel, E. K. (2016). The contralateral delay activity

as a neural measure of visual working memory. Neuroscience & Biobehavioral

Reviews, 62, 100–8.

Luria, R., Sessa, P., Gotler, A., Jolicœur, P., & Dell’Acqua, R. (2010). Visual Short-term

Memory Capacity for Simple and Complex Objects. Journal of Cognitive

Neuroscience, 22(3), 496–512.

Luria, R., & Vogel, E. K. (2011). Shape and color conjunction stimuli are represented as

bound objects in visual working memory. Neuropsychologia, 49(6).

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory.

Nature Neuroscience, 17(3), 347–356.

Mackay, D. G. (1973). Aspects of the theory of comprehension, memory and attention.

Quarterly Journal of Experimental Psychology, 25(1), 22–40.

Maki, W. S., Frigen, K., & Paulson, K. (1997). Associative priming by targets and

distractors during rapid serial visual presentation: does word meaning survive the

attentional blink? Journal of Experimental Psychology: Human Perception and

Performance, 23(4), 1014–34.

Malik, P., Dessing, J. C., & Crawford, J. D. (2015). Role of early visual cortex in trans-

saccadic memory of object features. Journal of Vision, 15(11), 1–17.

Marois, R., Yi, D.-J., & Chun, M. M. (2004). The Neural Fate of Consciously Perceived

and Missed Events in the Attentional Blink. Neuron, 41(3), 465–472.

Marshall, L., & Bays, P. M. (2013). Obligatory encoding of task-irrelevant features

depletes working memory resources. Journal of Vision, 13(2), 21–21.

Mathôt, S., & Theeuwes, J. (2010). Gradual Remapping Results in Early Retinotopic and

Late Spatiotopic Inhibition of Return. Psychological Science, 21(12), 1793–1798.

Matthey, L., Bays, P. M., & Dayan, P. (2015). A probabilistic palimpsest model of visual

short-term memory. PLoS Computational Biology, 11(1), e1004003.

McCollough, A. W., Machizawa, M. G., & Vogel, E. K. (2007). Electrophysiological

measures of maintaining representations in visual working memory. Cortex; a

Journal Devoted to the Study of the Nervous System and Behavior, 43(1), 77–94.

McCollough, C. (1965). Color adaptation of edge-detectors in the human visual system.

Science, 149(3688), 1115–1116.

McConkie, G. W., & Currie, C. B. (1996). Visual stability across saccades while viewing

complex pictures. Journal of Experimental Psychology. Human Perception and

Performance, 22(3), 563–81.

McConkie, G. W., & Zola, D. (1979). Is visual information integrated across successive

fixations in reading? Perception & Psychophysics, 25(3), 221–224.

Medendorp, W. P. (2011). Spatial constancy mechanisms in motor control. Philosophical

Transactions of the Royal Society B: Biological Sciences, 366(1564), 476–91.

Medendorp, W. P., Goltz, H. C., Vilis, T., & Crawford, J. D. (2003). Gaze-centered

updating of visual space in human parietal cortex. The Journal of Neuroscience,

23(15), 6209–14.

Melcher, D., & Colby, C. L. (2008). Trans-saccadic perception. Trends in Cognitive

Sciences, 12(12), 466–473.

Merriam, E. P., Genovese, C. R., & Colby, C. L. (2003). Spatial updating in human

parietal cortex. Neuron, 39(2), 361–73.

Merriam, E. P., Genovese, C. R., & Colby, C. L. (2006). Remapping in human visual

cortex. Journal of Neurophysiology, 97(2), 1738–1755.

Michel, M., & Geisler, W. S. (2011). Intrinsic position uncertainty explains detection and

localization performance in peripheral vision. Journal of Vision, 11(1), 18.

Miller, G. A. (1956). The magical number seven plus or minus two: some limits on our

capacity for processing information. Psychological Review, 63(2), 81–97.

Moore, T., & Armstrong, K. M. (2003). Selective gating of visual signals by

microstimulation of frontal cortex. Nature, 421(6921), 370–373.

Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the

extrastriate cortex. Science, 229(4715), 782–4.

Nakamura, K., & Colby, C. L. (2002). Updating of the visual representation in monkey

striate and extrastriate cortex during saccades. Proceedings of the National Academy

of Sciences, 99(6), 4026–4031.

Niemeier, M., Crawford, J. D., & Tweed, D. B. (2003). Optimal transsaccadic integration

explains distorted spatial perception. Nature, 422(6927), 76–80.

O’Regan, J. K. (1992). Solving the “real” mysteries of visual perception: the

world as an outside memory. Canadian Journal of Psychology, 46(3), 461–88.

O’Regan, J. K., & Lévy-Schoen, A. (1983). Integrating visual information from

successive fixations: does trans-saccadic fusion exist? Vision Research, 23(8), 765–

8.

O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual

consciousness. The Behavioral and Brain Sciences, 24(5), 939–1031.

Oberauer, K., & Eichenberger, S. (2013). Visual working memory declines when more

features must be remembered for each object. Memory & Cognition, 41(8), 1212–

1227.

Oberauer, K., Lewandowsky, S., Farrell, S., Jarrold, C., & Greaves, M. (2012). Modeling

working memory: An interference model of complex span. Psychonomic Bulletin &

Review, 19(5), 779–819.

Olson, I. R., & Jiang, Y. (2002). Is visual short-term memory object based? Rejection of

the "strong-object" hypothesis. Perception & Psychophysics, 64(7),

1055–67.

Olsson, H., & Poom, L. (2005). Visual memory needs categories. Proceedings of the

National Academy of Sciences, 102(24), 8776–8780.

Oostwoud Wijdenes, L., Marshall, L., & Bays, P. M. (2015). Evidence for optimal

integration of visual feature representations across saccades. Journal of

Neuroscience, 35(28), 10146–10153.

Palmer, J. (1990). Attentional limits on the perception and memory of visual information.

Journal of Experimental Psychology: Human Perception and Performance, 16(2),

332–350.

Palmer, J., & Ames, C. T. (1992). Measuring the effect of multiple eye fixations on

memory for visual attributes. Perception & Psychophysics, 52(3), 295–306.

Pashler, H. (1988). Familiarity and visual change detection. Perception & Psychophysics,

44(4), 369–378.

Pasternak, T., & Greenlee, M. W. (2005). Working memory in primate sensory systems.

Nature Reviews Neuroscience, 6(2), 97–107.

Pertzov, Y., Dong, M. Y., Peich, M. C., & Husain, M. (2012). Forgetting what was

where: The fragility of object-location binding. PLoS ONE, 7(10), e48214.

Phillips, W. A. (1974). On the distinction between sensory storage and short-term visual

memory. Perception & Psychophysics, 16(2), 283–290.

Pollatsek, A., Rayner, K., & Collins, W. E. (1984). Integrating pictorial information

across eye movements. Journal of Experimental Psychology: General, 113(3), 426–

42.

Polyak, S. L. (1942). The Retina. The Anatomical Record, 83(4), 597–601.

Prime, S. L., Niemeier, M., & Crawford, J. D. (2006). Transsaccadic integration of visual

features in a line intersection task. Experimental Brain Research, 169(4), 532–548.

Prime, S. L., Tsotsos, L., Keith, G. P., & Crawford, J. D. (2007). Visual memory capacity

in transsaccadic integration. Experimental Brain Research, 180(4), 609–628.

Prime, S. L., Vesia, M., & Crawford, J. D. (2008). Transcranial Magnetic Stimulation

over posterior parietal cortex disrupts transsaccadic memory of multiple objects.

Journal of Neuroscience, 28(27), 6938–6949.

Prime, S. L., Vesia, M., & Crawford, J. D. (2010). TMS over human frontal eye fields

disrupts trans-saccadic memory of multiple objects. Cerebral Cortex, 20(4), 759–

772.

Prime, S. L., Vesia, M., & Crawford, J. D. (2011). Cortical mechanisms for trans-

saccadic memory and integration of multiple object features. Philosophical

Transactions of the Royal Society B: Biological Sciences, 366(1564), 540–53.

Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual

processing in an RSVP task: an attentional blink? Journal of Experimental

Psychology. Human Perception and Performance, 18(3), 849–60.

Rayner, K., & McConkie, G. W. (1976). What guides a reader’s eye movements? Vision

Research, 16(8), 829–37.

Rayner, K., & Pollatsek, A. (1983). Is visual information integrated across saccades?

Perception & Psychophysics, 34(1), 39–48.

Reeves, A., & Sperling, G. (1986). Attention gating in short-term visual memory.

Psychological Review, 93(2), 180–206.

Rensink, R. A., O’Regan, J. K., & Clark, J. J. (1997). To See or not to See: The Need for

Attention to Perceive Changes in Scenes. Psychological Science, 8(5), 368–373.

Reynolds, J. H., & Desimone, R. (1999). The role of neural mechanisms of attention in

solving the binding problem. Neuron, 24(1), 19–29, 111–25.

Robertson, L. C. (2003). Binding, spatial attention and perceptual awareness. Nature

Reviews Neuroscience, 4(2), 93–102.

Saaty, T. L., & Ozdemir, M. S. (2003). Why the magic number seven plus or minus two.

Mathematical and Computer Modelling, 38(3–4), 233–244.

Schmidt, B. K., Vogel, E. K., Woodman, G. F., & Luck, S. J. (2002). Voluntary and

automatic attentional control of visual working memory. Perception & Psychophysics, 64(5).

Schneegans, S., & Bays, P. M. (2016). No fixed item limit in visuospatial working

memory. Cortex, 83, 181–193.

Schneegans, S., & Bays, P. M. (2017). Neural architecture for feature binding in visual

working memory. The Journal of Neuroscience, 37(14), 3913–3925.

Schotter, E. R., Angele, B., & Rayner, K. (2012). Parafoveal processing in reading.

Attention, Perception, & Psychophysics, 74(1), 5–35.

Serences, J. T., Ester, E. F., Vogel, E. K., & Awh, E. (2009). Stimulus-Specific Delay

Activity in Human Primary Visual Cortex. Psychological Science, 20(2), 207–214.

Serences, J. T., & Yantis, S. (2006, January). Selective visual attention and perceptual

coherence. Trends in Cognitive Sciences.

Shao, N., Li, J., Shui, R., Zheng, X., Lu, J., & Shen, M. (2010). Saccades elicit obligatory

allocation of visual working memory. Memory & Cognition, 38(5), 629–640.

Shapiro, K. L., Caldwell, J., & Sorensen, R. E. (1997). Personal names and the attentional

blink: a visual “cocktail party” effect. Journal of Experimental

Psychology. Human Perception and Performance, 23(2), 504–14.

Shen, K., & Pare, M. (2014). Predictive Saccade Target Selection in Superior Colliculus

during Visual Search. Journal of Neuroscience, 34(16), 5640–5648.

Sherrington, C. S. (1918). Observations on the sensual role of the proprioceptive nerve-

supply of the extrinsic ocular muscles. Brain, 41(3–4), 332–343.

Sheth, B. R., & Shimojo, S. (2001). Compression of space in visual memory. Vision

Research, 41(3), 329–341.

Skavenski, A. A., Haddad, G., & Steinman, R. M. (1972). The extraretinal signal for the

visual perception of direction. Perception & Psychophysics, 11(4), 287–290.

Slattery, T. J., Angele, B., & Rayner, K. (2011). Eye movements and display change

detection during reading. Journal of Experimental Psychology: Human Perception

and Performance, 37(6), 1924–1938.

Smith, E. E., Jonides, J., Koeppe, R. A., Awh, E., Schumacher, E. H., & Minoshima, S.

(1995). Spatial versus Object Working Memory: PET Investigations. Journal of

Cognitive Neuroscience, 7(3), 337–356.

Sommer, M. A., & Wurtz, R. H. (2002). A Pathway in Primate Brain for Internal

Monitoring of Movements. Science, 296(5572), 1480–1482.

Sommer, M. A., & Wurtz, R. H. (2004). What the Brain Stem Tells the Frontal Cortex. II.

Role of the SC-MD-FEF Pathway in Corollary Discharge. Journal of

Neurophysiology, 91(3), 1403–1423.

Sommer, M. A., & Wurtz, R. H. (2006). Influence of the thalamus on spatial visual

processing in frontal cortex. Nature, 444(7117), 374–377.

Sperling, G. (1960). The information available in brief visual presentations.

Psychological Monographs: General and Applied, 74(11), 1–29.

Subramanian, J., & Colby, C. L. (2014). Shape selectivity and remapping in dorsal stream

visual area LIP. Journal of Neurophysiology, 111(3), 613–27.

Swan, G., & Wyble, B. (2014). The binding pool: A model of shared neural resources for

distinct items in visual working memory. Attention, Perception, & Psychophysics,

76(7), 2136–2157.

Tanaka, L. L., Dessing, J. C., Malik, P., Prime, S. L., & Crawford, J. D. (2014). The

effects of TMS over dorsolateral prefrontal cortex on trans-saccadic memory of

multiple objects. Neuropsychologia, 63, 185–193.

Tas, A. C., Luck, S. J., & Hollingworth, A. (2016). The relationship between visual

attention and visual working memory encoding: a dissociation between covert and

overt orienting. Journal of Experimental Psychology: Human Perception and

Performance, 42(8), 1121–1138.

Theeuwes, J., Belopolsky, A. V., & Olivers, C. N. L. (2009). Interactions between

working memory, attention and eye movements. Acta Psychologica, 132(2), 106–

114.

Theeuwes, J., Kramer, A. F., & Irwin, D. E. (2011). Attention on our mind: The role of

spatial attention in visual working memory. Acta Psychologica, 137(2), 248–251.

Treisman, A. M. (1977). Focused attention in the perception and retrieval of

multidimensional stimuli. Perception & Psychophysics, 22(1), 1–11.

Treisman, A. M. (1996). The binding problem. Current Opinion in Neurobiology, 6(2),

171–8.

Treisman, A. M. (1998). Feature binding, attention and object perception. Philosophical

Transactions of the Royal Society B: Biological Sciences, 353(1373).

Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention.

Cognitive Psychology, 12(1), 97–136.

Treisman, A. M., & Schmidt, H. (1982). Illusory conjunctions in the perception of

objects. Cognitive Psychology, 14(1).

Umeno, M. M., & Goldberg, M. E. (2001). Spatial processing in the monkey frontal eye

field. II. Memory responses. Journal of Neurophysiology, 86(5), 2344–52.

van den Berg, R., Shin, H., Chou, W.-C., George, R., & Ma, W. J. (2012). Variability in

encoding precision accounts for visual short-term memory limitations. Proceedings

of the National Academy of Sciences of the United States of America, 109(22),

8780–5.

van der Heijden, A. H. C., van der Geest, J. N., de Leeuw, F., Krikke, K., & Müsseler, J.

(1999). Sources of position-perception error for small isolated targets. Psychological

Research, 62(1), 20–35.

Verfaillie, K. (1997). Transsaccadic memory for the egocentric and allocentric position of

a biological-motion walker. Journal of Experimental Psychology: Learning,

Memory, and Cognition, 23(3), 739–760.

Vogel, E. K., Luck, S. J., & Shapiro, K. L. (1998). Electrophysiological evidence for a

postperceptual locus of suppression during the attentional blink. Journal of

Experimental Psychology. Human Perception and Performance, 24(6), 1656–74.

Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of features, conjunctions

and objects in visual working memory. Journal of Experimental Psychology. Human

Perception and Performance, 27(1), 92–114.

Walker, M. F., Fitzgibbon, E. J., & Goldberg, M. E. (1995). Neurons in the monkey

superior colliculus predict the visual result of impending saccadic eye movements.

Journal of Neurophysiology, 73(5), 1988–2003.

Wandell, B. A. (1995). Foundations of vision. Sinauer Associates.

Wang, X.-J. (2001). Synaptic reverberation underlying persistent activity.

Trends in Neurosciences, 24(8), 455–463.

Weichselgartner, E., & Sperling, G. (1987). Dynamics of automatic and controlled visual

attention. Science, 238(4828), 778–80.

Wheeler, M. E., & Treisman, A. M. (2002). Binding in short-term visual memory.

Journal of Experimental Psychology: General, 131(1), 48–64.

Wilken, P., & Ma, W. J. (2004). A detection theory account of change detection. Journal

of Vision, 4(12), 11.

Wittenberg, M., Bremmer, F., & Wachtler, T. (2008). Perceptual evidence for saccadic

updating of color stimuli. Journal of Vision, 8(14), 9–9.

Wolf, C., & Schütz, A. C. (2015). Trans-saccadic integration of peripheral and foveal

feature information is close to optimal. Journal of Vision, 15(16), 1.

Wolfe, J. M., & Cave, K. R. (1999, September). The psychophysical evidence for a

binding problem in human vision. Neuron.

Wurtz, R. H. (2008). Neuronal mechanisms of visual stability. Vision Research, 48(20),

2070–89.

Xu, Y. (2002). Limitations of object-based feature encoding in visual short-term memory.

Journal of Experimental Psychology. Human Perception and Performance, 28(2),

458–68.

Zhang, W., & Luck, S. J. (2008). Discrete fixed-resolution representations in visual

working memory. Nature, 453(7192), 233–235.

Zokaei, N., Gorgoraptis, N., Bahrami, B., Bays, P. M., & Husain, M. (2011). Precision of

working memory for visual motion sequences and transparent motion surfaces.

Journal of Vision, 11(14), 2–2.
