TRACKING THE CLOSED EYE BY CALIBRATING ELECTROOCULOGRAPHY WITH PUPIL-CORNEAL REFLECTION

by

Raymond R. MacNeil

B.Sc., University of Toronto, 2016

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF ARTS

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES

(Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA

(Vancouver)

August 2020

© Raymond R. MacNeil, 2020

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the thesis entitled:

TRACKING THE CLOSED EYE BY CALIBRATING ELECTROOCULOGRAPHY WITH PUPIL-CORNEAL REFLECTION

submitted by Raymond R. MacNeil in partial fulfillment of the requirements for the degree of Master of Arts in Psychology

Examining Committee:

Dr. James T. Enns (Psychology, Faculty of Arts) Supervisor

Dr. Ipek Oruc (Ophthalmology and Visual Sciences, Faculty of Medicine) Supervisory Committee Member

Dr. Peter Graf (Department of Psychology, Faculty of Arts) Supervisory Committee Member


Abstract

Electrooculography (EOG) offers several advantages over other methods for tracking eye movements, including its low cost and its capability of monitoring gaze position when the eyes are closed. Yet EOG poses its own challenges: to determine saccadic distance and direction, the electrical potentials measured by EOG must be calibrated in some way with physical distance. Moreover, the EOG signal is highly susceptible to noise and artifacts arising from a variety of sources (e.g., activity of the facial muscles). Here we describe a method for estimating a corrected EOG signal by simultaneously tracking gaze position with an industry-standard pupil-corneal reflection (PCR) system. We first compared the two measurements with the eyes open, under two conditions of full illumination and a third condition of complete darkness. Compared to the PCR signal, the EOG signal was less precise and tended to overestimate saccadic amplitude. We harnessed the relation between the two signals in the dark condition in order to estimate a corrected EOG-based metric of saccade end-point amplitude in a fourth condition, where the participants’ eyes were closed. We propose that these methods and results can be applied to human-machine interfaces that rely on EOG eye tracking, and to advancing research on sleep, visual imagery, and other situations in which participants’ eyes are moving but closed.


Lay Summary

Homo sapiens are insatiable infovores, making eye movements almost constantly to take in new information. Most research has focused on the activity of the eyes while the eyelids are open. Comparatively little is known about eye movements when the eyelids are closed — as when we dream, meditate, or imagine — because the current tools for measuring them are limited. This thesis describes an approach to measuring closed-eye movements that combines an older eye tracking technique with a modern one in order to improve closed-eye tracking accuracy.

The results show that this technique holds promise for improved use of closed-eye movements in the control of human-machine interfaces and for research on spatial cognition.


Preface

I, Raymond MacNeil, am the primary author of the work presented within this thesis. I was responsible for writing the manuscript, analyzing the data, and interpreting the results. One section of this thesis describes an algorithm used to process data collected with a pupil-corneal reflection eye tracker. I alone wrote the code and developed the concepts that underpin this algorithm. Dr. James T. Enns was the supervisory author and provided essential feedback and guidance throughout the entire research process. He also made substantial contributions to editing the manuscript throughout its development. The key idea underpinning this work is best attributed to Dr. Enns. Both Dr. Enns and I designed the experiment and worked out the finer details of its implementation. P.D.S.H. Gunawardane, who is co-supervised by Drs. Mu Chiao (Microelectromechanical Systems Laboratory) and Clarence W. de Silva (Industrial Automation Laboratory), oversaw the technical aspects of the electrooculography recordings. Jamie Dunkle (Research Technician, UBC Vision Lab) wrote the software for the experiment and provided technical assistance throughout the course of the study. P.D.S.H. Gunawardane, Leo Zhao, Jamie Dunkle, and I each contributed to writing the code/algorithms that were used to process the EOG data. Figure 4 was adapted from work performed by Jamie Dunkle.

All research that is described here was conducted in adherence to Canada’s Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (2nd Ed.) and received approval from The University of British Columbia’s behavioural research ethics board (H18-03792).

In the course of my Master’s program I received funding from the Natural Sciences and Engineering Research Council (CGS-M). I also received internal funding from UBC in the form of a Faculty of Arts Graduate Award.


Table of Contents

ABSTRACT

LAY SUMMARY

PREFACE

TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

ACKNOWLEDGEMENTS

DEDICATION

INTRODUCTION
    EOG HISTORY AND BACKGROUND
    OTHER EYE TRACKING SYSTEMS
        Scleral Search Coil Technique
        Infrared-Based Pupil-Corneal Reflection Eye Tracking
    UNIQUE STRENGTHS OF EOG
    STUDY OVERVIEW
        Research Aim One
        Research Aim Two
        Research Aim Three

METHOD
    PARTICIPANTS
    MATERIALS AND APPARATUS
    PROCEDURE
        Preparation
        EyeLink Setup & Calibration
        EOG Calibration & Training Protocol
        Task Conditions
    EYELINK DATA PREPROCESSING
    COMPLEX-AND-MULTIEVENT GAZE PATTERN PARSER
    EOG DATA PROCESSING
    DATA ANALYSIS
        EOG Correction Models
        Comparing EyeLink, and the Corrected and Uncorrected EOG Outcomes

RESULTS
    CORRECTED EOG SIGNAL
        Overview
        Corrected Darkness EOG
        Corrected Eyes-Closed EOG
    ACCURACY (ERROR) OF LANDING POINTS ACROSS CONDITIONS

DISCUSSION
    AIM ONE: CALIBRATING EOG MEASURES WITH SIMULTANEOUS PCR RECORDINGS
    AIM TWO: UNDERSTANDING THE ROLE OF VISUAL FEEDBACK FOR EYE MOVEMENTS
    AIM THREE: AN ALGORITHM FOR DETERMINING TARGET ACQUISITION

TABLES

FIGURES

REFERENCES

List of Tables

Table 1. Condition Parameters

Table 2. Example of Event Data Provided by the EyeLink’s Event Parser

Table 3. EyeLink Calibration Data

Table 4. Summary of Regression Analyses to Derive EOG Correction Factors

List of Figures

Figure 1. Spatial Layout of Stimulus Display

Figure 2. Representative Traces of Gaze Position for EyeLink Data

Figure 3. Supervised Nature of the Gaze Landing Point Classification Algorithm

Figure 4. EOG Signal Processing and Feature Identification

Figure 5. Comparison of EyeLink, EOG, and EOG Corrected

Figure 6. Validation Step

Figure 7. Horizontal Landing Error Across Conditions

Figure 8. EyeLink Measured Vertical Landing Error Across Conditions

Acknowledgements

I wish to express my deepest gratitude to my supervisor, Dr. James T. Enns (Jim). Jim is a man of the highest caliber, and he has been an extremely positive force in my life. His support in my academic and professional development has been unwavering, and I am indebted.

I further would like to extend my sincerest appreciation to my committee members, Drs. Ipek Oruc and Peter Graf, for the valuable critique and insights they provided. I also wish to thank my lab mates for the helpful feedback they have offered throughout the course of this project and my time in the MA program in general. Special mention is due for Dr. Rob Whitwell, who has always been there to offer his incredibly detailed advice and guidance. A second special mention is due for Jamie Dunkle – our lab manager and technician. Jamie’s technical aptitude, versatility, and brilliance have inspired me over the last two years to develop and refine my own technical repertoire: from coding and graphic design to building electronic circuits.

Additionally, I want to extend my genuine gratitude to my collaborators from the Microelectromechanical Systems and Industrial Automation Laboratories (Mechanical Engineering): Hiroshan Gunawardane, Leo Zhao, Dr. Mu Chiao, and Dr. Clarence W. de Silva. I look forward to continuing our fruitful collaboration. I am further grateful to my dedicated research assistants: Shay Zhang, Noor Brar, and Edward Lin.

I also wish to thank my family: my brothers Will and James, as well as my parents, Rick and Ursula. Finally, I extend my everlasting thanks to my dearest friend, Carol Krause, without whom none of this would be possible.


Dedication

For Carol


Introduction

Humans change their eye position several times a second under everyday viewing conditions, and there are several technological means of tracking these changes. However, humans also change their eye position when their eyes are closed, as when they are sleeping, meditating, and engaging in visuospatial imagination. Under these circumstances, tracking gaze position is considerably more challenging. The main goal of this research is to devise a reliable method of tracking eye gaze position when a study participant’s eyes are closed.

The introduction for this research is organized into three sections. We first provide a brief overview of electrooculography (EOG), including its advantages and disadvantages. A second section compares EOG with other eye tracking methods, namely the scleral search coil technique (SSCT) and the infrared-based pupil-corneal reflection (PCR) signal. This comparison shows that despite the availability of more reliable and accurate eye tracking techniques, EOG remains an important methodology for measuring gaze position. Further, this comparison demonstrates that EOG has unique properties which oftentimes make it the ideal method of eye tracking for applications to (1) the development of novel, gaze-based human-machine interfaces, and (2) research that could benefit from the measurement of eye position when the lids are closed. In a third section, we introduce the specific research aims and methods undertaken in the present study.

EOG History and Background

The presence of a bioelectrical dipole within the human eye was first described by Du Bois-Reymond (1884) in his seminal treatise, Untersuchungen Über Thierische Electricität. EOG measurements reflect the standing potential of this dipole, which specifically exists between the cornea and the ocular fundus—often referred to as the corneo-retinal potential. The magnitude of this potential difference is usually between 0.4 and 1.0 mV, with the current travelling away from the negatively charged fundus and toward the positively charged cornea (Barbara et al., 2020). This dipole has generally been thought to exist as a consequence of greater metabolic activity within the retina relative to the anterior structures of the eye (Hutton, 2019). Subsequent work by R. Jung (as cited by Heide et al., 1999), Fenn and Hursh (1936), and E. Marg (1951) during the early to mid-1900s eventually culminated in the development of EOG as a means to estimate the angular displacement of the eye from a default position. Because the change in the corneo-retinal potential is linearly proportional to the angular displacement of the eye from the primary position, this biosignal can be leveraged to estimate the position of eye gaze when the head remains stationary. This linear relationship is approximately maintained within the range of ±30° for horizontal and ±20° for vertical eye rotation (Majaranta & Bulling, 2014). The resulting signal—that is, the measured changes in bioelectrical potential corresponding with movement of the eyes over time—is known as the electrooculogram.

The spatial resolution of an eye tracking device can be thought of in terms of the smallest eye movement it is capable of detecting. In the case of EOG, this is approximately 1.0°, though this is contingent on the recording setup, procedure, hardware, and the quality of instructions provided to the subject, to name some examples. EOG can achieve very high temporal resolution, but it is limited by the particular signal acquisition system being used. The temporal resolution of a data acquisition device refers to the interval of time that must pass before a new data sample can be registered. This is often expressed in terms of the instrument’s sampling rate, the number of unique data points that can be collected in one second (e.g., a 250-Hz system registers a new sample every 4 ms).

Since its potential clinical utility was first demonstrated by Arden et al. (1962), EOG has proven to be an invaluable tool in the assessment of retinal integrity and the diagnosis of certain eye diseases. EOG has also contributed to major discoveries relating to the oculomotor system, such as the rapid eye movements (REMs) that occur during certain phases of sleep (Aserinsky & Kleitman, 1953). Its primary advantage over other eye tracking systems (reviewed in the next section) is that EOG is capable of recording eye movements for extended periods of time and in a variety of different environments. This currently makes it the most practical means of studying REM sleep (Aserinsky, 1971), associated dream imagery (Hong et al., 1997), and visuospatial imagery during waking states when the eyes are closed (Antrobus et al., 1964), to name a few examples.

Other Eye Tracking Systems

This section is not intended to exhaustively review the many existing (or once existing) eye tracking methodologies. The interested reader who wishes to learn about these other systems is referred to excellent reviews by Duchowski (2017), Hutton (2019), and Wade (2015). The discussion is limited to SSCT and infrared-based PCR eye tracking because they are directly related to our goal of tracking eyes when they are closed. We discuss the properties of SSCT because it is the only other system that can track the position of the eye during lid closure. Further, SSCT is compared to EOG in order to demonstrate that, despite having properties that in some respects make it superior to EOG, various practical considerations often preclude it as a viable option for tracking the eyes. We discuss the properties of infrared-based PCR eye tracking and how it compares to EOG because it is currently the most popular method of eye tracking used in human perception laboratories. This makes it a practical anchor point for the comparison to and calibration of EOG.

Scleral Search Coil Technique. The magnetic SSCT, introduced in the early 1960s (Robinson, 1963), is often hailed as the ‘gold standard’ of eye tracking techniques. In this system, a silicone annulus containing a copper coil is mounted onto the sclera of the participant’s eyeball, usually after administering a topical anesthetic. The participant is subsequently placed within a magnetic field. As the participant’s eye rotates, it shifts the position of the coil, thereby inducing a measurable electrical signal in the coil (Hutton, 2019). SSCT is able to attain levels of accuracy and spatial resolution on par with modern PCR systems (Imai et al., 2005), which measure the position of gaze based on the vector defining the angle between the reflections of the cornea and pupil. These reflections are captured by a high-speed video camera and subsequently rendered by complex image processing algorithms. We discuss this system in greater detail in the subsequent section.

Like EOG, SSCT permits the measurement of eye movements during lid closure. Another of its benefits is that it can measure horizontal (x), vertical (y), and torsional (z) eye movements—torsion being a capability that EOG lacks. Unfortunately, the invasive nature of SSCT and the physical space required to house the hardware that powers it highly constrain its use. There are also questions about the generality of the data acquired from SSCT, since it appears to alter the kinematics of eye movements (Frens & Van der Geest, 2002).

For the participant, there are other drawbacks. To begin, topical anesthesia must usually be administered, and volunteers report moderate to high discomfort during wear, even with anesthesia (Irving et al., 2003). These authors also reported adverse effects such as diminished visual acuity, buckling of the iris, hyperemia of the conjunctiva, and corneal staining. Some of these effects, including reduced visual acuity, appeared in as little as fifteen minutes after insertion of the lens and coil. Importantly, all effects were transient and eventually dissipated after removal of the coil. Irving et al.’s finding regarding eye irritation was (and remains) no surprise to vision researchers. Ocular discomfort tends to increase over the time that the lens and coil system remains mounted on the eye. As a consequence, the use of SSCT is typically limited to thirty minutes or less.

All things considered, SSCT has excellent properties, including high temporal and spatial resolution, the ability to record with eyelids closed, and the ability to measure the 3D position of the eye, which is not possible with EOG. However, the invasive nature of SSCT places considerable constraints on its use, both in the context of research and for practical applications (e.g., human-machine interfaces). There are also research environments in which its use would be impossible, such as inside an MRI scanner, or highly impractical, as in the context of field studies within remote environments (e.g., MacNeil et al., 2016; Mogilever et al., 2018).

Infrared-Based Pupil-Corneal Reflection Eye Tracking. The growth of inexpensive digital technology during the 1980s coincided with a relative decline in the popularity of EOG. In the 1990s, infrared-based PCR systems became more accurate and economical (Holmqvist & Andersson, 2017). Their spatial and temporal resolution became vastly superior to that provided by EOG, even when using the best of biosignal acquisition devices. Modern PCR eye tracking systems, such as the EyeLink 1000® (SR Research Ltd., Ottawa, ON) used in the work described here, can attain a spatial resolution as high as 0.01° and have typical accuracy levels of 0.25–0.50° (SR Research Ltd., 2009). The temporal resolution of these systems is also very high, with sampling rates of 1,000–2,000 Hz typical of modern PCR trackers. PCR eye trackers vary in the number of dimensions of movement that are recorded and in the extent to which offline processing is required to obtain it.

Infrared-based PCR systems are also user-friendly in that they are usually bundled with high-performance software that allows for online event detection and the design of gaze-contingent displays. A serious disadvantage of PCR trackers is that the quality of the recording is impaired when anything obstructs the camera’s view of the eye. This includes the lenses, frames, and reflective coatings found on many eyeglasses worn by participants, eyelids that fold over the cornea, and long eyelashes. Most pertinent to our present goal is that infrared-based PCR systems lack, by their very nature, the ability to record eye movements when the eyes are closed.

Unique Strengths of EOG

Considering that SSCT systems are too impractical and invasive for widespread use and that PCR systems are unable to record eye movements during lid closure, EOG remains the best available tool for measuring closed-eye movements in a variety of settings. The recent emergence of low-cost and portable EOG systems bundled with “do-it-yourself” brain-computer interfaces suggests they also merit consideration for open-eye tracking. Their unique combination of portability and increasingly low cost suggests they may soon be in widespread use. As such, it will be important to be able to characterize their accuracy and precision for open-eye applications as well.

One example of the type of application that is leading to renewed interest in EOG concerns the design of functional and safe human-machine interfaces (Bulling et al., 2011). For instance, several research groups are working hard to develop an EOG-based wheelchair guidance system (for a review, see Barea Navarro et al., 2018). This has potential benefits for individuals with neurological or musculoskeletal disorders that severely limit their mobility. One example of a human-computer interface that leverages both EOG’s portability and its capacity to measure closed-eye movements can be seen in Findling et al.’s (2019) prototype of a closed-eye, gaze-based password entry system for mobile devices.

Renewed interest in EOG among cognitive neuroscientists is evidenced by studies of the default brain network (i.e., the resting state). These studies have shown that when participants lie in complete darkness, having their eyes open versus closed results in distinctive patterns of brain activation (Hüfner et al., 2008; Marx et al., 2003; Yang et al., 2007). These two different patterns of brain activation are referred to by some researchers as the ‘exteroceptive’ (eyes open) and ‘interoceptive’ (eyes closed) states, respectively (Hüfner et al., 2008, 2009; Marx et al., 2003; Wei et al., 2018). The exteroceptive state is associated with an increased blood oxygenation level dependent (BOLD) signal in attentional and oculomotor systems, including the frontal eye fields, thalamus, supplementary motor areas, cerebellum, and dorsolateral prefrontal cortex. On the other hand, the interoceptive state is associated with an increased BOLD response in multisensory systems and areas associated with dreaming and imagery, including the somatosensory cortex, orbitofrontal cortices, and the medial occipital gyri (Marx et al., 2004; Wei et al., 2018).

Establishing a ‘resting state’ baseline condition is a critical methodological step in functional neuroimaging studies. The interpretation of the results from the experimental conditions depends in large part on a comparison with the activity observed during this baseline state. Marx et al. (2004) demonstrated that the results of a simple visual task performed in the scanner were quite different depending on whether the baseline condition was exteroceptive (eyes open) or interoceptive (eyes closed). Because EOG is suitable for use within an MRI scanner, researchers can leverage this tool to study how eye movements may differ under exteroceptive and interoceptive conditions, and to what extent these eye movement differences may contribute to the distinctive brain activation patterns in each.

Study Overview

Research Aim One. The aims of this investigation were threefold. The first aim was to develop a method for improving the measurement of eyes-closed gaze position by pairing EOG with infrared-based PCR eye tracking. By simultaneously measuring open-eye movements using EOG and an industry-standard optical eye tracking device, we were able to anchor the inherently noisier and artifact-prone EOG signal to the more reliable and spatially accurate PCR signal. This, in turn, allowed us to derive a conversion function by which we could estimate a corrected EOG signal corresponding to the eyes-closed recordings. A secondary benefit is that we provide the eye tracking community with calibrated information on the quality and reliability of EOG recordings they can expect to obtain using a popular, portable, and relatively inexpensive biosignal acquisition device—OpenBCI’s Cyton Biosensing Board. We identified two previous reports within the literature that described the combined use of EOG and PCR optical eye tracking. In these studies, the purpose of the concurrent use of EOG with optical eye tracking was either to validate EOG measurements of saccadic reaction time (McIntosh & Buonocore, 2012) or to “obtain non-drifting [DC signal] position data of opened eyes” (Sprenger et al., 2010, p. 46). This latter study similarly compared gaze position data available from infrared-based PCR recordings in eyes-opened conditions (visually and memory guided) to that of electrooculograms recorded in an eyes-closed condition. However, the comparison is limited to a brief pictorial summary of the data from a single subject and, apparently, only a handful of trials. Moreover, the report provides no details regarding how synchronization between the two tracking devices was achieved, which is a critical methodological detail. In sum, we are unaware of any previous study that has paired the EOG and PCR signals with the explicit goal of deriving a refined estimate of the EOG signal recorded during lid closure. The results of this research endeavour are described in the section entitled “Corrected EOG Signal.”

Research Aim Two. The second aim of this work was to characterize voluntary oculomotor performance while systematically varying the availability of visual feedback. Specifically, we were interested in further understanding the level of accuracy at which human subjects can reproduce eye movements to remembered target locations while in darkness and with their eyelids closed. While this question had been explored previously, we were surprised to learn that the reported results are primarily descriptive in nature and reflect datasets comprising only a handful of participants (Allik et al., 1981; Becker & Fuchs, 1969). Shaikh and colleagues (2010) presented data that could potentially speak further to this question, but their analysis was limited to how eyelid closure influenced the trajectory of eye movements rather than their accuracy per se. To shed further light on this matter, we compared the accuracy of eye movements made to predesignated target locations under four conditions: (1) with visible target markers in a fully illuminated room; (2) with no visible target markers in a fully illuminated room; (3) in complete darkness; and (4) in complete darkness and with the eyelids closed. Target locations were positioned across the horizontal meridian at ±10.5° and ±21.0° to capture a reasonable range capable of being measured via EOG, while at the same time avoiding placement near the edge of the display, which could conceivably be used as a landmark in condition two (visual markers absent). The results of this aim are described in the section entitled “Accuracy (Error) of Landing Points Across Conditions.”

Research Aim Three. Our third research aim only became clear once we began work on the two aims described above. In order to measure the accuracy of voluntary shifts in gaze, we needed a methodology to determine when participants believed they had acquired a target position. Research-grade, high-accuracy optical eye tracking hardware (e.g., that manufactured by SensoMotoric Instruments and SR Research Ltd.) is usually bundled with software that allows for the online detection of fixations, saccades, and blink events. While such event data were very useful for our purpose, they only solved half of the puzzle. To determine the target landing point, we had to parse these events in terms of event sequences and patterns, not individual fixations, saccades, and blinks.

Complicating matters, in the process of visually inspecting our data, we found that participants’ eye movements did not always conform to the expected behavioural sequence of events. For example, there were a number of trials on which a participant would first execute an apparent anticipatory gaze sweep, which could be either congruent or incongruent with the target location. Accordingly, we refer to these occurrences as congruent or incongruent “false starts,” respectively. The term false start describes this phenomenon in terms of the computational problem it posed for the algorithm, which was to differentiate saccade-fixation event sequences associated with these apparent pre-target gaze sweeps from secondary – or in some cases, tertiary – event sequences that had a higher probability of reflecting the participant’s true attempt to acquire the target. This algorithm is described in the section entitled “Complex-and-Multievent Gaze Pattern Parser.”

Method

Participants

Seventeen adults (12 female) attending the University of British Columbia participated in the experiment at the UBC Vision Lab during January and February of 2020. The first author (RRM) was among them. Participants were aged 18-29 years (M = 21.39, SD = 2.98) and each reported having normal or corrected-to-normal vision. All participants provided written informed consent, and with the exception of RRM, received partial course credit in compensation for their time.

Our recruitment advertisement indicated that eligible participants were required to have normal or corrected (20/20) vision, as well as normal function of their eye muscles. No specific exclusion criteria were indicated. However, we ultimately excluded two participants from the analysis and results: one participant failed to meet calibration requirements for PCR eye tracking; another reported having Leber’s hereditary optic neuropathy, a congenital eye disease known to be associated with oculomotor abnormalities (Irving et al., 2003; Murphy, 2001; Vincent et al., 2017). All research was conducted in accordance with Canada’s Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (2nd Ed.) and was approved by the University of British Columbia’s behavioural research ethics board (H18-03792).

Materials and Apparatus

Stimulus presentation was controlled using custom software developed with the PsychoPy3 package (Peirce et al., 2019; Peirce & MacAskill, 2018). Visual stimuli were presented on a 121.9 × 71.1-cm LCD monitor (resolution = 1,920 × 1,080 pixels; refresh rate = 30 Hz) at a viewing distance of 53 cm. In the visually guided saccade tasks, one central and four eccentric fixation disks (outer diameter = 0.55°; inner diameter = 0.15°) were presented on a grey backdrop across the horizontal meridian. The outer annulus of each location marker was coloured black, while the enclosed portion was coloured white. The layout of the visual display viewed by participants is shown in Figure 1. The visual marker presented in the centre of the display was the default resting location for the participant’s fixation between trials. The four markers labeled A–D indicated the potential locations to which the participant was required to direct their eye gaze on each trial. On a given trial, an auditory stimulus of either ‘A’, ‘B’, ‘C’, or ‘D’ signaled to the participant that they were to execute a horizontal eye movement from the central resting point to the −21.0°, −10.5°, 10.5°, or 21.0° marker location, respectively. Hereafter, we refer to these as marker locations A–D.
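For readers who want to reproduce the display geometry, the following MATLAB sketch converts the marker eccentricities into pixel offsets under a flat-screen, central-viewing assumption. It is our illustration only; the actual stimulus code was written in PsychoPy, as noted above.

```matlab
% Illustrative only: convert horizontal eccentricity (degrees of visual
% angle) into a pixel offset from screen centre. Geometry follows the
% text: 121.9-cm-wide screen, 1,920 px across, 53-cm viewing distance.
screenWidthCm = 121.9;
screenWidthPx = 1920;
viewDistCm    = 53;

pxPerCm = screenWidthPx / screenWidthCm;             % ~15.75 px per cm
deg2px  = @(deg) tand(deg) .* viewDistCm .* pxPerCm;

markerDeg = [-21.0, -10.5, 10.5, 21.0];              % marker locations A-D
markerPx  = deg2px(markerDeg)                        % offsets from centre (px)
```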

In three of the four conditions of the experiment (participant’s eyelids open), eye movements were simultaneously recorded using an EyeLink® 1000 (SR Research Ltd., Ottawa, ON) infrared-based PCR eye tracker and EOG. The EyeLink system was configured in tower-mount mode, meaning the infrared camera was situated atop a chin-and-forehead rest used to stabilize the participant’s head. The EyeLink was equipped with a 910-nm infrared illuminator, thereby eliminating light from the visible spectrum that would otherwise be visible in low ambient lighting when using the EyeLink’s standard-issue 890-nm illuminator. The EOG signal was acquired using a Cyton Biosensing Board (OpenBCI, New York City, USA) and Skintact F310 (Leonhard Lang, FL, USA) Ag/AgCl solid gel electrodes. The EOG signal was wirelessly transmitted to the display PC via Bluetooth. Both the EyeLink and the Cyton board recorded monocularly from the right eye at a sampling rate of 250 Hz. Temporal synchronization of the tracking devices and the display PC was achieved using Lab Streaming Layer. In the fourth condition, where the participant’s eyes were closed, as described in the procedure that follows, eye position was recorded using only EOG.
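As an illustration of how Lab Streaming Layer (LSL) places both recordings on a common clock, the sketch below resolves a hypothetical EOG stream from MATLAB using the liblsl-Matlab bindings and pulls time-stamped samples. The stream type and buffering details are assumptions; the study’s actual acquisition pipeline may have been organized differently.

```matlab
% Sketch (liblsl-Matlab): pull EOG samples whose timestamps lie on the
% shared LSL clock, so they can be aligned offline with EyeLink and
% display-PC events recorded against the same clock. 'EOG' type assumed.
lib     = lsl_loadlib();
streams = lsl_resolve_byprop(lib, 'type', 'EOG');
inlet   = lsl_inlet(streams{1});

buffer = [];
for k = 1:250                              % ~1 s of data at 250 Hz
    [sample, ts] = inlet.pull_sample();    % ts: timestamp on the LSL clock
    buffer(end+1, :) = [ts, sample(:)'];   % store for offline alignment
end
```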

Procedure

Preparation. The participant was first administered an informed consent procedure, which included a brief demographics and eye health questionnaire. If the participant was wearing mascara or eyeshadow, the experimenter politely asked if they would mind removing it for the purpose of the experiment. All participants asked to do so complied. Following consent and demographics, the participant was prepared for the EOG recording. The experimenter first cleansed the electrode attachment sites on the skin using 70% isopropyl alcohol. This is a standard procedure in EOG recordings and is done to minimize electrical impedance between the skin and electrodes. A total of five electrodes were attached to the participant. To record horizontal eye movements, one electrode was attached to the outer canthus of the right eye, and a second (reference) electrode was attached to the base of the forehead. To record vertical eye movements, a third electrode was attached directly above the right eyebrow, a fourth electrode was attached below the eye, and a fifth reference electrode was placed on the middle of the forehead. The five wire leads were directed backward over the top of the head to prevent them from obscuring the participant’s view. The experimenter applied strips of medical adhesive tape to secure the leads in place.

After the electrodes were connected and the connection between the Cyton Board and the Lab Streaming Layer software was established, the experimenter verified that the EOG signal was being captured. The experimenter accomplished this by monitoring the EOG signal’s real-time output on a computer monitor and asking the participant to perform eyeblinks and/or saccades. To the trained eye, these behaviours give rise to identifiable and fairly unambiguous features in the EOG waveform. If the signal was absent or irregular, the experimenter followed a troubleshooting protocol to ensure the setup was properly configured.

Once a reliable EOG signal was attained, the experimenter provided the participant with verbal instructions regarding the experiment and the tasks they would be performing. First, the experimenter provided a general description of the experiment, explaining to the participant that they would be required to make eye movements to different locations about the display, and that they would be doing this under different conditions—namely, conditions that varied in whether the eye movements were performed while visual markers were present or absent, the room was illuminated or in darkness, and the eyelids were open or closed.

Following this overview, the experimenter loaded a PowerPoint slide, which revealed to the participant a representation of the display akin to that shown in Figure 1. Referring to the diagram, the experimenter explained that “at the start of a trial, you will hear the letter ‘A’, ‘B’, ‘C’, or ‘D’, and this will indicate to which of the four possible target locations you will redirect your gaze.” Continuing to make reference to the diagram, the experimenter informed the participant that they would then hold their gaze at that location for approximately two seconds, after which they would hear a ‘ping’ sound indicating that they should return their gaze to the home position. The experimenter went on to describe each of the conditions in greater detail: “In the first phase of the experiment, you will be making the ‘to-the-target’ and ‘back-to-base’ eye movement pattern with the lights on and while the target markers are visible.” With respect to condition two, participants were instructed to perform the task in “exactly the same manner,” but were informed that the location markers would no longer be present: “In response to the auditory cue sounded at the beginning of a trial, it is asked that you make an eye movement to what you remember to be the corresponding marker location.” The experimenter went on to inform the participant about the specific manipulations comprising conditions three and four. Specifically, the participant was told that the lights would be turned off in condition three, and that they would remain so until the conclusion of the experiment. Additionally, the experimenter informed the participant that in the fourth and final condition, they would be asked to perform the task with their eyelids closed. The experimenter emphasized to participants that it was important they refrain from moving their head throughout the testing session, and that they try to limit blinking to the inter-trial intervals. Finally, the participant was asked if they understood the instructions, and if they had any questions regarding what was required of them.

As the participant progressed through the experiment, the instructions specific to each condition were reiterated.

EyeLink Setup & Calibration. Before initiating the experiment, the height of the workstation’s chair and the chin-and-forehead rest were adjusted to optimally position the participant’s right eye in view of the EyeLink’s camera. However, care had to be taken so that the EOG reference electrodes did not come into contact with the forehead rest, and if necessary participants were instructed to use only the chinrest. We used the EyeLink system’s centroid (as opposed to the alternative ellipse-based) pupil tracking algorithm for all participants and testing sessions. The EyeLink system’s auto-threshold method was used to determine the parameters for optimal PCR detection. Further, the experimenter always took care to ensure the camera’s lens was in optimal focus.

The experimenter calibrated the EyeLink prior to administering the training protocol and prior to testing in conditions two and three (each to be described shortly). Aside from a notification message and the requirement of a key-press for initiation, the first task condition followed directly from the training protocol; accordingly, EyeLink calibration was not performed again at that time. For condition three, calibration was performed with the lights off, but while the display remained on. The calibration procedure comprised a series of sequentially appearing fixation points that formed a 3 × 3 (nine-point) grid. Horizontally, the grid spanned the same central region of the display across which the marker stimuli were distributed. Vertically, the centers of the fixation points were separated by 10.5°. For the initial calibration procedure, the experimenter instructed the participant that a series of fixation dots would sequentially appear on the display, and that as each dot appeared, their job was to steady their gaze on its center to the best of their ability. Participants were further instructed to press the space key once they had optimally fixated the dot. The key press accepted the fixation for that calibration point and also triggered the appearance of the next calibration point in the sequence. A validation step comprising this same series of events was performed for each calibration session. Although a reasonable effort was made to obtain an error of less than or equal to 0.50° for each calibration point, the ultimate criterion we required was that the average error not exceed 0.50°. Calibration and validation were repeated until this criterion was met. Because the calibration and validation steps were usually performed multiple times during the initial calibration session, participants were familiar with the task requirements in subsequent calibration sessions. Thus, no further instructions were generally provided, as the participant knew exactly what was required of them.

EOG Calibration & Training Protocol. The EOG calibration and training protocol was completed once, prior to condition one but after the initial EyeLink calibration. The protocol comprised a series of trials that were blocked by target location. These trials served to provide the data from which the EOG calibration factor was derived (i.e., the coefficient for transforming millivolts into degrees of visual angle) and to create an association for the participant between each target location and its respective auditory cue. All saccades in the EOG calibration and training protocol were executed to visible targets while the room remained illuminated. Participants completed a block of five or eight trials¹ for each of the A, B, C, and D horizontal targets (in that order). Afterward, participants similarly completed a block of five or eight trials for four additional markers positioned across the vertical meridian at 10.50°, 5.25°, −5.25°, and −10.50° relative to the home position. From top to bottom, these were signaled by the auditory cues ‘W’, ‘X’, ‘Y’, and ‘Z’, respectively. Unlike A, B, C, and D, the vertical markers were used strictly for calibration purposes and did not constitute target locations during the actual experiment.

For both the horizontal and vertical calibration phases, the start of each letter block was marked by a clearing of the display, and in place of the markers appeared (in pale yellow) the letters corresponding to their respective auditory cues. The letter corresponding to the set of trials that was to follow ‘flashed’ (was emboldened and then unemboldened) three times. Because the ordinal relationship of the letters’ alphabetic indices mirrored their left-to-right spatial sequence in identifying the marker locations, it was straightforward for participants to form an association between each of the letters and their respective targets.

¹ We initially used five calibration trials per target location. After testing five participants, we realized we had enough time in the testing session to increase these trials to eight. This larger number helped to improve the quality and reliability of the EOG calibration.

Task Conditions. Each participant was tested in four conditions in a fixed order. On each trial, participants heard an auditory cue indicating one of the four locations, which was their signal to move their eyes as rapidly and accurately as possible to the indicated target location. After attempting to fixate the target, participants were told to hold their gaze at that position for two seconds. After this 2000-ms pause, a ‘ping’ was sounded, informing the participant to return their gaze to the home base position. Once at the home position, there was a 3000-ms delay before a key press could be registered to initiate the next trial in the sequence. Otherwise, trials were self-paced, with each trial beginning when the participant pressed the space key.

The four conditions are summarized in Table 1. Each condition comprised 40 pseudo-randomly selected trials, with the constraints that there were ten trials per target location and no more than three consecutive trials at the same location. In condition one, participants performed the task with all the visual markers visible throughout the test. Hereafter, we refer to this as the ‘Markers’ condition. In condition two, which we refer to as the ‘No Markers’ condition, participants performed the task under the same visibility conditions, with the exception that there were no longer any visual markers on the display. This meant that participants had to rely on their memory of the target locations, based on their experience executing saccades during the EOG calibration and training protocol, as well as during the Markers condition. In the third condition, participants performed the task in darkness (the ‘Darkness’ condition), and in condition four, in darkness and with their eyelids closed (the ‘Eyes-Closed’ condition). Lighting was not restored until testing was complete in the Darkness and Eyes-Closed conditions. To remind them of the target locations, participants were shown a one-second display of the visual markers just prior to testing in conditions two through four.
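The trial-order constraints described above (ten trials per location, no more than three consecutive repeats) lend themselves to a simple rejection sampler. The sketch below is our reconstruction of such a generator, not the experiment’s actual randomization code.

```matlab
% Generate a 40-trial pseudo-random sequence: 10 trials for each of the
% four targets (coded 1-4 for A-D), rejecting any shuffle that places
% more than three consecutive trials at the same location.
targets = repmat(1:4, 1, 10);
valid = false;
while ~valid
    seq  = targets(randperm(numel(targets)));
    runs = diff(find([true, diff(seq) ~= 0, true]));   % run lengths
    valid = all(runs <= 3);
end
disp(seq)
```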

EyeLink Data Preprocessing

All EyeLink data preprocessing was performed using custom software written in MATLAB R2018b (The MathWorks, Natick, MA, USA). The native EyeLink data files (*.edf) were loaded into MATLAB using the Edf2Mat software package developed by Etter and Biedermann (2018). For positional information, we analyzed the tracker’s gaze-based data. The gaze-based data are a representation of where the participant’s gaze is located with respect to the display², and their raw form is given as x and y pixel coordinates. We took advantage of the EyeLink’s online event parser to classify saccades, fixations, and blinks in the tracker’s signal. In addition to classifying the events, the parser provides the end user with highly detailed information regarding these events, such as their start and end time, their average and peak velocity, their duration, and so forth. Additionally, the event parser registered the timestamps of messages sent from the display PC, such as trial start and end times. The parser classified saccades by implementing an algorithm that took into account eye velocity, acceleration, and movement. For the purpose of our experiment, the event parser was configured to register saccades with a velocity threshold of 30°/sec, an acceleration threshold of 8,000°/sec², and a movement threshold of 0.15°. Blink events are registered whenever the pupil is temporarily lost from the tracker’s signal. Fixations are the ‘default’ event, reflecting that neither a saccade nor a blink has been registered by the parser’s algorithm (K. Debono, SR Research, personal communication, July 2, 2020).

² In the eye-tracking literature, this is sometimes referred to as a world-centered reference frame. It is important that researchers distinguish between eye position data reported in a world-centred reference frame and data reported in a head-centred reference frame. The failure to make this distinction has contributed to conflicting definitions and conceptualizations of oculomotor phenomena within the eye-tracking literature (Hessels et al., 2018).
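To give a flavour of this loading step, the snippet below reads a native EyeLink file with the Edf2Mat converter and extracts the gaze samples. The file name is hypothetical, and the field names follow the toolbox’s documented Samples structure; treat them as illustrative.

```matlab
% Sketch: read an EyeLink *.edf recording with the Edf2Mat toolbox
% (Etter & Biedermann, 2018) and plot the horizontal gaze trace.
edf = Edf2Mat('P030.edf');        % hypothetical file name

t  = edf.Samples.time;            % sample timestamps (ms)
gx = edf.Samples.posX;            % horizontal gaze position (px)

plot(t - t(1), gx);
xlabel('Time (ms)'); ylabel('Horizontal gaze (px)');
```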

In spite of having the EyeLink’s event parser at our disposal, we were still tasked with deciphering which gaze events—among the many that may comprise a single trial—indexed the three events critical to determining when the participant believed they had acquired the target: (1) the gaze sweep to the target; (2) the landing point of the target gaze sweep; and (3) the gaze sweep back to the central position. To detect these three events on each trial, we developed a custom algorithm that identified the saccade marking the onset of the target sweep, the saccade marking the onset of the home sweep, and the location of the landing point on the display. The input to this algorithm consisted of parameters defining position and movement thresholds. However, these threshold parameters were defined flexibly enough to allow for the possibility that the target sweep might not be initiated from the central position, and that the home sweep might not be initiated from the target location. Once the algorithm defined the onset saccades marking these two gaze sweeps (target, home), the average gaze position of the longest fixation event within these bounds was taken as the target landing point. Moreover, to maximize the number of valid trials that could be retained and included in the dataset, we also had to account for gaze patterns that did not conform to the expected behavioural sequence of events. We provide a detailed overview of the mechanics and properties of this algorithm in the following section, “Complex-and-Multievent Gaze Pattern Parser.”

The amplitude of saccades was computed using the following set of equations,

$$X_{AMP} = \frac{GX_{END} - GX_{START}}{(UX_{END} + UX_{START})/2} \tag{1}$$

$$Y_{AMP} = \frac{GY_{END} - GY_{START}}{(UY_{END} + UY_{START})/2} \tag{2}$$

$$XY_{AMP} = \sqrt{X_{AMP}^{2} + Y_{AMP}^{2}} \tag{3}$$

where (1) $X_{AMP}$ and $Y_{AMP}$ represent, in degrees of visual angle, the eye movement size along the display’s horizontal and vertical dimensions, respectively; (2) $G$ represents, in pixels, the position of gaze corresponding with an eye movement event; (3) $U$ represents, in pixels-per-degree, the angular resolution of gaze at position $G$; (4) the subscripts associated with $G$ and $U$ denote the dimension (horizontal/x or vertical/y) and whether it is the start or end of an eye movement event; and (5) $XY_{AMP}$ represents the two-dimensional amplitude of the eye movement.
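Expressed in MATLAB, Equations 1–3 amount to the following function; the variable names mirror the symbols defined above.

```matlab
% Saccade amplitude (deg) from gaze positions (px) and the local angular
% resolution (px/deg) at the start and end of the movement (Eqs. 1-3).
function [xAmp, yAmp, xyAmp] = saccadeAmplitude(gx, gy, ux, uy)
    % gx, gy, ux, uy are two-element vectors: [start, end].
    xAmp  = (gx(2) - gx(1)) / mean(ux);   % Eq. (1)
    yAmp  = (gy(2) - gy(1)) / mean(uy);   % Eq. (2)
    xyAmp = hypot(xAmp, yAmp);            % Eq. (3)
end
```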

Eye blink artifacts tended not to pose any problems for this analysis, because they had already been clearly identified by the PCR tracker’s event parser. Blinks did, however, at times interfere with the preprocessing algorithm’s ability to classify the target sweep and base sweep onset saccades. This was solved by passing control of the algorithm to the user (here, the author), as described below. There were other events that led us to discard trials altogether. Trials were discarded if the loss of the pupil and corneal reflection by the tracker resulted in too few samples from which to calculate the outcome variables of interest (n = 20), the participant failed to execute a target sweep (n = 4), or the participant executed a target sweep in the wrong direction without correction (n = 4). We detected and removed outliers separately at the cell level of each individual participant. A conservative threshold of four median absolute deviations (MAD; Leys et al., 2013) was used for outlier screening. With respect to outlier removal, two additional criteria were set. First, we required that the removal of an outlier (or set of outliers) change the cell mean by at least one unit. Second, for every missing data point already present within a data cell, the MAD criterion for outlier status was increased by an additional 0.5 units. In total, 34 outliers were detected and removed from the dataset, which, when considered in addition to the invalid trials described above, left 96.6% of the data remaining. Of note, all outliers determined based on the horizontal error of the target landing point were also classified as outliers for each of the other outcome variables.
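A minimal sketch of the MAD screening rule, as applied within a single participant-by-condition cell, is given below; the additional cell-mean and missing-data criteria described above are omitted for brevity, and whether the MAD is used raw or scaled by the normal-consistency constant (Leys et al., 2013) is left as a flagged choice.

```matlab
% Flag outliers in one cell of data (e.g., horizontal landing errors for
% one participant in one condition) using an nMad-MAD criterion (default 4).
function isOut = madOutliers(x, nMad)
    if nargin < 2, nMad = 4; end
    med   = median(x);
    madX  = median(abs(x - med));          % raw (unscaled) MAD
    isOut = abs(x - med) > nMad * madX;
end
```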

Complex-and-Multievent Gaze Pattern Parser

The EyeLink’s event parser provides the end user with detailed information on each eye movement that is registered during a given trial. It identifies saccades, fixations, and blinks, and it provides information regarding these events, such as their start and end time, their amplitude (if relevant), peak velocity, and so forth. However, without further processing, the information on individual eye movement events cannot alone reveal complex gaze patterns that reflect a particular event sequence. For example, the output of individual events in the form of fixations and saccades cannot directly represent the acquisition of a target at some location a, or the departure from the target at location a and a subsequent shift in gaze toward a target at some location b.

We were thus faced with the challenge of finding a method to identify the events marking the gaze location when the participant (1) shifts their gaze from the home position to acquire the signaled target; (2) settles on a landing point representing the target location; and (3) shifts their gaze to return to the home position. In order to accurately characterize these three events, it was also important to detect irregular events, that is, occasions when the participant’s oculomotor behaviour did not conform with the instructions. We describe this algorithm next.

The operation of the algorithm is best illustrated using Figure 2, which shows the horizontal and vertical gaze trajectories of a randomly chosen participant (P030) for a single trial in each unique condition-by-target pairing. This figure was prepared using the raw sample-by-sample data and so can be used to demonstrate how the raw data samples map onto the events classified by the EyeLink’s event parser. Moving from top to bottom, each panel in Figure 2 represents an A, B, C, and D trial, respectively, as delineated by the labels presented along the ordinate on the figure’s righthand side. From left to right, each column of panels corresponds to the Markers, No Markers, and Darkness conditions, as indicated by the column headers. The abscissa of each panel represents the elapsed time from trial onset — the moment the auditory cue is sounded. Trials have been trimmed at 3.20 seconds for ease of illustration, in order to focus on the important features of the gaze behaviours in this task. The ordinate of each panel represents the position of gaze, with the solid line representing horizontal gaze position and the dash-dot line representing vertical gaze position. The dashed line that horizontally spans the width of each panel signifies the trial’s corresponding target location. The thick, solid blue line indicates the landing point as determined by our classification algorithm.

Consider first the upper-left panel (Figure 2a). The onset of the trial begins with the participant’s gaze fixated at 0° (the central position). After approximately 0.6 seconds have elapsed, a saccade is executed, rapidly shifting horizontal gaze position to approximately −20.0°. This primary saccade is hypometric, meaning it undershoots the goal location. Accordingly, a secondary corrective saccade is executed, bringing the participant’s gaze closer to the marker’s centre at −21.0°. Fixation is then maintained at this position for nearly one second and is followed by a small centripetal saccade that is rapidly countered by another small saccade. At the 2.0-second mark, the return ‘ping’ is sounded, and by 2.4 seconds a return saccade is executed toward the home position. In this case, the return saccade is hypermetric, meaning it overshoots the goal location. This overshoot is abruptly corrected, and the trial terminates with the participant’s gaze steadied at the home base marker. The corresponding events for this trial, as classified by the EyeLink’s event parser, are summarized in Table 2. This trial represents a straightforward scenario with ideal oculomotor behaviour matching what is expected. Figure 2b similarly represents such a trial. Less ideal behaviour and ‘problem’ trials are depicted in the other panels of the figure. We will refer to these as appropriate as we detail the mechanics of our classification algorithm.

The principal goal of the algorithm was to identify the target landing point on each given trial. The algorithm first looks for a saccade marking the onset of a shift in gaze toward the trial’s defined target. The threshold amplitude for identifying the onset saccade starts relatively high at 8.2° and decreases iteratively to a minimum of 2.0° until a saccade is detected (or none is found). The rationale is to ignore small, involuntary fixational saccades that are better attributed to attentional orienting (Engbert & Kliegl, 2003) or gaze stabilization (Engbert & Kliegl, 2004). To take into account that the gaze position prior to the target onset saccade may not actually be in the central position, the onset saccade is defined as one that redirects gaze to a position that is 6.5° less than centre (A and B trials) or 6.5° greater than centre (C and D trials). As previously noted, determining the onset saccade of the target sweep can be complicated by unexpected gaze behaviour. For example, the participant may not actually leave home base, or the participant may make a false start (i.e., redirect their gaze to an incorrect location, sometimes even in the wrong direction). The algorithm classified false starts as either target incongruent (i.e., in the wrong direction, as illustrated in Figure 2c) or target congruent (i.e., in the correct direction but to the wrong location, as illustrated in Figure 2d). Incongruent false starts were easy to flag as such, because a saccade in the direction opposite of the goal could not mistakenly be classified as the onset of a target sweep. Congruent false starts posed greater uncertainty for the algorithm. To handle these, the algorithm flagged trials in which a second onset saccade occurred after a home sweep onset saccade had been identified. The home sweep onset saccade was determined in the same fashion as the target sweep saccade, with the exception that the parameters were adjusted to reflect that the saccade moved in the opposite direction. For trials involving target locations A and B, the onset saccade was detected when gaze was repositioned to be 6.5° greater than the A and B locations, respectively. For C and D trials, the algorithm looked for the opposite pattern.
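Our reconstruction of the iterative threshold search is sketched below. It lowers the amplitude criterion from 8.2° toward the 2.0° floor until a saccade satisfying both the amplitude and the 6.5° position criterion is found; the step size and the struct-array representation of the parser’s saccade list are our assumptions, not the thesis’s actual code.

```matlab
% Sketch: locate the target-sweep onset saccade for one trial. sacc is a
% struct array derived from the event parser, with fields ampDeg
% (amplitude, deg) and endX (horizontal end position, deg; negative =
% left of centre). targetSign is -1 for A/B trials and +1 for C/D trials.
function idx = findOnsetSaccade(sacc, targetSign)
    posCrit = 6.5;                  % displacement required past centre
    for ampThresh = 8.2:-0.2:2.0    % relax amplitude criterion (step assumed)
        for k = 1:numel(sacc)
            if sacc(k).ampDeg >= ampThresh && ...
               targetSign * sacc(k).endX >= posCrit
                idx = k;
                return
            end
        end
    end
    idx = [];                       % nothing found: flag for user review
end
```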

The algorithm was semi-supervised, meaning that if an anomaly was detected, such as the failure to find either a to-the-target or back-to-base onset saccade, or if it detected multiple gaze sweeps associated with a congruent false start (as illustrated in Figure 2d), control was passed

over to the user (here, the author), who made a judgment about the events. Following the detection of an anomaly, the user was prompted to consider either (1) resetting the bounds of the search space for the landing point, or (2) accepting the landing point already classified by the algorithm. In both cases, the user referred to the fixation event of longest duration in order to reclassify the boundaries of the saccades defining the target sweep and the home sweep. Using these criteria, the landing point was then defined as the average gaze position of the fixation of longest duration occurring after the saccade initiating the target sweep and before the saccade initiating the home sweep. These points are illustrated by Figure 3, which shows the algorithm’s output and request for user input for the trial depicted in Figure 2d.
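Stated in code, the landing-point rule reduces to selecting the longest fixation between the two onset saccades. A hedged MATLAB sketch follows, with hypothetical event fields standing in for the parser’s output:

    % Sketch of the landing-point rule (field names are assumptions).
    % 'events' is a struct array from the event parser with fields type
    % ('Fixation'/'Saccade'), durMs, and meanXDeg (mean horizontal gaze).
    % iTarget and iHome index the target-sweep and home-sweep onset saccades.
    function landingX = classifyLandingPoint(events, iTarget, iHome)
        span = events(iTarget+1 : iHome-1);            % events between the sweeps
        fix  = span(strcmp({span.type}, 'Fixation'));  % keep fixation events only
        [~, longest] = max([fix.durMs]);               % longest fixation wins
        landingX = fix(longest).meanXDeg;              % its mean gaze = landing point
    end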

EOG Data Processing

The electrooculograms for two participants (P021 and P903) were unavailable due to technical difficulties encountered during testing. For the available thirteen (digitized) electrooculograms, we performed a visual inspection of the data to determine whether the signal-to-noise ratio was of sufficient magnitude. In no case did the electrooculograms associated with vertical eye movements permit the identification or extraction of eye movement features. The horizontal recording of one participant (P023) was determined to be corrupted, but all others were retained for further processing and analysis (n = 12). The EOG signals were high-pass filtered with a cutoff of 0.15 Hz and then low-pass filtered with a cutoff of 25 Hz. This filtering was performed in order to correct for baseline drift (Barbara et al., 2020) and artifacts arising from eyeblinks, muscle and neural activity, and mechanical noise (Gunawardane et al., 2019).
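A concrete form of this band-limiting step is sketched below in MATLAB. Only the two cutoff frequencies come from the text; the Butterworth family, the fourth-order choice, the zero-phase filtfilt call, and the 250 Hz sampling rate are assumptions of the sketch.

    % Sketch of the 0.15-25 Hz band-limiting step (filter family/order and
    % sampling rate are assumed; only the cutoffs are specified in the text).
    fs     = 250;                       % assumed sampling rate, Hz
    eogRaw = randn(5000, 1);            % placeholder for one recorded EOG trace
    [bh, ah] = butter(4, 0.15/(fs/2), 'high');  % high-pass: baseline drift
    [bl, al] = butter(4, 25/(fs/2), 'low');     % low-pass: high-frequency noise
    eogFilt = filtfilt(bh, ah, eogRaw);         % zero-phase, so no temporal shift
    eogFilt = filtfilt(bl, al, eogFilt);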

Even with such filtering, some level of noise, often substantial, inevitably remains in the signal, which is why developing improved EOG filtering techniques remains an active area of investigation (Gunawardane et al., In Preparation).

The position of eye gaze estimated by EOG should be understood, in a technical sense, as an eye-in-head reference system, meaning that the head is the point of reference from which the measurement of amplitude is estimated. However, when the head is kept stationary by using chin and/or head support, and when calibration is performed with respect to an external reference point, the frame of reference becomes effectively world-centred. We derived the EOG calibration factor for each participant by regressing the known visual angle of each calibration point (relative to the home position) onto the recorded EOG signal in millivolts (mV). For calibration trials, the change in the EOG’s standing potential associated with the angular displacement of the eye was determined by the same means used for the test trials after unit conversion. The first step of our approach followed the same logic as that described for the EyeLink signal, and involved defining the bounds of the temporal window interposed between the target and home gaze sweeps. This was done using MATLAB’s findchangepts function, which is available from the Signal Processing

Toolbox®. The algorithm implemented by this function is based on the statistical principles described by Killick et al. (2012). We then estimated the landing point amplitude by taking the average of the signal’s local maxima (i.e., peaks) within the sampling window bounded by the target and base gaze sweeps. Refer to Figure 4, which graphically demonstrates this procedure for a representative participant (P024). The top row of panels (Figures 4a–4d) represents trials from condition three (Darkness), while the bottom row of panels (Figures 4e–4h) represents trials from condition four (Eyes-Closed). From left to right, each panel corresponds to an A, B, C, and D trial, respectively. In each panel the raw EOG signal (light blue) is overlain by the filtered EOG signal (dark blue). The two vertical dashed lines represent the target and base gaze sweeps. The red rings mark the peaks of the signal within the fixation window. The thin red line represents the mean of the peaks, which is what defined the horizontal position of gaze at the target landing point.
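The per-trial estimate can be sketched as follows. The findchangepts call and its arguments are the documented Signal Processing Toolbox interface; the surrounding variable names are placeholders, and for leftward (A/B) trials one would search for peaks in the negated signal.

    % Sketch of the landing-amplitude estimate for one rightward (C/D) trial.
    cpts = findchangepts(eogFilt, 'MaxNumChanges', 2, 'Statistic', 'mean');
    win  = eogFilt(cpts(1):cpts(2));    % window between target and home sweeps
    pks  = findpeaks(win);              % local maxima within the fixation window
    landingAmp = mean(pks);             % mean of the peaks = landing amplitude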

Because overall the EOG data was of much poorer quality than that provided by the

EyeLink—and expectedly so—a less conservative approach to outlier classification and removal was taken. Indeed, there were many trials for which the sign (left/A-B vs. right/C-D) of the EOG-determined landing point disagreed with that indicated by the EyeLink data. We did not consider these values to be representative of the actual state of affairs, and so discarded them (n = 234).

This comprised 17.29% of data points for condition one (Markers), 9.79% of data points for condition two (No Markers), 11.46% of data points for condition three (Darkness), and 11.67% of data points for condition four (Eyes-Closed).
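In code, this screen is a single vectorized comparison; the variable names below are assumptions:

    % Sketch of the sign-agreement screen (variable names assumed).
    % eogAmp and elinkAmp hold one landing-point amplitude per matched trial.
    keep     = sign(eogAmp) == sign(elinkAmp);  % flag EOG/EyeLink sign conflicts
    eogAmp   = eogAmp(keep);                    % discard disagreeing trials
    elinkAmp = elinkAmp(keep);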

Data Analysis

EOG Correction Models. Analyses for deriving and statistically quantifying the quality of the EOG correction models were carried out in MATLAB 2018b using the fitlm function. We first found the set intersection of the landing point amplitude data available for both the EyeLink and EOG, and then performed robust regression analyses using Huber’s M-estimation (1992) and iteratively reweighted least squares (Holland & Welsch, 1977). The tuning constant for M-estimation was set to 1.345 (for additional details, see Holland & Welsch, 1977). There were two steps to our approach. We first demonstrate the validity of the statistical approach by examining the corrected EOG signal in the Darkness condition, where it can be directly compared against EyeLink. We then proceed to obtain the corrected EOG landing point amplitudes for the Eyes-Closed condition. In both cases, a correction function for each participant (n = 12) was obtained by regressing the EyeLink-based landing point amplitudes against the corresponding data points of the EOG signal.
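A per-participant correction model of this kind can be sketched with fitlm’s robust options, which accept the Huber weight function and a custom tuning constant; the variable names are placeholders:

    % Sketch of one participant's robust correction model (names assumed).
    robust = struct('RobustWgtFun', 'huber', 'Tune', 1.345);
    mdl    = fitlm(eogAmp, elinkAmp, 'RobustOpts', robust);  % EyeLink ~ EOG
    disp(mdl.Coefficients)    % correction factor (slope) and intercept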

Comparing EyeLink and the Corrected and Uncorrected EOG Outcomes. Only participants for whom we were able to obtain statistically significant correction models were included within these analyses (n = 11). For the Markers, No Markers, and Darkness conditions, these comparisons reflect the set intersection of the available/retained data for the EyeLink and EOG datasets (85.1% of trials). The model-corrected estimates of the landing point amplitudes were directly obtained from the EOG Eyes-Closed data, so no further processing to ensure that the data points matched was required. Inferential statistics were carried out in the R Environment for Statistical Computing (Version 3.6.3, R Core Team, 2020) using repeated measures analysis of variance (rmANOVA) as implemented by the afex package (Version 0.27-2, Singmann et al.,

2020). The pairwise t-tests and associated 95% confidence intervals were computed using the base R t.test function. The very nature of our experiment gave rise to unequal variances among the groups, so the reader should assume that sphericity was violated. If the degrees of freedom associated with an F-statistic are reported as non-integer values, this means that Mauchly’s test (1940) was performed and yielded a significant result (i.e., the assumption of sphericity was violated), and the degrees of freedom and corresponding p-values were adjusted using the Greenhouse–Geisser correction (Greenhouse & Geisser, 1959). If the degrees of freedom are reported as integer values, this reflects the fact that the analysis was a single df comparison, for which a test of sphericity is unnecessary. In all cases, we report the estimated effect size as omega squared (ω²) with 90% confidence intervals, which were computed using the R-based MOTE package (Buchanan et al., 2019). For rmANOVA, we report 90% instead of the standard 95% confidence intervals in consideration of the fact that the F-test is always a one-sided test (Lakens, 2013).
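For orientation, the Greenhouse–Geisser epsilon used for these adjustments has a standard closed form in terms of the eigenvalues λᵢ of the covariance matrix of the k − 1 orthonormalized contrasts; the expression below is supplied for exposition and is not quoted from the thesis:

    ε̂ = ( Σᵢ λᵢ )² / [ (k − 1) Σᵢ λᵢ² ]

The estimate ranges from 1/(k − 1) (maximal violation) to 1 (sphericity intact), and both the numerator and denominator degrees of freedom of the F-test are multiplied by it.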

First, we conducted a 2 (Tracker: EOG and EyeLink) × 3 (Condition: 1–3) × 4 (Target: A–D) rmANOVA aggregated across trials with the landing point amplitude as the dependent variable.

We then performed this analysis for the Eyes-Closed condition, where the design was refined to a 2 (Model vs. EOG) × 4 (Target: A–D). We then repeated these same analyses, but with absolute positional error (i.e., |target location − landing point|) as the outcome variable. Where stated, significant interactions or main effects were followed up with single df contrasts.

Results

Corrected EOG Signal

Overview. We begin by discussing the similarities and dissimilarities of the EyeLink and

EOG data. The reader is referred to Figure 5, a grid of box-and-whisker plots that compares the

EyeLink- and EOG-based measures of the mean target landing positions observed in the

Markers, No Markers, and Darkness conditions, as well as the original and corrected EOG measures characterizing the Eyes-Closed condition. Consider for the moment only Figures 5a–

5c; we will return to the Eyes-Closed condition (Figure 5d) shortly. Extreme observations, marked by a dot, are values more than 1.5 times the interquartile range from the top or bottom of the box. One important characteristic of the data, which is common to both the EyeLink and

EOG measures, is the positive skew present in the No Markers (Figure 5b) and Darkness (Figure

5c) conditions. This positive skew means that there was a strong tendency for participants to overshoot the goal location once visual feedback was removed. Moreover, there are far more extreme observations within the EOG data than within the EyeLink data. Contrasting the

No Marker and Darkness conditions (Figures 5b and 5c) against the Markers condition (Figure

5a), we see that while this skew characterizes the EOG measurements, it is much reduced in the EyeLink data. This observation highlights the difference in measurement precision between the two eye tracking devices: PCR optical eye tracking (EyeLink) provides much greater precision than EOG (Cyton Board). In contrast, the overall difference in accuracy is not as marked. Even so, it remains evident in the fine details of the data, but is not well captured by the boxplot, which conveys only the global features of the data. Next, we describe how we harnessed the simultaneous measurement of eye position with infrared-based PCR eye tracking and EOG in order to develop a model—or more precisely, models—intended to provide a better estimate of gaze position during eyelid closure.

Corrected Darkness EOG. To first validate our method of estimating a corrected EOG signal based on its paired use with an infrared-based PCR eye tracking system, we tested whether the regression models relating EOG and EyeLink in the No Markers condition could be applied to the Darkness condition. We did this by asking whether the corrected EOG signal was more similar to the EyeLink data than the original, uncorrected EOG signal. A correction model for each participant was obtained by regressing the EyeLink-based landing point amplitudes in the No Markers condition against the corresponding EOG data points. The corrected EOG data were then obtained by inputting each participant’s uncorrected Darkness EOG data into their respective models. The results of the regression analyses and associated EOG correction models are presented in Table 4. To correct for the familywise error rate, the p-values associated with the t-statistics were adjusted using the Bonferroni procedure. As shown in the table, the variability among individuals in the estimated beta coefficients is quite high, with no clear pattern emerging. In fact, this variability indicates that the EOG signal can be either negatively or positively correlated with the EyeLink signal, which underscores why we computed a separate correction factor for each participant.
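Operationally, this validation step is a fit-then-predict pipeline. The sketch below uses placeholder data in place of one participant’s trial-matched landing-point amplitudes; all variable names and the generated values are illustrative only.

    % Sketch of the validation step with placeholder data (one participant).
    eogNM   = randn(40,1) * 15;               % stand-in EOG amplitudes (deg)
    elinkNM = 0.9*eogNM + randn(40,1);        % stand-in EyeLink amplitudes (deg)
    mdl     = fitlm(eogNM, elinkNM, 'RobustOpts', 'huber');  % No Markers fit
    eogDark     = randn(40,1) * 15;           % stand-in Darkness EOG amplitudes
    eogDarkCorr = predict(mdl, eogDark);      % corrected Darkness EOG estimates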

Only participants for whom the Darkness correction factor (β0) had a significant fit were included in the subsequent analyses. We first assessed whether the model-based correction for the

Darkness EOG signal differed from the corresponding uncorrected EOG signal. To do this, we conducted a set of paired-sample t-tests, which compared the corrected and uncorrected EOG-based measures of landing site amplitudes at each target location. The reported 95% confidence intervals correspond to the mean of differences. For target location A, the correction shifted the

mean upward by 12.18°, 95% CI [1.32, 23.04], from −41.65° (SD = 102.41) to −29.47° (SD =

48.32), t(103) = 2.22, p = .029, d = .22. For target location B, the correction increased the mean by 9.31°, 95% CI [−3.74, 22.36], from −33.55° (SD = 129.06) to −24.22° (SD =

63.36), t(101) = 1.42, p = .160, d = .14. For target location C, the mean was shifted by −7.96°,

95% CI [−11.87, −4.04], from 27.38° (SD = 33.18) to 19.33° (SD = 17.52), t(99) = 4.03, p <

.001, d = .40. Finally, for target location D, the correction shifted the mean by −5.31°, 95% CI [−8.10, −2.52], from 31.77° (SD = 24.00) to 26.39° (SD = 18.00), t(105) = 3.78, p <

.001, d = .37. Taken together, the model-based correction resulted in a significant mean difference in landing position for target locations A, C, and D, but not for location B.

The test of validity of the calibration procedure depended on whether the corrected EOG landing point measures in the Darkness condition were now more similar to those of EyeLink. If there was a smaller difference between the corrected EOG signal and EyeLink compared to the uncorrected EOG signal and EyeLink, then we could have greater confidence in applying this procedure to estimate the corrected EOG measures in the Eyes-Closed condition. To perform this comparison, we followed the same approach as above by conducting a set of paired-sample t-tests for each of the target locations.
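Each of these comparisons is a standard paired-sample t-test; in MATLAB, for one target location, and continuing the placeholder variables from the sketch above (the stand-in EyeLink vector is likewise hypothetical):

    % Sketch of one paired comparison, continuing the placeholder variables.
    elinkDark = eogDarkCorr + randn(size(eogDarkCorr));  % stand-in EyeLink data
    [~, p, ci, stats] = ttest(eogDarkCorr, elinkDark);   % paired-sample t-test
    fprintf('t(%d) = %.2f, p = %.3f\n', stats.df, stats.tstat, p);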

With respect to target location A, the mean of differences for the Darkness landing point amplitudes as registered by EOG corrected (M = −29.47, SD = 48.32) compared to EyeLink (M =

−26.96, SD = 11.94) was 2.39°, t(103) = 0.51, p = .613, d = .05, 95% CI [−6.95, 11.73]. For target location B, the mean of the differences at 8.75°, 95% CI [−3.57, 21.07], between EOG corrected (M = −24.22, SD = 63.36) and EyeLink (M = −15.39, SD = 7.35) was also statistically indistinguishable, t(101) = 1.41, p = .162, d = .14. For target location C, contrasting EOG corrected (M = 19.33, SD = 48.32) against EyeLink (M = 20.97, SD = 8.93) revealed a negligible

effect of measurement type on landing position, t(99) = 0.91, p = .365, d = .09. The mean of differences was −1.54°, 95% CI [−4.92, 1.83]. Finally, for target location D, the comparison of

EOG corrected (M = 26.39, SD = 16.23) against EyeLink (M = 28.44, SD = 7.51) also yielded a negligible effect of measurement type on landing position, t(105) = 1.16, p = .247, d = .11. The mean of the differences was −1.98°, 95% CI [−5.34, 1.39]. Although for target locations A, B, and D there had already existed a non-significant difference between the EyeLink and EOG measures, the EOG corrected measures were adjusted so as to be even more similar to EyeLink.

As for target location C, a significant mean difference between EOG uncorrected (M = −33.55, SD = 129.06) and EyeLink, t(99) = 2.08, p = .039, d = .21, was rendered statistically indistinguishable following correction. Additionally, in all cases, the correction reduced the variance, bringing it closer to that of EyeLink. These results, which are summarized by Figure 6, support our methodology as a credible means to estimate a corrected

EOG signal in the Eyes-Closed condition, one that is both more accurate and more precise.

Corrected Eyes-Closed EOG. A summary of the regression analyses and EOG correction models is presented in Table 4. Just as in the validation step, the p-values associated with the t-statistics were adjusted using the Bonferroni procedure. The Eyes-Closed condition landing point amplitudes obtained via EOG were entered into their respective models to estimate the corrected values. Consider now the Eyes-Closed data presented in Figure 5d. The number of extreme observations in the original EOG data exceeds that of the EOG corrected data. The model correction has “reined in” the mean landing position for each target location, as revealed by a set of paired t-tests. For target location A, the mean was adjusted from −26.73° (SD =

15.24) to −21.05° (SD = 13.10), t(93) = 3.27, p = .002, d = .34. This corresponds to an estimated mean difference (EMD) of −5.67, 95% CI [−9.11, −2.23]. For target location B, the mean was adjusted

from −21.53° (SD = 17.34) to −16.13° (SD = 12.03), t(96) = 3.49, p = .001, d = .35, EMD =

−5.40, 95% CI [−8.46, −2.32]; for target location C, from 27.58° (SD = 22.93) to 21.07° (SD =

12.02), t(96) = 3.74, p < .001, d = .38, EMD = 6.51, 95% CI [3.06, 9.97]; and for target location

D, from 36.79° (SD = 29.11) to 28.70° (SD = 18.00), t(107) = 3.73, p < .001, d = .36, EMD =

8.09, 95% CI [3.79, 12.39].

Accuracy (Error) of Landing Points Across Conditions

For the participants for whom the Eyes-Closed model correction factor (β0) had a significant fit, we performed a 2 (Tracker: EOG and EyeLink) × 3 (Condition: Markers, No Markers, Darkness) × 4 (Target Location: A–D) rmANOVA with the dependent variable as the absolute (horizontal) positional error of the landing position relative to the target. The omnibus test of the rmANOVA yielded a significant main effect of Condition, F(1.22, 12.19) = 6.76, MSE

= 270.82, p = .019, ω² = 0.39, 90% CI [0.04, 0.68], and a significant main effect of Tracker,

F(1,10) = 9.71, MSE = 149.85, p = .011, ω² = .44, 90% CI [0.06, 0.74]. We followed up the main effect of Tracker by running single df comparisons of EyeLink and EOG separately for conditions one through three. In the Markers condition, the mean EyeLink measure of positional error (M = 0.72, SD = 0.56) was significantly different from that measured by EOG (M = 5.84,

SD = 8.32), F(1,10) = 14.17, MSE = 10.18, p = .004, ω² = .54, 90% CI [0.13, 0.80]. In the No

Markers condition, the difference between EyeLink (M = 4.22, SD = 4.47) and EOG (M = 8.16,

SD = 8.84) was also significant, F(1,10) = 6.54, MSE = 13.05, p = .028, ω² = .33, 90% CI [0.00,

0.67]. In the Darkness condition there was no significant difference in positional error as measured by EyeLink (M = 7.85, SD = 7.66) compared to EOG (M = 12.88, SD = 15.77),

F(1,10) = 3.08, MSE = 45.08, p = .110, ω² = .16, 90% CI [0.00, 0.52]. We did not follow up the main effect of Condition here, as we reserve a detailed analysis of this effect for another

statistical model (described next) that excludes Tracker as a factor, and where we treat the EOG corrected landing errors as the EyeLink equivalent in the Eyes-Closed condition.

We next eliminated Tracker as a factor and treated the model-corrected EOG measures as an EyeLink analog for the Eyes-Closed condition. Thus, our statistical model reflected a 4 (Condition) × 4 (Target Location) design. We again employed rmANOVA with the absolute positional error of the landing position as our dependent variable. The omnibus test yielded a significant main effect of Condition on error, F(1.42, 14.24) = 9.92, p = .004, MSE = 138.83,

ω² = .54, 90% CI [0.17, 0.75]. We followed this up with a set of planned comparisons contrasting the condition pairings of Markers and No Markers, No Markers and Darkness, and

Darkness and Eyes-Closed. We did not consider target-condition interaction effects given the absence of any interaction in the omnibus test. These analyses are graphically summarized by the set of hybrid box-and-scatter plots in Figure 7. The ordinate represents the mean absolute horizontal positional error of the landing position in degrees of visual angle, while the abscissa identifies the condition. The group-level mean for each condition’s absolute horizontal landing error is marked by the diamond. The associated error bars represent the within-subject standard errors. Individual mean landing errors are represented by data points scattered to the right of each condition’s respective box.
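For reference, within-subject standard errors of this kind follow the Cousineau normalization with Morey’s correction. In the usual notation (given here for exposition, with i indexing the n participants and j the k conditions), each score is recentred on its participant’s mean before the standard error is rescaled:

    y′ᵢⱼ = yᵢⱼ − ȳᵢ. + ȳ..                        (participant-mean recentring)
    SEⱼ  = √( k / (k − 1) ) · s(y′.ⱼ) / √n        (Morey’s rescaling)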

Landing point error during the acquisition of remembered marker locations in the No

Markers condition (M = 4.22, SD = 4.47) was—unsurprisingly—significantly greater than that observed when visual Markers (M = 0.72, SD = 0.56) were present, F(1, 10) = 6.97, MSE =

38.60, p = .025, ω² = .35, 90% CI [0.01, 0.68]. Further, positional error was significantly less in the No Markers condition compared to Darkness (M = 7.85, SD = 7.66), F(1,10) = 14.81, MSE =

19.61, p = .003, ω² = .56, 90% CI [0.14, 0.80]. Finally, contrary to our expectation, performance

during Eyes-Closed (M = 9.31, SD = 6.51) did not significantly differ from that observed in

Darkness, F(1,10) = 0.34, MSE = 137.86, p = .573, ω² = 0.00, 90% CI [0.00, 1.00]. Taken together, the evidence suggests that eyelid closure did not further diminish performance in target acquisition above and beyond the absence of visuospatial feedback.

When focusing our analysis of landing point error in the Markers, No Markers, and

Darkness conditions on the full set of available EyeLink data (n = 15), we observed an

appreciably higher omnibus effect of Condition, ω² = .65, 90% CI [0.36, 0.81], compared to that of the more restricted analysis described above, n = 11, ω² = .48, 90% CI [0.08, 0.76]. This prompted us to consider whether our experiment was sufficiently powered to detect an effect in the Eyes-Closed condition. In light of previous reports suggesting greater saccadic error during eyelid closure compared to darkness (e.g., see Shaikh et al., 2010, p. 1669, Figure 2a), we used

G*Power (Faul et al., 2009) to compute the post-hoc power of the overall experiment while assuming a medium effect size of f = .25 (Cohen, 1988). With a sample size of n = 11, and accounting for the fact that there were four repeated measures, the power of the experiment was shown to be very small, with 1 − β = 0.08. We used an epsilon (i.e., sphericity correction) value of

0.65 in line with the results of our omnibus test. If we consider just the Darkness × Eyes-Closed contrast—still assuming a medium effect size—the estimated post-hoc power is only marginally increased to 0.12.

Discussion

Aim One: Calibrating EOG Measures with Simultaneous PCR Recordings

The primary aim of this thesis was to develop a methodology for tracking closed-eye movements using EOG—one that was more precise and accurate than relying on the raw signal. We acquired the EOG signal using a low-cost, lightweight, consumer-grade biosensing system, namely, OpenBCI’s Cyton Biosensing Board. Electrodes were positioned in order to record both horizontal and vertical eye movements. We derived a correction factor for the

EOG signal by comparing it with an industry standard infrared-based

PCR eye tracking device—namely, the EyeLink 1000® (SR Research Ltd., 2009).

The first step in our process involved simultaneously measuring the eye movements of participants as they attempted to acquire remembered target locations in the No Markers and

Darkness conditions (described in the Procedure). Importantly, we synchronized the tracking devices using Lab Streaming Layer. After completing data collection, we visually inspected the electrooculograms to assess their signal-to-noise ratio and determine which of them could be used for further processing and analysis. In doing so, we observed that the vertical signal in each electrooculogram was too corrupted to permit any meaningful identification or extraction of eye movement features. The corruption consisted of a substantial degree of noise and a marked cross-channel interdependency in which the vertical signal was effectively ‘drowned out’ by the horizontal signal. It is likely that the signal amplifier integrated within the Cyton Biosensing

Board is not sufficiently powerful to allow the collection of quality data from the vertical component of the EOG.

Because the vertical electrooculograms we had intended to collect were ultimately unavailable, we could not compute the two-dimensional amplitude of the eye

movements for the purpose of comparing the EOG signal to that of the EyeLink. Additionally, we were unable to account for the vertical EOG component in our models deriving a correction factor. It was important that we also consider the vertical component of the saccadic sweeps, because even if a goal location geometrically requires only a unidimensional translation of gaze, saccadic sweeps rarely, if ever, perfectly follow a trajectory along a single spatial dimension

(Walker et al., 2006). Similarly, if there is any positional error, it will comprise both vertical and horizontal components (Abegg et al., 2010). This is particularly true of memory-guided saccades.

Indeed, Figure 8 shows a substantial amount of vertical positional error, and this error increased as visuospatial feedback decreased in the transition from (1) the Markers to the No Markers condition, and (2) from the No Markers to the Darkness condition. Although this prevented a complete description of the oculomotor behaviour associated with the task in each condition, we expect that the general principles we’ve described for the EOG correction procedure would hold once these problems are resolved.

After screening and processing the data, we first performed a validation step by computing participant-specific correction factors for the Darkness condition. We obtained these correction factors by regressing the EyeLink-measured target landing positions of the No

Markers condition onto the corresponding No Markers EOG measures. In this manner, we were able to compare the corrected Darkness EOG measures with the known EyeLink signal. If the corrected Darkness EOG measures yielded means that were more comparable to those of

EyeLink than the uncorrected EOG measures, it would support the use of the same statistical procedure to estimate a corrected EOG signal in the Eyes-Closed condition.

After applying the correction to the Darkness EOG signal, the mean landing position amplitudes were significantly shifted for three of the four target locations. The next step was to

determine if, and to what extent, the corrected EOG signal yielded landing position estimates that were now more similar to EyeLink. Indeed, the correction yielded means that were more similar to EyeLink for each of the target locations, therefore establishing the procedure as a valid means to estimate a corrected EOG signal in the Eyes-Closed condition. We next carried out this same process in order to estimate the corrected EOG signal for the Eyes-Closed condition. However, for this step, the correction models were obtained by regressing the (Darkness) EyeLink landing points onto the corresponding uncorrected Darkness EOG landing points. We then entered the uncorrected Eyes-Closed data into these correction models, thereby yielding a corrected estimate of the landing point data for the Eyes-Closed condition. For each target location, the corrected data was of lower variability and the means had been “reined in”. We therefore used this corrected form of the Eyes-Closed EOG data to quantify the effect of eyelid closure on saccadic accuracy.

There are some limitations that merit consideration. First, our participants were not fully dark adapted, nor did we perform a new calibration step before testing in the Darkness condition. These are important considerations because the corneo-retinal potential is sensitive to changes in illumination (Arden et al., 1962; Hetter, 1970). Decrements in illumination cause a gradual decrease in the corneo-retinal potential over a 15-minute period (Constable et al., 2017).

Because we only calibrated the EOG signal once during the training protocol (as described in the

Method section), a shift in the amplitude of the signal’s waveform could have led us to underestimate the amplitude of the target sweep landing positions.

A second limitation to consider is that we used linear regression when the raw data were not normally distributed. The positive skew of landing end-points in the Markers, No Markers, and Darkness conditions reflected the hypermetric nature of the human saccade-generating

system when the visual representation of a target location is absent. Violating the assumption of normality in linear regression analyses can lead to imprecise estimates of the beta coefficients

(the slopes) and the p-values, leading to a liberal bias in interpretation. Imprecise estimates of the slopes could have liberally biased our correction factors. Further, because we used the test of significance (i.e., the p-value) to determine which participants’ data we would use to characterize the overall estimated EOG corrected landing points, imprecise p-values could have biased which data were used to carry this out and demonstrate the utility of our proposed methodology. All that said, we used regression analyses that are robust to violations of these kinds, as described by Huber (1992). What’s more, even ordinary least squares regression has been shown to be robust to violations of normality, including high deviations in both skew and kurtosis (Cain et al., 2017).

A third limitation concerns the procedure by which we computed the initial EOG calibration curves used to transform the signal from millivolts to degrees of visual angle.

Considering the tendency for participants to overshoot the remembered target location in the No

Markers, Darkness, and Eyes-Closed conditions, having included additional calibration points extending beyond the range of our targets would have allowed for a more precise and accurate initial estimate of the calibration factor, especially for eye movements with amplitudes of 21°–

30°. Also, if instead of assuming that the eye movements were accurate during the EOG calibration protocol, we had leveraged the EyeLink data in order to calibrate the EOG signal, we would have improved our confidence in the data.

The reason we did not calibrate the EOG in this manner was to accommodate another goal of this research project, which involved developing a novel EOG filtering technique described elsewhere (Gunawardane et al., In Preparation). That said, it could still be done, but it

would necessitate that the data be preprocessed from the beginning and analyzed again, a time-consuming exercise. Also, if we had done this, we would still need to perform the EOG corrections that are based on the EyeLink data, which is the central focus of this work.

Aim Two: Understanding the Role of Visual Feedback for Eye Movements

The second aim of this thesis was to investigate how the availability of visuospatial feedback influenced oculomotor performance, and in doing so to build on previous work investigating this matter (e.g., Allik et al., 1981; Antrobus et al., 1964; Becker & Fuchs, 1969;

Becker & Klein, 1973; Hess et al., 1985; Skavenski, 1971). Specifically, we compared the accuracy of eye movements made to target locations under four conditions: (1) Markers: a fully illuminated room with target markers present, (2) No Markers: a fully illuminated room with target markers absent, (3) Darkness: complete darkness, and (4) Eyes-Closed: complete darkness and with the eyelids closed. Conditions 2–4 (No Markers, Darkness, and Eyes-Closed) each required that participants execute eye movements to remembered target locations and, similarly, return their gaze to a remembered home position.

Of principal interest was the precision and accuracy with which participants could reproduce gaze sweeps to remembered target locations during eyelid closure. The principal analyses leveraged the corrected EOG signal (details summarized in the section above) for the Eyes-Closed condition and treated it essentially as a PCR tracker (EyeLink) analog.

A visual representation of the results is provided by Figure 7. It shows that, in line with previous reports (Israël, 1992; Israël & Berthoz, 1994; Shaikh et al., 2010), landing point error increased as participants progressed from the Markers to the No Markers condition, and from the No Markers to the Darkness condition. These results were also consistent with previous reports that memory-guided saccade

end-points tend to overshoot the goal location, both in darkness and with environmental visual cues available (Becker & Klein, 1973; Israël & Berthoz, 1994).

The effect size on horizontal landing point error associated with the transition from the

No Markers to Darkness condition (ω² = 0.56) was more pronounced than the transition from the

Markers to No Markers condition (ω² = 0.35). However, at least part of this discrepancy in effect sizes can be attributed to a significant linear decay in participants’ memory of the target locations as trials progressed throughout the No Markers condition. In other words, a difference in performance would have already been apparent within the No Markers condition alone if one were to compare early and late trials. This was revealed by an analysis that was exclusive to the full set of the EyeLink data. Briefly, we ran a 3 (Condition) × 4 (Target Location) × 10 (Trial Number) rmANOVA, which revealed a significant Condition × Trial interaction with respect to landing point error. In following this up, we found that this effect was driven by the progression of trials during the No Markers condition. Subsequent visualization of the data suggested the presence of a linear trend. This was confirmed by a linear trend analysis showing a significant Condition × Trial (linear) interaction, F(2, 28) = 6.52, MSE = 63.17, p = .005, and a significant interaction contrast that isolated the trend to the No Markers condition, t(14) = 3.95, p

= .002, d = 1.02. By contrast, error remained stable throughout the Darkness condition, with no apparent additional effect of memory decay. However, it should also be noted that the full model was rank deficient, and so sphericity tests and corrections were not available.

Regarding the results of principal interest—saccadic accuracy during eyelid closure—we were surprised to find that horizontal error in the Darkness and Eyes-Closed conditions was statistically indistinguishable. This is at odds with an earlier, albeit descriptive, report by Allik et al. (1981). It also appears inconsistent with the data presented by Shaikh et al. (2010), though it is

(1981). It also appears inconsistent with the data presented by Shaikh et al. (2010), though it is

not clear from their report how ‘irregular trajectory’ was operationalized, and how this maps onto endpoint error or accuracy. One possible interpretation is that saccadic error has a more prominent vertical component when the eyes are closed, which, as noted in the previous section, we were unable to account for here. A simpler conclusion we might draw from the data is that gaze landing accuracy when the eyes are closed is, indeed, not significantly different than when the eyes are open but in darkness. The most likely explanation, however, is that the experiment was inadequately powered to detect the effect.

There are a few limitations worth noting that should be addressed by future studies. Most important is that we did not account for errors that may have occurred when participants returned their gaze to the home position. If, for example, a participant had returned their gaze to the home position but had actually landed three degrees to the right or left, and assuming they maintained steady fixation at this position, they may well have subsequently executed a gaze sweep on the next trial that, in relative terms, had either greater or less error than what we measured. In other words, we made the simplifying assumption that in the case of the No

Markers, Darkness, and Eyes-Closed conditions, participants were (1) relatively accurate in their ‘back-to-base’ gaze sweeps, and (2) once having returned to the home position, participants were good at maintaining fixation (i.e., no significant fixational drift). Nevertheless, there is good reason to believe that these issues, though likely present to some extent, were of little overall significance. Regarding the return of gaze, it has been shown that centripetal saccades toward the primary position (i.e., binocular central fixation) are much more accurate than centrifugal saccades that shift gaze away from central fixation (Kapoula & Robinson, 1986). This is due to a recentering bias, a natural tendency for the eyes to return to the primary position (Camors et al., 2016; Donaldson, 2000). Regarding fixational drift, the principle of the recentering or

straight-ahead bias similarly applies. Skavenski and Steinman (1970; 1971) reported that participants showed good fixational stability while in complete darkness. More recently, Hüfner et al. (2009) reported that during eye closure, participants’ gaze did not deviate from the home position by more than ±1.0°. Moreover, it has also been shown that the programming of voluntary saccades in complete darkness can account for positional offsets caused by fixational drift (Becker & Klein, 1973; Matin et al., 1970), which would obviate any concern regarding drift as a confounding variable in the first place. Nevertheless, evidence to the contrary has also been reported (Poletti & Rucci, 2011), and there may exist considerable inter-individual variability in the degree to which observers can maintain stable fixation in the absence of a visual target (Becker & Klein, 1973; Hess et al., 1985).

Taken together, it would be prudent for future studies of eye movements in the dark or with the eyelids closed to take into account (1) positional errors in return eye movements to the home position and (2) fixational drift, which may have occurred during inter-trial intervals or target fixation. This could be accomplished by modifying the experimental design so that the

Markers and No Markers trials were intermixed, or by counterbalancing the order of the No

Markers, Darkness, and Eyes-Closed conditions, following the initial Markers condition. Either of these designs would help to disambiguate the effects of visuospatial feedback decrements from memory decay on saccadic accuracy (Gnadt et al., 1991; Wimmer et al., 2014). An even more efficient design might encompass an initial intermixed Markers and No Markers condition, but one that systematically varied the number of trials before a transition from visually- to memory-guided target acquisition. This would measure the longer-term effects of target memory on oculomotor performance (in contrast to the traditionally administered, 1–5 second delay, memory-guided saccade paradigm, e.g., Israël, 1992). A subsequent counterbalanced delivery of

the Eyes-Closed and Darkness conditions would further control for carry-over effects, such as fatigue and motivation (Bahill & Stark, 1975; Becker & Fuchs, 1969; Keppel & Wickens, 2004).

However, it will also be important that illumination-induced changes to the corneo-retinal potential be accounted for. More specifically, the transition from normal illumination to darkness will necessitate a recalibration of the EOG signal once 15 minutes have elapsed and the signal has stabilized (Heide et al., 1999). Finally, and perhaps most importantly, future investigations should better quantify how the error or gain of eye movements during lid closure can vary in both the vertical and horizontal dimensions.

Aim Three: An Algorithm for Determining Target Acquisition

The third aim of this investigation was to develop an algorithm that could classify when a participant believed they had acquired a target. The algorithm also had the ability to detect when participants made ‘false starts,’ defined as irregular anticipatory eye movements that were either congruent or incongruent with the target location. One of the key features of the algorithm is that when it was uncertain about a particular outcome, it passed control over to the user (here, the author), who then made a judgement call.

In hindsight, we realized that we could have simplified our measurements by having the participant make a keypress when they believed they were fixating the target. Nevertheless, we are optimistic that the eye movement features classified by the algorithm in the present study

(e.g., target sweep, target acquisition, home sweep) can be scaled up to include more complicated ‘gaze gestures.’ Gaze gesture is a term of emerging importance in research on human factors and usability, where it is relevant to EOG-based human-machine interfaces such as wheelchair guidance systems (Barea Navarro et al., 2018) and eyes-closed password entry for smartphones (Findling et al., 2019). The idea of a gaze gesture is quite simple. Briefly, a

preprogrammed sequence of eye movements—say, ‘left-left-right’—can be recognized and classified by an algorithm (bundled, of course, with some type of eye tracker) to instantiate some action outcome (e.g., wheelchair steering). Gaze gestures have successfully been employed in the development of smartphone assistive technologies for persons with motor disabilities (e.g.,

Zhang et al., 2017), a gaze-based camera control system for use by surgeons in the operating room (Fujii et al., 2018), and a drawing application (Heikkilä, 2013), to name a few examples.

This gaze gesture-like functionality of the algorithm means it can detect event sequences, rather than individual fixations and saccades. Expanding on this algorithm’s functionality could assist in understanding more complex gaze behaviour. For example, the algorithm could be adapted to classify when an individual has scanned a certain number of items. In closing, we are optimistic that this tool can be extended to help advance the study of complex visual behaviour as it unfolds outside the laboratory, both when study participants are exploring the natural world around them, and when they are exploring their inner world, as in dreaming.
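To make the event-sequence idea concrete, a toy MATLAB sketch of gaze-gesture matching over classified saccade directions follows; the direction encoding, the example strings, and the action are all hypothetical:

    % Toy sketch of gaze-gesture recognition (encoding is hypothetical).
    dirs    = 'LRLLRLLRR';             % per-saccade directions from a classifier
    gesture = 'LLR';                   % preprogrammed 'left-left-right' gesture
    hits    = strfind(dirs, gesture);  % start indices of any matches
    if ~isempty(hits)
        disp('Gesture recognized: trigger the mapped action.')
    end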

Tables

Table 1. Condition Parameters

Condition      Eyelids Open   Targets Visible   Ambient Light    Measures
Markers        Yes            Yes               Yes (~560 Lux)   EyeLink + EOG
No Markers     Yes            No                Yes (~560 Lux)   EyeLink + EOG
Darkness       Yes            No                No (~1 Lux)      EyeLink + EOG
Eyes-Closed    No             No                No (~1 Lux)      EOG

Note. EOG, electrooculography. EyeLink, infrared pupil-corneal reflection eye tracking device.

Table 2. Example of Event Data Provided by the EyeLink’s Event Parser

                                       Gaze Position (Pixels)
Index   Event Type   Duration (ms)   StartX     EndX      StartY    EndY
1       Fixation     600             958.90     959.50    617.40    602.50
2       Saccade      112             959.20     638.20    602.50    599.29
3       Fixation     172             643.09     645.70    598.50    597.09
4       Saccade      16              643.70     623.50    597.40    595.09
5       Fixation     940             625.50     628.00    594.59    595.09
6       Saccade      16              629.59     648.40    595.20    595.29
7       Fixation     224             646.79     645.79    595.00    588.40
8       Saccade      8               642.90     634.59    588.20    592.50
9       Fixation     256             632.40     636.29    592.50    593.50
10      Saccade      96              636.70     1020.60   617.40    602.50
11      Fixation     120             1017.60    998.59    602.50    599.29
12      Saccade      20              997.59     960.29    598.50    597.09
13      Fixation     740             960.50     957.20    597.40    595.09

Note. This table represents the event data corresponding to the raw sample-by-sample data depicted in

Figure 2a. The initial saccade at index two marks the onset of the to-the-target gaze sweep. The saccade at index ten is classified by the algorithm as the saccade marking the onset of the return-to-base gaze sweep. They are shaded in light grey to mark the bounds of the events that the algorithm will search through in order to identify the landing point. The average gaze position (data not shown) of the fixation event of longest duration that falls within these bounds is how this classification is performed. This event, at index five, has been shaded in light red.

Table 3. EyeLink Calibration Data

        Markers           No Markers        Darkness
        Error (°)         Error (°)         Error (°)
PID     Mean     Max      Mean     Max      Mean     Max
P019    0.43     0.87     0.48     0.93     0.27     0.50
P020    0.32     0.45     0.32     0.49     0.25     0.43
P021    0.31     0.49     0.23     0.41     0.19     0.45
P022    0.29     0.44     0.35     0.45     0.27     0.44
P023    0.27     0.85     0.30     0.51     0.32     0.48
P024    0.29     0.52     0.24     0.52     0.31     0.47
P025    0.29     0.47     0.30     0.49     0.27     0.51
P026    0.20     0.44     0.35     0.45     0.20     0.38
P027    0.27     0.49     0.23     0.50     0.18     0.38
P028    0.24     0.65     0.18     0.44     0.17     0.96
P029    0.23     0.38     0.31     0.58     0.28     0.49
P030    0.26     0.46     0.29     0.45     0.26     0.41
P032    0.30     0.54     0.31     0.42     0.40     0.49
P033    0.32     0.54     0.29     0.36     0.36     0.47
P034    0.27     0.78     0.26     0.56     0.27     0.72
P035    0.61     1.08     ---      ---      0.70     2.07
P903    0.23     0.60     0.23     0.44     0.22     0.44
Avg.    0.27     0.53     0.28     0.47     0.26     0.50

Note. Participants P019 and P035 were excluded from the analyses, and their data are not reflected in the averages reported at the bottom of the table.


Table 4. Summary of Regression Analyses to Derive EOG Correction Factors

        Darkness Correction Models                          Eyes-Closed Correction Models
ID      β0     t-statistic   β1      R²    Conversion Function      β0     t-statistic   β1      R²    Conversion Function
P020    1.06   27.43***      -6.61   0.95  ŷ = −6.61 + 1.06x        1.08   27.51***      -8.33   0.95  ŷ = −8.33 + 1.08x
P022    0.59   21.75***      2.23    0.93  ŷ = 2.23 + 0.59x         0.38   5.69***       -0.35   0.48  ŷ = −0.35 + 0.38x
P024    1.10   23.41***      -1.87   0.94  ŷ = −1.87 + 1.10x        1.22   23.07***      2.06    0.93  ŷ = 2.06 + 1.22x
P025    1.25   20.79***      4.91    0.93  ŷ = 4.91 + 1.25x         1.04   15.14***      6.65    0.87  ŷ = 6.65 + 1.04x
P026    2.03   23.91***      -1.61   0.94  ŷ = −1.61 + 2.03x        2.35   27.75***      -1.19   0.95  ŷ = −1.19 + 2.35x
P027    0.84   18.03***      1.32    0.90  ŷ = 1.32 + 0.84x         0.32   5.11***       2.08    0.44  ŷ = 2.08 + 0.32x
P028    0.52   7.64***       -1.32   0.69  ŷ = −1.32 + 0.52x        0.70   9.73***       0.08    0.74  ŷ = 0.08 + 0.70x
P029    1.20   21.72***      3.29    0.93  ŷ = 3.29 + 1.20x         1.07   24.47***      3.35    0.94  ŷ = 3.35 + 1.07x
P030    0.95   21.49***      -1.65   0.93  ŷ = −1.65 + 0.95x        0.87   25.46***      2.22    0.94  ŷ = 2.22 + 0.87x
P032    0.24   3.80*         -1.96   0.35  ŷ = −1.96 + 0.24x        0.46   4.84**        4.05    0.52  ŷ = 4.05 + 0.46x
P033    0.77   9.09***       -1.33   0.69  ŷ = −1.33 + 0.77x        0.66   8.56***       -3.75   0.69  ŷ = −3.75 + 0.66x
P034    0.49   14.85***      -2.74   0.85  ŷ = −2.74 + 0.49x        0.05   2.96          4.71    0.21  ŷ = 4.71 + 0.05x

Note. Robust regression analyses were performed using Huber’s method of iteratively reweighted least squares. The tuning constant for M-estimation was set to 1.345 in line with the work done by Holland and Welsch (1977) showing that this maintains 95% asymptotic efficiency. To control for the familywise error rate, p-values were adjusted using the Bonferroni correction procedure.

*** p ≤ .001, ** p ≤ .01, * p ≤ .05

Figures

Figure 1. Spatial Layout of Stimulus Display

Shows the home position (center) and four marker positions (labeled here as A, B, C, D but the labels were not seen by participants). The arrows and markers indicating the screen distances were also not visible to participants.

Figure 2. Representative Traces of Gaze Position for EyeLink Data

Traces of gaze position as estimated by EyeLink for a representative subject (P030) for each condition and target location. Columns represent conditions and rows represent target locations. The solid line in each panel represents the position of gaze in the horizontal dimension, while the dash-dot line represents the position of gaze in the vertical dimension. Target locations are indicated by the dashed lines. The thick blue line indicates the fixation event that was classified as the target landing point.

Figure 3. Supervised Nature of the Gaze Landing Point Classification Algorithm

Index   Event      Duration (ms)   StartX      EndX
1       Fixation   512             -958.70     -953.79
2       Saccade    40              -954.00     -1071.60
3       Fixation   116             -1071.20    -1068.80
4       Saccade    40              -1068.60    -949.00
5       Fixation   128             -950.59     -951.90
6       Saccade    56              -951.40     -1258.00
7       Fixation   176             -1256.00    -1247.50
8       Saccade    16              -1248.20    -1271.50
9       Fixation   736             -1270.90    -1275.50
10      Saccade    12              -1276.10    -1293.90
11      Fixation   80              -1293.70    -1288.80
12      Saccade    16              -1287.20    -1265.40
13      Fixation   348             -1268.00    -1268.40
14      Saccade    60              -1268.50    -982.90
15      Fixation   128             -984.09     -984.70
16      Saccade    20              -983.50     -956.79
17      Fixation   280             -959.20     -957.79

Condition 1 Subject P030, Trial D, Number 4

Multiple gaze sweeps detected. The current to–the–target and back–to–base onset indices are... GzStart: 2 GzReturn: 4

Do you wish to change the index bounds for determining the gaze landing point (Y/N)? Y

Please indicate valid event index representing the GzStart value: 6 Please indicate valid event index representing the GzReturn value: 14

The new index bounds that have been selected are... GzStart: 6 GzReturn: 14

Proceed with these values (Y/N)? Y

Because a to-the-target onset saccade (index six) has been detected as occurring after a return-to-base onset saccade (index four), the user is prompted to verify the bounds within which the algorithm will search for the gaze landing point. Given the currently defined search bounds, you can see that there is a subsequent to-the-target and back-to-base eye movement pattern present in the event data. The user looks to see if this contains a fixation event with a duration exceeding that of the currently defined bounds. Indeed, event index nine represents a fixation with a longer duration, so the bounds are adjusted to reflect this.

Figure 4. EOG Signal Processing and Feature Identification

The EOG signal processing and feature identification approach for estimating the target landing point is shown for a representative participant (P024). The top panels (a–d) depict the signal from the Darkness condition. The bottom panels (e–h) depict the signal from the

Eyes-Closed condition. Trials with target locations A, B, C, and D are represented column-wise from left to right. The filtered EOG signal (dark blue) is superimposed on the raw signal (light blue). Dashed vertical lines represent the signal change points corresponding with the onset of the target sweep (left) and the onset of the home sweep (right). The average of the local maxima, or peaks (marked by the red annuli), was taken as the landing point.

This value is marked by the thin horizontal red line. The green lines above and below the red line represent one standard deviation (±).

Figure 5. Comparison of EyeLink, EOG, and EOG Corrected

Box plots illustrate the comparison of the EyeLink and EOG landing point amplitudes in the (a)

Markers, (b) No Markers, and (c) Darkness conditions. The bottom-right panel (d) contrasts the original and EOG corrected landing point amplitudes for the Eyes-Closed condition. The mean landing position for each condition-measure pairing is represented by the diamond. Note that 0° marks the home position and that target locations A, B, C, and D are denoted by the dashed lines positioned at −21.0°, −10.5°, 10.5°, and 21.0°, respectively. Extreme observations, which are marked by a dot, are data points more than 1.5 times the interquartile range above or below the vertical edges of the box. ELink = EyeLink.


Figure 6. Validation Step

Box-and-whisker plots of the horizontal landing point amplitudes in the Darkness condition as measured by EyeLink (grey), EOG corrected (blue), and EOG uncorrected (gold). The diamonds mark the mean landing position for each unique measure-by-target pairing. See the main text for additional details. ELink = EyeLink.

*** p ≤ .001, * p ≤ .05


Figure 7. Horizontal Landing Error Across Conditions

Absolute horizontal landing error (collapsed across target location) as measured by EyeLink (Markers, No Markers, Darkness) and EOG corrected (Eyes-Closed). The group-level means are marked by the diamond, while the small grey dots adjacent to the box plots denote the mean landing point error at the level of individual participants (n = 11). The error bars about the means depict the Cousineau-Morey-O’Brien within-subject standard errors (Cousineau & O’Brien, 2014).



Figure 8. EyeLink Measured Vertical Landing Error Across Conditions

Mean absolute vertical landing position error for the full set (n = 15) of EyeLink data. Error bars depict the Cousineau-Morey-O’Brien within-subject standard errors (Cousineau & O’Brien, 2014).

58 References

Abegg, M., Lee, H., & Barton, J. J. S. (2010). Systematic diagonal and vertical errors in

antisaccades and memory-guided saccades. Journal of Eye Movement Research, 3(3),

Article 3. https://doi.org/10.16910/jemr.3.3.5

Allik, J., Rauk, M., & Luuk, A. (1981). Control and sense of eye movement behind closed

eyelids. Perception, 10(1), 39–51. https://doi.org/10.1068/p100039

Antrobus, J. S., Antrobus, J. S., & Singer, J. L. (1964). Eye movements accompanying

daydreaming, visual imagery, and thought suppression. The Journal of Abnormal and

Social Psychology, 69(3), 244–252. https://doi.org/10.1037/h0041846

Arden, G. B., Barrada, A., & Kelsey, J. H. (1962). New clinical test of retinal function based

upon the standing potential of the eye. British Journal of Ophthalmology, 46(8), 449–

467. https://doi.org/10.1136/bjo.46.8.449

Aserinsky, E., & Kleitman, N. (1953). Regularly occurring periods of eye motility, and

concomitant phenomena, during sleep. Science, 118(3062), 273–274.

https://doi.org/10.1126/science.118.3062.273

Aserinsky, E. (1971). Rapid eye movement density and pattern in the sleep of normal young

adults. Psychophysiology, 8(3), 361–375. https://doi.org/10.1111/j.1469-

8986.1971.tb00466.x

Bahill, A. T., & Stark, L. (1975). Overlapping saccades and glissades are produced by fatigue in

the saccadic eye movement system. Experimental Neurology, 48(1), 95–106.

https://doi.org/10.1016/0014-4886(75)90225-3

59 Barbara, N., Camilleri, T. A., & Camilleri, K. P. (2020). A comparison of EOG baseline drift

mitigation techniques. Biomedical Signal Processing and Control, 57, 101738.

https://doi.org/10.1016/j.bspc.2019.101738

Barea Navarro, R., Boquete Vázquez, L., & López Guillén, E. (2018). EOG-based wheelchair

control. In Smart Wheelchairs and Brain-Computer Interfaces (pp. 381–403). Elsevier.

https://doi.org/10.1016/B978-0-12-812892-3.00016-9

Becker, W., & Fuchs, A. F. (1969). Further properties of the human saccadic system: Eye

movements and correction saccades with and without visual fixation points. Vision

Research, 9(10), 1247–1258. https://doi.org/10.1016/0042-6989(69)90112-6

Becker, W., & Klein, H. (1973). Accuracy of saccadic eye movements and maintenance of

eccentric eye positions in the dark. Vision Research, 13(6), 1021–1034.

https://doi.org/10.1016/0042-6989(73)90141-7

Brodoehl, S., Klingner, C. M., & Witte, O. W. (2015). Eye closure enhances dark night

perceptions. Scientific Reports, 5(1), 10515. https://doi.org/10.1038/srep10515

Buchanan, E., Gillenwaters, A., Scofield, J., & Valentine, K. (2019). MOTE: Effect size and

confidence interval calculator. (1.0.2) [Computer software]. https://cran.r-

project.org/web/packages/MOTE/index.html

Bulling, A., Ward, J. A., Gellersen, H., & Tröster, G. (2011). Eye movement analysis for activity

recognition using electrooculography. IEEE Transactions on Pattern Analysis and

Machine Intelligence, 33(4), 741–753. https://doi.org/10.1109/TPAMI.2010.86

Cain, M. K., Zhang, Z., & Yuan, K.-H. (2017). Univariate and multivariate skewness and

kurtosis for measuring nonnormality: Prevalence, influence and estimation. Behavior

Research Methods, 49(5), 1716–1735. https://doi.org/10.3758/s13428-016-0814-1

Camors, D., Trotter, Y., Pouget, P., Gilardeau, S., & Durand, J.-B. (2016). Visual straight-ahead preference in saccadic eye movements. Scientific Reports, 6(1), 23124. https://doi.org/10.1038/srep23124

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). L. Erlbaum Associates.

Constable, P. A., Bach, M., Frishman, L. J., Jeffrey, B. G., & Robson, A. G. (2017). ISCEV standard for clinical electro-oculography (2017 update). Documenta Ophthalmologica, 134(1), 1–9. https://doi.org/10.1007/s10633-017-9573-2

Cousineau, D., & O’Brien, F. (2014). Error bars in within-subject designs: A comment on Baguley (2012). Behavior Research Methods, 46(4), 1149–1151. https://doi.org/10.3758/s13428-013-0441-z

Donaldson, I. M. L. (2000). The functions of the proprioceptors of the eye muscles. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 355(1404), 1685–1754. https://doi.org/10.1098/rstb.2000.0732

Du Bois-Reymond, E. H. (1884). Untersuchungen über thierische Electricität [Investigations on animal electricity]. Verlag Georg Reimer.

Duchowski, A. T. (2017). Eye tracking techniques. In A. T. Duchowski, Eye Tracking Methodology (pp. 49–57). Springer International Publishing. https://doi.org/10.1007/978-3-319-57883-5_5

Engbert, R., & Kliegl, R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43(9), 1035–1045. https://doi.org/10.1016/S0042-6989(03)00084-1

Engbert, R., & Kliegl, R. (2004). Microsaccades keep the eyes’ balance during fixation. Psychological Science, 15(6), 431–436. https://doi.org/10.1111/j.0956-7976.2004.00697.x

Etter, A., & Biedermann, M. (2018). Edf2Mat (1.2) [C++, C, MATLAB]. University of Zurich. https://github.com/uzh/edf-converter

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149

Fenn, W. O., & Hursh, J. B. (1936). Movements of the eyes when the lids are closed. American Journal of Physiology-Legacy Content, 118(1), 8–14. https://doi.org/10.1152/ajplegacy.1936.118.1.8

Findling, R. D., Quddus, T., & Sigg, S. (2019). Hide my gaze with EOG! Towards closed-eye gaze gesture passwords that resist observation-attacks with electrooculography in smart glasses. Proceedings of the 17th International Conference on Advances in Mobile Computing & Multimedia, 107–116. https://doi.org/10.1145/3365921.3365922

Finsterer, J., & Zarrouk-Mahjoub, S. (2016). Leber’s hereditary optic neuropathy is multiorgan not mono-organ. Clinical Ophthalmology, 10, 2187–2190. https://doi.org/10.2147/OPTH.S120197

Frens, M. A., & Van der Geest, J. N. (2002). Scleral search coils influence saccade dynamics. Journal of Neurophysiology, 88(2), 692–698. https://doi.org/10.1152/jn.00457.2001

Fujii, K., Gras, G., Salerno, A., & Yang, G.-Z. (2018). Gaze gesture based human robot interaction for laparoscopic surgery. Medical Image Analysis, 44, 196–214. https://doi.org/10.1016/j.media.2017.11.011

Gnadt, J. W., Bracewell, R. M., & Andersen, R. A. (1991). Sensorimotor transformation during eye movements to remembered visual targets. Vision Research, 31(4), 693–715. https://doi.org/10.1016/0042-6989(91)90010-3

Greenhouse, S. W., & Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24(2), 95–112. https://doi.org/10.1007/BF02289823

Gunawardane, P. D. S. H., de Silva, C. W., & Chiao, M. (2019). An oculomotor sensing technique for saccade isolation of eye movements using OpenBCI. 2019 IEEE SENSORS, 1–4. https://doi.org/10.1109/SENSORS43011.2019.8956542

Gunawardane, P. D. S. H., MacNeil, R. R., Zhao, L., Enns, J. T., de Silva, C. W., & Chiao, M. (in preparation). A fusion algorithm for saccade eye movement enhancement with EOG and lumped-elements.

Heide, W., Koenig, E., Trillenberg, P., Kömpf, D., & Zee, D. (1999). Electrooculography: Technical standards and applications. In G. Deuschl & A. Eisen (Eds.), Recommendations for the practice of clinical neurophysiology: Guidelines of the International Federation of Clinical Neurophysiology (EEG Suppl. 52) (2nd ed., pp. 223–240). Elsevier Science B.V.

Heikkilä, H. (2013). EyeSketch: A drawing application for gaze control. Proceedings of the 2013 Conference on Eye Tracking South Africa - ETSA ’13, 71–74. https://doi.org/10.1145/2509315.2509332

Hess, K., Reisine, H., & Dürsteler, M. (1985). Normal eye drift and saccadic drift correction in darkness. Neuro-Ophthalmology, 5(4), 247–252. https://doi.org/10.3109/01658108509004937

Hessels, R. S., Niehorster, D. C., Nyström, M., Andersson, R., & Hooge, I. T. C. (2018). Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. Royal Society Open Science, 5(8), 180502. https://doi.org/10.1098/rsos.180502

Hetter, G. P. (1970). Corneal-retinal potential behind closed eyelids. Archives of Otolaryngology - Head and Neck Surgery, 92(5), 433–436. https://doi.org/10.1001/archotol.1970.04310050015002

Holland, P. W., & Welsch, R. E. (1977). Robust regression using iteratively reweighted least-squares. Communications in Statistics - Theory and Methods, 6(9), 813–827. https://doi.org/10.1080/03610927708827533

Holmqvist, K., & Andersson, R. (2017). Eye tracking: A comprehensive guide to methods and measures (2nd ed.). Lund Eye-Tracking Research Institute.

Hong, C. C.-H., Potkin, S. G., Antrobus, J. S., Dow, B. M., Callaghan, G. M., & Gillin, J. C. (1997). REM sleep eye movement counts correlate with visual imagery in dreaming: A pilot study. Psychophysiology, 34(3), 377–381. https://doi.org/10.1111/j.1469-8986.1997.tb02408.x

Huber, P. J. (1992). Robust estimation of a location parameter. In S. Kotz & N. L. Johnson (Eds.), Breakthroughs in Statistics (pp. 492–518). Springer New York. https://doi.org/10.1007/978-1-4612-4380-9_35

Hüfner, K., Stephan, T., Glasauer, S., Kalla, R., Riedel, E., Deutschländer, A., Dera, T., Wiesmann, M., Strupp, M., & Brandt, T. (2008). Differences in saccade-evoked brain activation patterns with eyes open or eyes closed in complete darkness. Experimental Brain Research, 186(3), 419–430. https://doi.org/10.1007/s00221-007-1247-y

Hüfner, K., Stephan, T., Flanagin, V. L., Deutschländer, A., Stein, A., Kalla, R., Dera, T., Fesl, G., Jahn, K., Strupp, M., & Brandt, T. (2009). Differential effects of eyes open or closed in darkness on brain activation patterns in blind subjects. Neuroscience Letters, 466(1), 30–34. https://doi.org/10.1016/j.neulet.2009.09.010

Hutton, S. B. (2019). Eye tracking methodology. In C. Klein & U. Ettinger (Eds.), Eye Movement Research: An Introduction to its Scientific Foundations and Applications (pp. 277–308). Springer International Publishing. https://doi.org/10.1007/978-3-030-20085-5_8

Imai, T., Sekine, K., Hattori, K., Takeda, N., Koizuka, I., Nakamae, K., Miura, K., Fujioka, H., & Kubo, T. (2005). Comparing the accuracy of video-oculography and the scleral search coil system in human eye movement analysis. Auris Nasus Larynx, 32(1), 3–9. https://doi.org/10.1016/j.anl.2004.11.009

Irving, E. L., Zacher, J. E., Allison, R. S., & Callender, M. G. (2003). Effects of scleral search coil wear on visual function. Investigative Ophthalmology & Visual Science, 44(5), 1933. https://doi.org/10.1167/iovs.01-0926

Israël, I. (1992). Memory-guided saccades: What is memorized? Experimental Brain Research, 90(1), 221–224. https://doi.org/10.1007/BF00229275

Israël, I., & Berthoz, A. (1994). Saccades toward externally or internally acquired memorized locations. In G. d’Ydewalle & J. Van Rensbergen (Eds.), Studies in Visual Information Processing (Vol. 5, pp. 31–43). North-Holland. https://doi.org/10.1016/B978-0-444-81808-9.50008-1

Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher’s handbook. Pearson Prentice Hall.

Kapoula, Z., & Robinson, D. A. (1986). Saccadic undershoot is not inevitable: Saccades can be accurate. Vision Research, 26(5), 735–743. https://doi.org/10.1016/0042-6989(86)90087-8

Killick, R., Fearnhead, P., & Eckley, I. A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500), 1590–1598. https://doi.org/10.1080/01621459.2012.737745

Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. https://doi.org/10.3389/fpsyg.2013.00863

Ley, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013

MacNeil, R., Che, H., & Khan, M. (2016). Human space exploration: Neurosensory, perceptual and neurocognitive considerations. University of Toronto Medical Journal, 93(2), 19–26.

Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human-computer interaction. In S. H. Fairclough & K. Gilleade (Eds.), Advances in Physiological Computing (pp. 39–65). Springer London. https://doi.org/10.1007/978-1-4471-6392-3_3

Marg, E. (1951). Development of electro-oculography: Standing potential of the eye in registration of eye movement. A.M.A. Archives of Ophthalmology, 45(2), 169–185. https://doi.org/10.1001/archopht.1951.01700010174006

Marx, E., Stephan, T., Nolte, A., Deutschländer, A., Seelos, K. C., Dieterich, M., & Brandt, T. (2003). Eye closure in darkness animates sensory systems. NeuroImage, 19(3), 924–934. https://doi.org/10.1016/S1053-8119(03)00150-2

Marx, E., Deutschländer, A., Stephan, T., Dieterich, M., Wiesmann, M., & Brandt, T. (2004). Eyes open and eyes closed as rest conditions: Impact on brain activation patterns. NeuroImage, 21(4), 1818–1824. https://doi.org/10.1016/j.neuroimage.2003.12.026

Matin, L., Matin, E., & Pearce, D. G. (1970). Eye movements in the dark during the attempt to maintain a prior fixation position. Vision Research, 10(9), 837–857. https://doi.org/10.1016/0042-6989(70)90164-1

Mauchly, J. W. (1940). Significance test for sphericity of a normal n-variate distribution. The Annals of Mathematical Statistics, 11(2), 204–209. https://doi.org/10.1214/aoms/1177731915

McIntosh, R. D., & Buonocore, A. (2012). Dissociated effects of distractors on saccades and manual aiming. Experimental Brain Research, 220(3–4), 201–211. https://doi.org/10.1007/s00221-012-3119-3

Mogilever, N. B., Zuccarelli, L., Burles, F., Iaria, G., Strapazzon, G., Bessone, L., & Coffey, E. B. J. (2018). Expedition cognition: A review and prospective of subterranean neuroscience with spaceflight applications. Frontiers in Human Neuroscience, 12, 407. https://doi.org/10.3389/fnhum.2018.00407

Nikoskelainen, E. K., Marttila, R. J., Huoponen, K., Juvonen, V., Lamminen, T., Sonninen, P., & Savontaus, M. L. (1995). Leber’s “plus”: Neurological abnormalities in patients with Leber’s hereditary optic neuropathy. Journal of Neurology, Neurosurgery & Psychiatry, 59(2), 160–164. https://doi.org/10.1136/jnnp.59.2.160

Peirce, J., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203. https://doi.org/10.3758/s13428-018-01193-y

Peirce, J., & MacAskill, M. (2018). Building experiments in PsychoPy. Sage.

Peterson, V., Galván, C., Hernández, H., & Spies, R. (2020). A feasibility study of a complete low-cost consumer-grade brain-computer interface system. Heliyon, 6(3), e03425. https://doi.org/10.1016/j.heliyon.2020.e03425

Poletti, M., & Rucci, M. (2011). Absence of an extraretinal signal associated with ocular drift affects saccade accuracy. Journal of Vision, 11(11), 557–557. https://doi.org/10.1167/11.11.557

R Core Team. (2020). R: A language and environment for statistical computing (3.6.3) [Computer software]. R Foundation.

Robinson, D. (1963). A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Transactions on Biomedical Electronics, 10(4), 137–145. https://doi.org/10.1109/TBMEL.1963.4322822

Shaikh, A. G., Wong, A. L., Optican, L. M., Miura, K., Solomon, D., & Zee, D. S. (2010). Sustained eye closure slows saccades. Vision Research, 50(17), 1665–1675. https://doi.org/10.1016/j.visres.2010.05.019

Singmann, H., Westfall, J., Aust, F., & Ben-Shachar, M. (2020). Afex: Analysis of factorial experiments (0.27-2) [Computer software].

Skavenski, A. A. (1971). Extraretinal correction and memory for target position. Vision Research, 11(7), 743–746. https://doi.org/10.1016/0042-6989(71)90104-0

Skavenski, A. A., & Steinman, R. M. (1970). Control of eye position in the dark. Vision Research, 10(2), 193–203. https://doi.org/10.1016/0042-6989(70)90115-X

Sprenger, A., Lappe-Osthege, M., Talamo, S., Gais, S., Kimmig, H., & Helmchen, C. (2010). Eye movements during REM sleep and imagination of visual scenes. NeuroReport, 21(1), 45–49. https://doi.org/10.1097/WNR.0b013e32833370b2

SR Research Ltd. (2009). EyeLink® 1000 user manual. SR Research Ltd. http://sr-research.jp/support/EyeLink%201000%20User%20Manual%201.5.0.pdf

Wade, N. J. (2015). How were eye movements recorded before Yarbus? Perception, 44(8–9), 851–883. https://doi.org/10.1177/0301006615594947

Walker, R., McSorley, E., & Haggard, P. (2006). The control of saccade trajectories: Direction of curvature depends on prior knowledge of target location and saccade latency. Perception & Psychophysics, 68(1), 129–138. https://doi.org/10.3758/BF03193663

Wei, J., Chen, T., Li, C., Liu, G., Qiu, J., & Wei, D. (2018). Eyes-open and eyes-closed resting states with opposite brain activity in sensorimotor and occipital regions: Multidimensional evidences from machine learning perspective. Frontiers in Human Neuroscience, 12, 422. https://doi.org/10.3389/fnhum.2018.00422

Wimmer, K., Nykamp, D. Q., Constantinidis, C., & Compte, A. (2014). Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nature Neuroscience, 17(3), 431–439. https://doi.org/10.1038/nn.3645

Yang, H., Long, X.-Y., Yang, Y., Yan, H., Zhu, C.-Z., Zhou, X.-P., Zang, Y.-F., & Gong, Q.-Y. (2007). Amplitude of low frequency fluctuation within visual areas revealed by resting-state functional MRI. NeuroImage, 36(1), 144–152. https://doi.org/10.1016/j.neuroimage.2007.01.054

Zhang, X., Kulkarni, H., & Morris, M. R. (2017). Smartphone-based gaze gesture communication for people with motor disabilities. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2878–2889. https://doi.org/10.1145/3025453.3025790
