AN ABSTRACT OF THE THESIS OF

Rafael Robles for the degree of Master of Science in Psychology presented on May 30, 2019.

Title: The Lateralization of Emotional Perception.

Abstract approved: ______Frank Bernieri

There is consensus among researchers that some form of hemispheric lateralization exists when perceiving emotions. However, conclusions regarding the exact organization have been inconsistent, with some studies supporting an overall right hemisphere lateralization and others suggesting differential lateralization for positively and negatively valenced emotions. The main objective of this study was to examine the validity of the experimental methodologies generating these conclusions in an attempt to resolve the inconsistencies found across previous works. In the current study, right-handed participants (N = 90) completed three experiments testing for visual field biases when perceiving emotional faces as a proxy measure of lateralization. Participants completed a free-view emotional chimeric face task (Experiment 1), a tachistoscopic version of this task (Experiment 2), and two divided visual field tasks (Experiment 3). A consistent left visual field bias (i.e., right hemisphere lateralization) was found when judging the emotional intensity of positive and negative emotions in both the free-view and tachistoscopic chimeric face tasks. The divided visual field tasks of Experiment 3 uncovered an unexpected right visual field bias (i.e., left hemisphere lateralization) for the emotional face recognition task. These results suggest that judging emotional intensity and recognizing emotions may be distinct processes, though both fall under the umbrella of “emotional perception.”

©Copyright by Rafael Robles
May 30, 2019
All Rights Reserved

The Lateralization of Emotional Perception

by Rafael Robles

A THESIS

submitted to

Oregon State University

in partial fulfillment of the requirements for the degree of

Master of Science

Presented May 30, 2019
Commencement June 2019

Master of Science thesis of Rafael Robles presented on May 30, 2019.

APPROVED:

Major Professor, representing Psychology

Director of the School of Psychological Science

Dean of the Graduate School

I understand that my thesis will become part of the permanent collection of Oregon State University libraries. My signature below authorizes release of my thesis to any reader upon request.

Rafael Robles, Author

ACKNOWLEDGEMENTS

The author expresses sincere appreciation to Dr. Bernieri for the hours of meetings and editing, to the members of the Interpersonal Sensitivity Lab for their help and support (tangible and emotional), to Oregon State University’s School of Psychological Science for providing this opportunity, and to his wife Kelly for supporting him through the many late nights and roadblocks along the way.

TABLE OF CONTENTS

Page

1 Introduction...……………………………………………………………………… 1

1.1 Lateralization of Facial Perception: From Identity to Emotion…………...4

1.2 Is Facial Perception All Right? Right Hemisphere vs Valence Hypotheses…8

1.3 Methodological Considerations……………………………………………9

1.4 Overview of the Current Study…………………………………………...14

2 General Methods.……………………………………………………….……...... 16

2.1 Participants………………………………………………………………16

2.2 Apparatus………………………………………………………………...17

2.3 Procedure……………………………………………………………...... 17

3 Experiment 1: Free-View Chimeric Face Task……………………………………21

3.1 Method ………………………………………..…………………………21

3.1.1 Free-view chimeric face task……………..….……………………21

3.2 Results………………………………………………………………...... 27

3.3 Discussion…………………………...... ………….30

4 Experiment 2: Sequential Chimeric Face Task……………………………………31

4.1 Method…….….….………………………………………………………31

4.1.1 Sequential chimeric face task……………………………………...31

4.1.2 Fixation-check………………………………………….……..34

4.2 Results……………………………………………………...34

4.3 Discussion.…………………………………………………………….....36

5 Experiment 3: Divided Visual Field....…………………………………...... 36


5.1 Method...... ………………………………………38

5.1.1 Word recognition task…………………………………………..38

5.1.2 Emotion recognition task……………....………………………..41

5.1.3 Fixation-check….……………………………………………….44

5.2 Results...... ……………………………………………………………44

5.3 Discussion...……………………………………………………………..47

6 General Discussion...... 48

6.1 Chimeric Test of Lateralization...... 48

6.2 Order Effects in the Chimeric Face Task...... 50

6.3 Differential Lateralization in ...... 52

6.4 Limitations...... 54

6.5 Future Directions...... 56

7 Conclusion...... 59

Bibliography ………………………………………………………………………..61

Appendix A: Measures………………………………………………………………69

Appendix B: Instructions and Debriefing...... ……………………………74

LIST OF FIGURES

Figure Page

1. Simple visual field pathway adapted from Bourne (2006)...... ………………...... …2

2. Composite face conditions from Gilbert and Bakan (1973)…..…………………...5

3. Chimeric face example: Female right hemiface and male left hemiface...... ….....6

4. Campbell’s (1978) emotional chimeric faces...... ………………………………….7

5. Experimental chin rest example.....……………………………………………….12

6. Facial photos and edited chimeric displays...... 23

7. Free-view chimeric face task (Experiment 1) response screen...... ……………….24

8. Free-viewing chimeric bias quotient by order and valence…………………...... 29

9. Stimulus array for sequential chimeric face task (Experiment 2)...... ….…………33

10. Stimulus array for divided visual field word recognition task (Experiment 3).....40

11. Stimulus array for divided visual field emotion recognition task (Experiment 3)43

12. Divided visual field task accuracy between visual fields for emotional face and word recognition tasks......45

LIST OF TABLES

Table Page

1. Measures of eligibility and lateralization………………………………….....20

2. Bias quotient scoring example…………………………………...... 26


The Lateralization of Emotional Perception

In 1861, Pierre Broca reported on a patient with damage to their left hemisphere that left them unable to produce speech. Four years later, after assessing 25 additional patients with similar damage and an unaffected patient with an equivalent lesion on the right hemisphere, Broca definitively claimed the left hemisphere was solely responsible for the production of speech (see Berker, Berker, & Smith, 1986, for a translation of Broca's report). This was the first discovery in which the physically symmetrical hemispheres of the cortex were found to be functionally distinct. In the 150 years since, neurological and psychological research has continued to uncover instances of functions that are lateralized to (i.e., dominated by) one hemisphere over the other (Hugdahl, 2005). With each instance, the field of neuropsychology inches closer towards its ultimate goal, “the perfect mapping of the psychological states on cerebral states” (Feyereisen, 1991, p. 32).

As the field progressed, researchers created methodologies other than lesioning to uncover similar lateralized functions. One such methodology involved the selective presentation of information to the left or right visual field. To do so, information was simply presented to either side of a central fixation point. The organization of the visual pathway results in information from each eye traveling through the optic nerve to a central optic chiasm. Here the information from the two eyes is combined, and the left and right visual fields are split and sent to the contralateral hemispheres (Snowden, Thompson, & Troscianko, 2011). This results in the right hemisphere processing any visual information appearing to the left of center and the left hemisphere processing any information to the right of center (see Figure 1 for a visualization of this pathway).

Figure 1. Simple visual field pathway adapted from Bourne (2006).

Controlled movements have also been found to have a contralateral neural organization similar to that of the visual fields. In an early study, Penfield and Boldrey (1937) applied electrical stimulation to the motor strip of the cortex. When applied to the right hemisphere, muscle movements occurred on the opposite side of the body. In other words, the process of moving one’s left arm originates in the right hemisphere, and right body movements originate in the left hemisphere.


The consistent organization of visual fields and movement was used by researchers Sperry and Gazzaniga to study split-brain patients (see Gazzaniga, 1967, for an overview). These split-brain patients had undergone a surgical operation to sever the nerve tract that facilitates communication between the left and right hemispheres (i.e., the corpus callosum). This resulted in two separate hemispheres with no means of communicating information to one another. This complete separation of the hemispheres resulted in limited use of visual information depending on the visual field to which it was presented.

In Gazzaniga and Sperry’s studies, split-brain patients completed multiple tasks in which they were asked to verbally report, write, draw, or grasp items related to the information they were shown in each visual field (Gazzaniga, 1967; Gazzaniga & Sperry, 1967). A word or picture presented to the right visual field (left hemisphere) could be verbally reported or written with the right hand (left hemisphere), but could not be selected or drawn unless first said aloud. On the other hand, information presented to the left visual field (right hemisphere) could not be reported but could be selected tactilely or drawn with the left hand (right hemisphere). They posited that this was evidence for a left hemisphere lateralization of verbal processing abilities (e.g., speaking and writing) and a right hemisphere lateralization of nonverbal processing and spatial abilities (e.g., drawing and finding objects).

Evidence for the lateralization of information processing has also been observed in healthy individuals by taking advantage of the same contralateral organization of the visual system. For example, studies have assessed verbal abilities using visual field separation with words (Geffen, Bradshaw, & Wallace, 1971; Mishkin & Forgays, 1952; Orbach, 1967; Tremblay, Ansado, Walter, & Joanette, 2007). In these studies, letters or words were presented to one visual field or the other for a short period of time, and participants were asked to identify what they had seen. Even though these participants did not have split brains, performance was better and reaction times were shorter when the words appeared in their right visual field. This, again, showed the same pattern of left hemisphere lateralization for verbal processes. However, not every study on lateralization focuses solely on verbal processing.

Lateralization of Facial Perception: From Identity to Emotion

Another area of interest in the lateralization field revolves around the perception of faces. Researchers Gilbert and Bakan (1973) were interested in understanding why one half of a person’s face appeared more natural and identifiable than the other. This question was first assessed by Werner Wolff (1933), who hypothesized that this difference in identifiability was due to physical differences of the face. Gilbert and Bakan did not agree with this explanation. Instead, they hypothesized that the tendency to prefer one half of the face was due to the perceiver.

Gilbert and Bakan’s (1973) study used edited composite facial stimuli consisting of isolated right and left halves of a face attached to a mirrored version of themselves (i.e., Right/Right and Left/Left faces). The composite photos were presented along with a full-face photo, and participants were asked to select the composite they believed to be most similar to the original photo. However, half of the participants were presented with a mirrored version of the original photo (see Figure 2 for a visual example of these differences). They found that, when presented with a normal full-face photo, participants selected the Right/Right composite, but when presented with the mirrored photo they selected the Left/Left composite. They concluded that participants’ choices were dependent on the similarity to the information presented to their left visual fields. Therefore, facial identification was lateralized to the right hemisphere.

Figure 2. Composite face conditions from Gilbert and Bakan (1973).

Simultaneously, a study on split-brain patients led to the creation of another method to assess facial perception lateralization. The “chimeric” face paradigm was created by combining the left half of a face from one photo with the right half of a face from a different photo (Levy, Trevarthen, & Sperry, 1972). For example, a chimeric photo may consist of the right half of a young woman’s face combined with the left half of a man’s (see Figure 3). The split-brain patient would then identify a photo they believed looked most similar to the chimeric stimulus. Yet again, decisions were found to be dependent on the information presented to the viewer’s left visual field (i.e., if shown the photo in the example, they would report seeing a woman).

Figure 3. Chimeric face example: Female right hemiface and male left hemiface.

This chimeric face paradigm was later adapted for the assessment of facial emotions in normal populations by Campbell (1978). Rather than creating chimeric faces of multiple identities, Campbell created chimeric faces that included the left and right hemifaces of the same person expressing different emotions. Specifically, these chimeric faces combined a smiling hemiface with a relaxed, neutral hemiface (Figure 4).


Figure 4. Campbell’s (1978) emotional chimeric faces.

Note: (a) normal smiling face, (b) normal relaxed face, (c) left smiling chimeric, and (d) right smiling chimeric.

A few years later, Levy, Heller, Banich, and Burton (1983) adapted Campbell’s (1978) emotional chimeric faces to create a free-view chimeric face task. This version of the experiment placed the pair of chimeric photos in a vertical column aligned along the center bisection and allowed participants to freely view the photos for as long as they wanted. They were then asked to decide which photo they found more expressive. As each trial consisted of the pairing of a face with its mirrored counterpart, the information in each pair was objectively identical. The only difference between them was the side of the central bisection on which the emotive half fell (e.g., one visual field would see a non-emotive neutral hemiface while the other would see an emotional hemiface). Thus, participants’ selections would reflect which visual field, if any, they were biased towards when making their expressivity judgments. Ultimately, this bias was used as an indication that the opposite hemisphere was dominant for this emotional perception process.

Is Facial Perception All Right? Right Hemisphere vs Valence Hypotheses

As researchers have studied the lateralization of emotional perception, a debate has emerged as to whether all emotions are lateralized or only certain emotions. The crux of the debate centers on the importance of positively valenced emotions. The right hemisphere hypothesis states that all emotional processing is lateralized to the right hemisphere and thus should show a bias towards information presented to the left visual field. This was found in Levy et al.’s (1972) study with split-brain patients, the first emotional chimeric studies on normal groups (Campbell, 1978; Heller & Levy, 1981; Ley & Bryden, 1979), and many studies since then (Dimberg & Petterson, 2000; Hoptman & Levy, 1988; Innes, Burt, Birch, & Hausmann, 2016; Luh, 1998; Wedding & Cyrus, 1986). Conversely, the valence hypothesis, first described by Reuter-Lorenz and Davidson (1981), states that the right hemisphere is specialized for processing only negatively valenced emotions and the left hemisphere is specialized for processing positive emotions. Though not as prevalent, this hypothesis has been supported by studies using multiple methodologies including emotional chimeric face tasks (Prete, Laeng, Fabri, Foschi, & Tommasi, 2015), divided visual field tasks (Davidson, Mednick, Moss, Saron, & Schaffer, 1987), and physiological studies (Baijal & Srinivasan, 2011; Davidson, 1992).

Both the right hemisphere and valence hypotheses cannot simultaneously be correct, but they can both be wrong. The differences between the conclusions could stem from differences in methodologies across the studies. Another source could be systematic error resulting from the methods used. If the measures used are unreliable or not properly controlled, this casts doubt on the conclusions and their generalizability across samples. The use of multiple methods on the same sample could help to assess the validity and reliability of the measures.

Methodological Considerations

The use of proxy measures for lateralization leaves potential sources of error stemming from the methodologies. Though it is currently unknown if these methodological concerns are warranted, they should be addressed before any further conclusions regarding the lateralization of emotional processing are made. For example, conclusions asserting that depressed groups are less lateralized than healthy controls when processing emotions (Jaeger, Borod, & Peselow, 1987) are questionable if we cannot differentiate between an emotional and an attentional decision, let alone measure the strength of the lateralization. The same could be said for studies supporting the right hemisphere or valence hypotheses. Therefore, we must address potential threats to the internal validity and generalizability of previously used proxy measures in order to determine if the conclusions of past studies are meaningful.

The inconsistencies among studies using the chimeric face paradigm may be an artifact stemming from the lack of control in the methodology. For example, the often-used free-view version of the chimeric face task relies on the assumption that participants fixate only at the center of each face. If this assumption is not met, then the separation of visual fields is lost, as one visual field would receive information from both sides of the chimeric face. This is an easy assumption to break: all it would take is a participant scanning the photo or simply fixating on an eye instead of the center point. Either of these behaviors ruins the separation of visual fields and effectively weakens any inference of lateralization. Correcting this would necessitate controlling the participant’s fixation and eliminating any possibility of scanning, which can be done with a tachistoscopic version of the chimeric face task.

The assumption that the separation of left and right visual fields begins in the center of the fovea without visual field overlap (i.e., split fovea theory) presents another potential issue. This theory is currently up for debate (Ellis & Brysbaert, 2010). It is possible that there is a bilateral overlap in the fovea of up to 3° of visual angle across, in which both hemispheres receive the same information (Bourne, 2006; Jordan, Paterson, & Stachurski, 2008). This would mean that much of the facial information from chimeric stimuli falls into this area of overlap unless the stimuli are presented at an extremely close distance or are fairly large in size. For example, a recent chimeric face study presented images that, at a controlled 56 cm distance, spanned only 4.5° of visual angle (Innes et al., 2016). At the upper estimate of bilateral visual field overlap, this would mean the majority of the visual information was not presented to separate visual fields at all, and therefore does not truly assess an underlying lateralization. This problem would be present even in a tachistoscopically controlled version of the chimeric task.

A possible solution to the concerns of visual control and bilateral visual field overlap is the use of a divided visual field paradigm. This methodology was first used in a word recognition study by Mishkin and Forgays (1952) and is still used today (see Bourne, 2006, for a review of procedures). This paradigm involves the brief presentation of stimuli (e.g., emotional faces) to each visual field separately by ensuring the inner edge of any stimulus is at least 2° to the left or right of a central fixation (i.e., outside of the potential area of overlap). As visual angle is a function of distance to the stimuli, strict control over the distance between the participant and the screen is necessary. Experimental chinrests (Figure 5) exist for precisely this reason and would have to be used.
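To make the geometry concrete (an illustrative sketch, not part of the thesis procedure): the lateral offset on the screen corresponding to a given eccentricity is viewing distance × tan(angle), so a 2° inner-edge criterion at a 56 cm viewing distance works out to roughly 2 cm from fixation.

```python
import math

def offset_cm(eccentricity_deg, distance_cm):
    # Lateral distance from the fixation point (in cm) that corresponds
    # to a given visual eccentricity at a fixed viewing distance.
    return distance_cm * math.tan(math.radians(eccentricity_deg))

print(round(offset_cm(2.0, 56.0), 2))  # 1.96 cm minimum inner-edge offset
```

Because the required offset scales with distance, any head movement toward or away from the screen changes the effective eccentricity, which is why a chinrest is needed.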


Figure 5. Experimental chin rest example.

Even with controlled visual field separation and a chin rest, the presentation time of the stimuli must be faster than saccadic reaction time. This reaction time has been found to average around 200ms as long as there is no gap between the fixation and onset of the stimulus (Fischer & Boch, 1983), so stimulus presentation would have to be shorter than this time period.

Additionally, there needs to be some way to ensure participants fixate on the center of the screen before a trial begins. To address this concern, an eye-tracker could be used. In the absence of this technology, some form of fixation check could be used to assess whether participants were fixated at the center before each trial began. One proposed method is to embed information in the fixation point and require the participants to report that information (Bourne, 2006). However, care must be taken to ensure that the fixation check does not distract from the actual measure.

Another concern when assessing visual field biases is that the measure may simply be assessing an attentional or response bias rather than a lateralized function. Simply put, this hypothesis states that any and all information presented in the left visual field maintains an attentional advantage. This concern has been addressed by presenting participants with an emotional chimeric face task along with an inverted face identification task meant to interrupt facial processing (Luh, 1998; Luh, Rueckert, & Levy, 1991). Another study paired the chimeric face task with a greyscales task in which participants had to judge the darkness of grey bars (Innes et al., 2016). Each study found the same significant leftward bias in both the chimeric face and ‘control’ tasks. In other words, participants were biased towards selecting information presented on the left regardless of the content.

The current study will make use of this divided visual field paradigm. It should be noted that this paradigm has repeatedly found a right visual field bias (i.e., left lateralization) for the processing of verbal stimuli (Barca et al., 2011; Hagenbeek & Van Strien, 2002; Mishkin & Forgays, 1952; Orbach, 1967; Pujol, Deus, Losilla, & Capdevila, 1999). This bias is opposite to the one theorized for the processing of emotion. For this reason, a word recognition task was added alongside the emotional face recognition task in order to verify the internal validity of the experimental paradigm and procedures.

Overview of the Current Study

The above review of the literature reveals a consensus among researchers that there is some form of hemispheric lateralization when perceiving emotions. That being said, the conclusions have been inconsistent with some studies supporting an overall right hemisphere lateralization and others finding differences in lateralization between positively and negatively valenced emotions. Most of what we know has been inferred through the use of visual field biases as a proxy measure with the assumption that a bias towards one visual field represents greater processing in the opposing hemisphere. However, conclusions regarding the underlying lateralization of processes may be invalid if these visual fields are not properly separated during testing. The problem is that much of the empirical evidence that forms the basis of our knowledge in this area has been accumulating through poorly controlled experiments.

The main objectives of this investigation were to examine the validity of the classic free-view chimeric face methodology and to clarify the discrepancies found across previous works through the use of highly controlled methodologies. Visual field biases were assessed with three completely different research paradigms. The first (Experiment 1) was a direct replication of the commonly used free-view chimeric face task first used by Levy and colleagues (1983). The second (Experiment 2) refined this technique by using a tachistoscopic presentation of stimuli in which chimeric faces were seen one at a time for a period too brief to allow any saccades. The third (Experiment 3) employed a divided visual field paradigm that both rigorously controlled stimulus field presentation and enabled the simultaneous presentation of multiple emotional faces and non-emotional words. The examination and comparison of these three methods for assessing visual field bias was expected to confirm the phenomenon by ruling out the artifacts compromising the internal validity of prior published reports.

If a left visual field bias for negative emotions and a right visual field bias for positive emotions appeared, then the valence hypothesis would be a fitting model for emotional perception lateralization. If, on the other hand, a significant left visual field bias were found across positively and negatively valenced emotions, then a right hemisphere lateralization when perceiving emotions would be likely. This latter effect could alternatively be the result of an overall leftward attentional bias. However, a right visual field bias during the word recognition task would eliminate this alternative explanation because it would necessitate responses towards the opposite visual field. We hypothesized that a left visual field bias would appear in Experiments 1 and 2 as well as the emotion recognition task of Experiment 3, and a right visual field bias would appear for the word recognition task of Experiment 3.


General Methods

Participants

Ninety (63 female) right-handed undergraduate psychology students from Oregon State University’s online experiment signup system participated. Ages ranged from 18 to 26 years old (M = 19.3, SD = 1.58). All participants were tested to have at least 20/25 vision, the ability to distinguish basic colors, and the ability to speak and read English. Participant handedness was measured using the Edinburgh Handedness Inventory (Oldfield, 1971), which scores handedness on a scale from -100 (strong left-handedness) to +100 (strong right-handedness) (scale included in Appendix A).
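For readers unfamiliar with the scale, the Edinburgh-style laterality quotient is conventionally computed from tallies of right- and left-hand preferences across the inventory items; the sketch below shows the standard formula (the tally values are illustrative, not taken from the thesis).

```python
def laterality_quotient(right_tally, left_tally):
    # Conventional Edinburgh-style laterality quotient:
    # +100 = exclusively right-handed, -100 = exclusively left-handed.
    return 100 * (right_tally - left_tally) / (right_tally + left_tally)

# e.g., 15 right-hand vs 5 left-hand preference marks across items
print(laterality_quotient(15, 5))  # 50.0
```

A score of exactly zero indicates no preference between hands, which is why the right-handed inclusion criterion described later requires a score strictly greater than zero.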

Participant handedness ranged from +33.33 to +100 (M = 81.48, SD = 18.44). All students received partial class credit for their participation and were treated in accordance with the “Ethical Principles of Psychologists and Code of Conduct” (American Psychological Association, 2002).

A power analysis for 90 participants was completed with G*Power 3 (Faul, Erdfelder, Lang, & Buchner, 2007). Assuming the same large effect size (r = .57) found in Levy et al.’s (1983) initial work, the study was determined to have adequate power to detect a bias effect (1 − β ≈ 1.00). These 90 participants were also determined to offer adequate power (1 − β = .83) to perform a correlation between measures, even when assuming psychology’s average effect size (r = .30).
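These power figures can be roughly cross-checked with the Fisher z normal approximation for a test of a correlation; this is a sketch, not the exact noncentral computation G*Power performs, so values differ slightly.

```python
import math
from statistics import NormalDist

def corr_power(r, n, alpha=0.05):
    # Approximate power of a two-tailed test of H0: rho = 0 using the
    # Fisher z transformation: atanh(r) is roughly normal with
    # standard error 1/sqrt(n - 3).
    z = NormalDist()
    noncentrality = math.atanh(r) * math.sqrt(n - 3)
    return 1 - z.cdf(z.inv_cdf(1 - alpha / 2) - noncentrality)

print(round(corr_power(0.30, 90), 2))  # ~0.82, near the reported .83
print(round(corr_power(0.57, 90), 4))  # effectively 1.0
```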


Apparatus

Stimuli were presented on a Samsung SE200 23.6” monitor (1920 x 1080 resolution, 60 Hz refresh rate) using OpenSesame experimental presentation software (Mathôt, Schreij, & Theeuwes, 2012). During the visually controlled tasks, a chin rest was placed exactly 56 cm from the monitor. At this distance, each degree of visual angle was represented by 33.35 px. All selections were made using a 102-key keyboard with a standard 10-key number pad and top number row in order to allow participants to make selections with either their right or left hand.

Procedure

Upon arriving at the experiment, participants were presented with a description of research document (included in Appendix B) and read an overview of the study. After participants agreed to participate, the researcher completed an eligibility screening. Visual acuity for each eye was tested using a Snellen eye chart (Snellen, 1862). Any participants who did not display at least 20/25 vision were excluded from participation. Color blindness was screened through a simple color identification task to ensure participants could correctly identify primary colors.

Participants who passed all eligibility screenings continued to the computer station to complete the main tasks of the study. The experimenter informed the participants of the overall procedure for each separate task and the importance of using the chinrest for the entirety of the sequential chimeric and divided visual field tasks. They were also informed of the importance of setting their eyes on the fixation point in between trials and notified of the inclusion of a fixation check. During the tasks, participants were instructed to make their selections using the number keys on the keyboard. This allowed for left- or right-handed responding (choice of response hand was noted).

Following the instructions, participants completed a brief practice sequence. In this sequence, participants completed four trials in which a photograph of a smiling or angry face was presented before or after a neutral face for 2000 ms each. Participants were asked to identify which face they found to be more emotionally expressive using the keyboard (‘1’ if they believed the first photo to be more expressive and ‘3’ for the second photo). Each photo was preceded by a color-changing fixation point. This fixation appeared as black for 600 ms and then changed to a randomly selected color (red, blue, or green) for an additional 600 ms. During two trials of the practice sequence, the participants completed a fixation check in which they were asked to identify which color the fixation point turned and were given feedback based on the accuracy of their response.

After completing the practice sequence, participants continued on to the main experiments. Participants completed them in the following order: sequential chimeric face task, emotional divided visual field task, word matching divided visual field task, and free-view chimeric face task. Instructions were presented on the computer monitor before each task (included in Appendix B).


Upon completion of the three experiments, participants were allowed a brief break before completing a set of six measures and a short demographic questionnaire via Qualtrics online survey software. These measures included two self-report handedness measures, which were found to correlate significantly (r = .93, p < .001). Only the Edinburgh Handedness Inventory (Oldfield, 1971) was used as an inclusion criterion. In scoring the Edinburgh Handedness Inventory, a lack of preference between hands receives a score of zero, so the right-handed inclusion criterion was any score greater than zero. This cutoff resulted in the loss of 11 of the 101 participants who initially enrolled in the study, which is in line with previously found norms of a roughly 90% right-handed population (Hardyck & Petrinovich, 1977).

Participants were also assessed for eye dominance using the Miles Test (Miles, 1929), footedness using the Waterloo Footedness Questionnaire-Revised (Elias, Bryden, & Bulman-Fleming, 1998), as well as three measures of emotional identification ability. These measures were not pertinent to the central hypotheses of this study and will not be discussed further. Table 1 includes the completed eligibility measures and lateralization measures. After completing the study, participants were thanked and debriefed (see Appendix B for debriefing statement).


Table 1

Measures of eligibility and lateralization

Measure              Eligibility    Lateralization
Visual Acuity             X
Colorblindness            X
Handedness                X                X
Visual Field Bias                          X
Footedness                                 X
Eye Dominance                              X
Response Hand                              X

Note: Visual acuity was measured using a Snellen eye chart (Snellen, 1862), colorblindness was assessed using a color recognition task, handedness was assessed using the Edinburgh Handedness Inventory (Oldfield, 1971) and the Waterloo Handedness Questionnaire-Revised (Boucher, Bryden, & Roy, 1996), visual field bias was assessed using the chimeric face and divided visual field tasks, footedness was assessed using the Waterloo Footedness Questionnaire-Revised (Elias, Bryden, & Bulman-Fleming, 1998), eye dominance was assessed using the Miles Test (Miles, 1929), and response hand was recorded by the experimenter as the hand used to make keyboard responses during the main task.


Experiment 1: Free-View Chimeric Face Task1

In this task, an emotional chimeric face appeared with its mirrored counterpart in a vertical column (as opposed to side-by-side) on a display. Participants were given time to study both images before making a forced choice as to which face they found more emotionally expressive.

Method

Free-view chimeric face task. Frontal headshots of five white males displaying a neutral expression, an open-mouthed happy expression, and an angry expression were selected from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015). From these photos, multiple sets of chimeric faces were created for each model using Adobe Photoshop. This was done by first vertically bisecting each facial photograph at the nose, then combining each half of a model’s neutral photo with the opposing side of their happy photo (i.e., the neutral left hemi-face was attached to the happy right hemi-face and the neutral right hemi-face was attached to the happy left hemi-face). As differences in head tilt would generate inconsistent chimeric faces (e.g., chins or foreheads would not line up), all photo selections were based upon the overall consistency of head tilt between the three photos depicting different emotions. The midline of each photo was then blended to increase the natural appearance of the chimeric faces.

1 Although it is being presented first, this task was completed last in the overall procedure. This was done as a precaution in case the longer viewing time allowed participants to notice their biases when viewing faces, and because it required the least amount of experimental control.


Each newly created chimeric face was then mirrored to control for the variation in expressiveness between each model’s left and right hemiface. This process was repeated using the angry photographs in place of the happy photographs, creating a total of eight chimeric faces (i.e., two happy chimeric, two mirrored happy chimeric, two angry chimeric, and two mirrored angry chimeric) for each model and 40 images total. Finally, all photos were cropped across the neck, just below the chin, and edited to a size of approximately 240 x 350 pixels. A full set of edited photos can be seen in Figure 6.


Figure 6. Facial photos and edited chimeric displays.

Note: Right Visual Field (RVF) and Left Visual Field (LVF) refer to the visual fields of the perceiver.


Participants viewed a chimeric face as well as its mirrored counterpart arranged in a centered, vertical column separated by approximately 1.5º of visual angle (Figure 7). Participants were instructed to select which of the two photos they believed was more emotionally expressive (i.e., happier or angrier) using the keyboard (‘1’ to select the upper photo and ‘3’ to select the lower photo). They were permitted to take as long as they wanted to view the photos and make their decisions, as in Levy et al. (1983); however, response times were fairly short, with an average of 1.45s (SD = 0.55s) and a maximum response time of 3.76s.

Figure 7: Free view chimeric face task (Experiment 1) response screen.


Each participant completed 40 randomized trials with the valence (negative vs positive) and placement (left-emotive hemiface on top vs bottom) counterbalanced.

This counterbalancing ensured that a participant blindly selecting one photo would be equally likely to choose a left- or right-emotive face and thus show no bias at all.

A bias quotient was calculated for each participant based on which type of chimeric face they tended to see as more expressive, using a calculation created by Levy et al. (1983). The total number of selections in which the emotional half (happy or angry) of a chimeric face was in the participant’s left visual field (LVF; left of the center of the face) was subtracted from the number of right visual field (RVF) selections. This total was then divided by the total number of decisions. The formula can be expressed as follows (an example scoring table can be found in Table 2):

(NRVF Emotive – NLVF Emotive) / (NRVF Emotive + NLVF Emotive)

According to this scoring scheme, a score of -1 represented the strongest possible left visual field bias (i.e., the participant believed all faces with an emotive left side were more expressive) and a score of +1 the strongest possible right visual field bias (i.e., the participant believed all faces with an emotive right side were more expressive).

An alternative interpretation of the bias quotient is a signed difference score between the proportion of LVF emotive selections and RVF emotive selections.
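The scoring can be illustrated with a minimal sketch (the function name and example trial counts are hypothetical, not part of the original materials):

```python
def bias_quotient(n_rvf_emotive, n_lvf_emotive):
    """Levy et al. (1983) bias quotient.

    -1 = strongest possible left visual field bias,
    +1 = strongest possible right visual field bias.
    """
    return (n_rvf_emotive - n_lvf_emotive) / (n_rvf_emotive + n_lvf_emotive)

# e.g., a participant choosing the left-emotive face on 29 of 40 trials:
print(bias_quotient(11, 29))  # -0.45, a moderate left visual field bias
```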


Table 2

Bias quotient scoring example


Results

Internal consistency of the free-view chimeric face task was assessed using Cronbach’s alpha. For this, each particular variation of stimulus pairs was treated as an item (e.g., the variation presented as trial 1 in Table 2 was treated as one item and the reversed presentation was treated as a second item). This was independent of the trial in which they appeared, as the order was randomized. Each trial in which participants showed a left visual field bias was treated as a “correct” response. The overall reliability of the task was high (α = .91).
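A minimal sketch of the item-level alpha computation, treating each stimulus-pair variation as a binary item (the function and data layout are illustrative assumptions; the thesis does not specify the software used):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of items, each a list of scores
    (one score per participant, e.g. 1 = left visual field choice)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant totals
    item_var_sum = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
```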

Overall, faces in which the emotive side landed in the participants’ left visual field were selected as more emotive 64.6% of the time. This resulted in an average left visual field bias of -0.29 (SD = 0.44). A one-sample t-test found this to be a significant bias among the sample (t(89) = -6.25, p < .001, r = .55). This bias was similar to the biases found among right-handers in Levy et al.'s (1983) initial study, in which the researchers found an average bias of -.303 (t(111) = 7.26, r = .57).
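The reported effect size can be recovered from the test statistic; the sketch below assumes the standard t-to-r conversion was used (the text does not state this explicitly, and the helper name is hypothetical):

```python
import math

def r_from_t(t, df):
    """Convert a t statistic to the effect size r via r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t**2 / (t**2 + df))

print(round(r_from_t(-6.25, 89), 2))  # 0.55, matching the reported effect size
```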

In order to test for the potential effects of emotional valence and location of the photos, a 2 (valence: positive vs negative) x 2 (location: above vs below fixation) repeated measures analysis of variance (ANOVA) was completed. The intercept of the model suggested an overall bias significantly different from zero (F(1,89) = 39.02, p < .001, r = .55). As this compared the grand mean to zero, it was identical in size to the effect calculated in the t-test analysis reported above. With respect to emotional valence, there was no significant main effect on the bias differences between positively and negatively valenced chimeric faces (F(1,89) = 0.04, p = .85).

An unanticipated methodological effect was observed. There was a significant main effect of chimeric face location (i.e., top versus bottom) on the participants’ biases (F(1,89) = 16.21, p < .001, r = .39). When the upper chimeric face had an emotional left side (from the participant’s perspective), the average bias was -0.38 (SD = 0.47). However, if this left-emotive face was placed in the lower position, the average bias was -0.20 (SD = 0.46). No significant interaction between valence and placement of the faces existed (F(1,89) = 3.12, p = 0.08). See Figure 8 for a plot of visual field biases.


Figure 8: Free-viewing chimeric bias quotient by order and valence

Note: Orientation rotated so that leftward biases appear to the left of center. Negative valence (Neg) includes angry/neutral chimeric faces. Positive valence (Pos) includes happy/neutral chimeric faces. Error bars display 1 standard error.


Discussion

As in previous studies (Ashwin, Wheelwright, & Baron-Cohen, 2005; Bourne, 2005, 2008, 2010; Bourne & Gray, 2011; Campbell, 1978; Failla, Sheppard, & Bradshaw, 2003; Gupta & Pandey, 2010; Heller & Levy, 1981; Hoptman & Levy, 1988; Innes et al., 2016; Jaeger et al., 1987; Levy et al., 1983; Ley & Bryden, 1979; Luh, 1998; Moreno, Borod, Welkowitz, & Alpert, 1990; Wedding & Cyrus, 1986), we found a leftward visual bias when perceiving emotional faces. This was independent of emotional valence, as there was no difference between biases for happy/neutral and angry/neutral chimeric faces. This lends support to a right-hemisphere specialization for emotional processing, as this hemisphere is responsible for processing information received from the left visual field.

It is important to note that the free-viewing chimeric face task relies on the assumption that participants will focus solely on the center of each face without any scanning. Any deviation from this assumption would lead to a lack of separation between visual fields and allow for an attentional explanation of the bias. With an average response time of 1.45s, participants had plenty of time to scan the images.

Experiment 2 attempted to correct this experimental design weakness through the use of a tachistoscopic version of the chimeric face task. This would ensure the chimeric faces were presented for a period too brief for visual scanning.


Experiment 2: Sequential Chimeric Face Task

Method

Sequential chimeric face task. An additional set of chimeric faces was created using the same process as in Experiment 1 with ten new white males. This resulted in 80 (40 happy) new chimeric faces. In order to ensure the stimuli spanned both visual fields and did not have a large amount of information in the fovea, the photos were edited to a size of 680 x 975 pixels (approximately 20º of visual angle across).

During this version of the task, each chimeric face was paired with its mirrored counterpart and the two were presented one at a time in the center of the screen. Each presentation was preceded by a color-changing fixation point for 1200ms in order to center the participant’s gaze on the center of the screen. Each chimeric stimulus was shown for only 180ms to ensure that participants did not have time to make any saccadic movements during the presentation. A backward mask of Gaussian noise followed each stimulus for 300ms in order to eliminate any afterimage. As in Experiment 1, participants were asked to select the photo they believed to be more emotionally expressive using the keyboard (‘1’ to select the first photo and ‘3’ to select the second photo) after viewing each chimeric pair (see Figure 9 for the stimulus array). Valence of emotions (negative vs positive) and order of presentation (left-emotive face first vs second) were counterbalanced across 80 trials. A bias quotient was calculated using the same method as in Experiment 1, with a bias of -1 signifying a complete left visual field bias and +1 a complete right visual field bias.


Figure 9. Stimulus array for the sequential chimeric face task (Experiment 2).


Fixation-check. A simple fixation-check was created for this study in order to ensure participants were maintaining attention on the fixation point before the appearance of any stimuli. A color-changing fixation appeared for 1200ms before each trial. This fixation appeared as a black fixation dot for 600ms and then changed to a randomly selected primary visual color (i.e., red, blue, or green) for an additional 600ms. Participants would then be asked to simply identify the color of the fixation that appeared. Failure to correctly respond to these checks signified an attentional lapse or a saccade away from the fixation before the stimulus appeared. Eleven of these fixation check trials (approximately 12% of trials) randomly appeared in place of a chimeric trial.

An accuracy of at least 90% was used as the cutoff for inclusion in analyses. Seven participants failed to meet this cutoff for the fixation check and were excluded from analyses, leaving a total of 83 participants.

Results

Similar to Experiment 1, there was an overall left visual field bias. Chimeric photos with a left-emotive face were selected 57.5% of the time, resulting in a significant bias quotient of -0.15 (F(1,82) = 15.06, p < .001, r = .39). A paired-sample t-test showed this bias to be significantly smaller (t(82) = 4.39, p < .001, r = .44) than the bias found in Experiment 1.

Internal consistency was assessed using the same methods as described in Experiment 1. This sequential version of the chimeric face task was also found to be very reliable (α = .88). There was also a high degree of reliability in participant biases between both forms of the task. The biases in Experiment 2 correlated to a high degree with those found in Experiment 1 (r(81) = .68, p < .001).

A 2 (valence: positive vs negative) x 2 (order: left emotive face first vs second) repeated measures ANOVA was conducted on participants’ visual field bias quotients. As in Experiment 1, there was no main effect of emotional valence on participants’ bias quotients (F(1,82) = 0.49, p = .49). There was, however, a small main effect of order on the overall biases (F(1,82) = 4.10, p < .05, r = .22) such that a stronger left visual field bias was observed when the first photo was a left-emotive chimeric face (M = -0.19, SD = 0.46) than when the second chimeric display had a left-emotive face (M = -0.07, SD = 0.38).

An unexpected interaction between valence and order was observed (F(1,82) = 14.89, p < .001, r = .39). For negatively valenced emotions there was little difference between first (M = -0.15, SD = 0.45) and second (M = -0.13, SD = 0.39) presentations. However, positively valenced emotions showed a strong primacy bias (M = -0.23, SD = 0.54) and no bias when the left-emotive chimeric face appeared second (M = 0.00, SD = 0.44). In other words, when the first face presented had a smile expressed on the left side (as compared to a neutral left side), it was likely to be selected as more expressive. However, when the left-side smile was the second picture to appear, there was no bias, as it was equally likely to be considered the more or less expressive photo.


Discussion

The tachistoscopic presentation of chimeric faces in Experiment 2 allowed for a more controlled separation of visual fields, as it stopped participants from scanning the stimuli. This was useful in ensuring that a response bias was truly due to visual field differences and, by extension, hemispheric differences. As in Experiment 1, there was an overall leftward visual field bias when assessing the emotional intensity of chimeric faces. The bias was smaller in magnitude than in Experiment 1, but it was consistent across both positively and negatively valenced stimuli. This, again, lent support to the theorized right hemisphere specialization in processing emotion.

Experiment 3: Divided Visual Field

Despite the consistent results of Experiments 1 and 2, it was important to ensure that the leftward biases found in these chimeric face tasks were not specific to this one methodology. The chimeric face task relies on the use of edited images that may activate atypical processing due to their visible midline (Burt & Perrett, 1997). There is also the debated issue that information in the very center of one’s visual field projects bilaterally (Bourne, 2006; Ellis & Brysbaert, 2010; Jordan, Patching, & Milner, 2000). This could mean that central information being used in judging the chimeric faces is not projecting to separate hemispheres, and thus lateralization could not be assessed. We attempted to address both of these issues by smoothing the midline between the faces and presenting them at a larger size to minimize information in the fovea; however, the faces still appear unnatural and we cannot be certain the overlapping information is not being used. For this reason, another test of visual field bias was completed.

The divided visual field paradigm briefly presents stimuli outside of the center of the visual field to one or both visual fields. This alleviates the potential overlapping visual field issue and allows for the use of natural face photos rather than chimeric ones. It has also been found to be quite dependable in assessing heavily researched areas like verbal abilities. Research using a divided visual field paradigm with verbal stimuli consistently finds an increase in performance when words are presented to the right visual field (Barca et al., 2011; Geffen et al., 1971; Hagenbeek & Van Strien, 2002; Jordan et al., 2000, 2008; Mishkin & Forgays, 1952; Orbach, 1967; Tremblay et al., 2007). This is consistent with the left hemisphere lateralization of verbal abilities.

Experiment 3 used the divided visual field paradigm to examine two tasks: (a) a word recognition task and (b) an emotional face perception task. A right visual field bias was predicted for the word recognition task, whereas a left visual field bias was predicted for the emotional face perception task2.

2 The verbal task was presented separately and after the emotional face task. This was done to avoid the risk of the well-established verbal processing effect contaminating the assessment of the emotional face processing task.


Method

Word recognition task. The word recognition task consisted of 20 sets of four- and five-letter words selected from a database of the 5,000 most common English words (Davies & Gardner, 2010). Each set of words consisted of non-emotional nouns that were grouped to appear similar at first glance (e.g., the first letter of each word and the relative heights of the center letters were kept as consistent as possible). For example, a set might contain: desk, dark, deck, and duck (the full list of word sets can be found in Appendix A). This minimized any potential emotional activation from the words and increased the difficulty of the task so that participants were not simply noticing uniquely shaped letters (e.g., a word containing an ‘l’ among a set of low letters such as ‘a’, ‘c’, ‘s’, or ’o’).

Two of the four words in each set were selected as the “target” and “lure” stimuli. These were presented in size 72 black font, to both visual fields, approximately 4.5º from the participants’ central fixation, for only a brief (180ms) duration. As in Experiment 2, a color-changing fixation preceded these stimuli for 1200ms in order to center participants’ fixation. After the presentation of the initial stimuli, 300ms masks appeared in the areas where the words had previously appeared.

Next, a set of three words appeared, consisting of the previously presented target word as well as two previously unseen “distractor” words from the set. Participants were asked to select which word from the set of three, if any, had previously appeared, using the ‘1’, ‘2’, ‘3’, and ‘4’ keys. The last key referred to an option for “None of the above”; however, in every case the correct answer was present in the set of three. A full stimulus array is presented in Figure 10.

One round of trials was completed for each of the 20 word sets. A second round of trials was created using the “distractor” stimuli from the first round as the “target” and “lure” stimuli and vice versa. This resulted in a total of 40 trials from the 20 sets of words. Trials were counterbalanced so the target and lure words appeared to the left and right of the central fixation an equal number of times. Each trial in which the participant correctly recognized a previously presented word was considered a “hit.” From these, separate accuracy scores were calculated for target words that appeared to the left or right of the central fixation (i.e., words appearing in the left or right visual fields).
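The per-field scoring described above can be sketched as follows (the trial representation and function name are hypothetical, for illustration only):

```python
def accuracy_by_field(trials):
    """Percentage of 'hits' for targets in each visual field.

    trials: list of (field, hit) pairs, where field is "LVF" or "RVF"
    and hit is 1 for a correct recognition, 0 otherwise.
    """
    return {
        field: 100 * sum(hit for f, hit in trials if f == field)
               / sum(1 for f, _ in trials if f == field)
        for field in ("LVF", "RVF")
    }
```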


Figure 10. Stimulus array for the divided visual field word recognition task (Experiment 3).

Emotion recognition task. For this task, frontal headshots of 32 models were selected from the Chicago Face Database (Ma et al., 2015). In order to increase the generalizability of the stimuli, we included 16 males and 16 females with an equal proportion of white and black models. Selections were made based upon the range of emotional expressions captured for each model. It was necessary for each model to have a set of photographs in which they displayed open-mouthed happy, fearful, angry, and neutral expressions. For the white males, any models that were used for the chimeric face tasks were excluded. Each photo was reduced to a size of 476 x 515 pixels.

The emotion recognition task was similar in design to the word recognition task but with emotional faces rather than words. Using each model’s set of four expressions, a pair of “target” and “lure” stimuli was selected. This pair appeared simultaneously for 180ms in the participants’ left and right visual fields. The inner edge of each photo was approximately 2.75º of visual angle to either side of the fixation point. However, as the photos had some white space around the faces, the effective distance of emotional information was approximately 5º of visual angle to either side of the central fixation. The same 1200ms color-changing fixation and 300ms masks used in the word recognition task accompanied these stimuli. After the mask, three emotion photos appeared. These consisted of the same model expressing two unused emotions as distractors and the previously presented target stimulus. Similar to the word recognition task, participants were asked to report which emotional photo, if any, they had previously seen, using the ‘1’, ‘2’, ‘3’, and ‘4’ keys on the keyboard. Again, the option of “None of the above” was included but never correct.

As with the word recognition task, the two distractor photos from each trial were used to create an additional set of target stimuli, for a total of 64 emotion recognition trials (see Figure 11 for the full stimulus array). The visual field and emotion contained in the “target” stimuli were counterbalanced. Recognition accuracy scores were calculated separately for targets that appeared on the left and those that appeared on the right, as well as for each target emotion (happy, angry, fearful, and neutral).

A counterbalancing issue appeared during the analysis of the emotion recognition task.3 As a result, only 30 of the original 64 trials could be analyzed. Half of the trials contained targets that appeared in the left visual field and half in the right, with an equal occurrence of emotional (i.e., happy, fearful, and angry) stimuli.

3 Although anger and fear emotions were presented as target stimuli an equal number of times in the left and right visual fields, neutral targets only appeared in the left visual field and happy targets appeared more often in the right visual field (16 times vs 5 times). The lack of neutral targets appearing in the right visual field had the potential to skew results in the analyses, so these trials were treated as filler and not analyzed. For the remaining emotions (happy, angry, fearful) we decided to balance the design using the lowest common number. As happy targets only appeared on the left five times, this was used as the cutoff. For the remaining emotions and visual fields, only the first five instances were used in the recognition accuracy calculations. Subsequent trials for each emotion were dropped. An exploratory analysis was completed using all trials other than the neutral trials. This left 52 trials total (20 with target stimuli in the left visual field), with happy targets appearing three times more often in the right visual field. Accuracy scores for each emotion on each side were computed as normal. Results indicated the same main effects and interaction effects as those reported.


Figure 11. Stimulus array for the divided visual field emotion recognition task (Experiment 3).

Fixation-check. The same fixation check used in Experiment 2 was used here. There were five checks completed during the word recognition task and nine during the emotion recognition task. In order to have a balanced comparison group, only participants who achieved greater than 90% accuracy during both tasks were included in the analyses. This inclusion criterion proved difficult for participants.

After excluding participants who did not achieve the cutoff score in both tasks, only 57 participants remained. Assuming the same large effect sizes of bias found in the chimeric tasks of Experiments 1 and 2 (r = .55 and r = .39, respectively), the analyses were still robustly powered with 57 participants (1 – β = 1).

Results

The nature of the divided visual field paradigm allowed for the calculation of separate recognition scores for stimuli presented to the left and right visual fields. These scores were based upon the percentage of trials in which a target stimulus was correctly recognized as a previously presented stimulus. Using these recognition accuracy scores, a 2x2 (task: word recognition vs emotion recognition; visual field: left vs right) repeated measures ANOVA was conducted. A participant’s bias in this case would manifest as a significant difference between recognition accuracy in the left and right visual fields, with greater accuracy signifying a bias towards that side. In this particular case, emotion recognition accuracy should have been higher for faces presented to the left visual field and word recognition accuracy should have been higher for stimuli presented to the right visual field.


The overall accuracy across both the word and emotion recognition tasks was 60.1% (SD = 9.41%). There was a significant (F(1,56) = 5.75, p < .05, r = .31) difference in overall accuracy between the word recognition (M = 57.81%) and emotion recognition tasks (M = 62.34%). Both tasks showed an increase in recognition performance for targets in the right visual field (see Figure 12 for a display of means). Although this was expected for the word recognition task, it was unexpected for the emotion recognition task.

Figure 12: Divided visual field task accuracy between visual fields for emotional face and word recognition tasks. Note: Error bars display 1 standard error.


A right visual field bias was significant across both tasks (F(1,56) = 24.52, p < .001, r = .55). The average accuracy for targets appearing in the right visual field was 68.73% (SD = 13.9%), significantly higher than the accuracy for targets appearing in the left visual field (51.42%, SD = 18.22%). As expected, the interaction between task and visual field proved to be significant as well: the right visual field bias was significantly more pronounced for verbal processing than for emotional face processing, for which we had expected a left visual field bias (F(1,56) = 13.63, p < .001, r = .44).

Given the unexpected bias direction for emotional perception, a second analysis was conducted on the data from only the emotion recognition task. A 2x3 (visual field: left vs right; emotion: happy, angry, fearful) repeated measures ANOVA examined the possible effects of emotion type on recognition accuracy in each visual field. Contrary to the bias observed in Experiments 1 and 2, accuracy for targets in the left visual field (M = 57.66%, SD = 20.1%) was lower than the accuracy in the right visual field (M = 67.02%, SD = 17.03%). The main effect of visual field was significant (F(1,56) = 5.27, p < .05, r = .29).

A main effect of emotion was found (F(2,112) = 33.87, p < .001, η²G = 0.14) involving the three emotions (i.e., anger, happiness, and fear). Participants were most accurate in recognizing fear (M = 76.49%, SD = 16.31%) and least accurate recognizing happiness (M = 51.05%, SD = 17.8%), with anger falling in the middle (M = 59.47%, SD = 18.26%). Despite this difference in overall accuracy between emotions, the right visual field bias was consistent. The interaction between visual field and emotion was not significant (F(2,112) = 0.84, p = .44).

Discussion

The left hemisphere lateralization of verbal processing, which results in a right visual field bias, is a stable, well-supported effect. This expected effect was found in our divided visual field task for non-emotive words. Its consistency in our sample confirmed the validity of the methods and procedures employed. However, the results for the emotion recognition task were completely unexpected. The participants who had just displayed a left visual field bias when processing emotional faces in the chimeric face tasks of Experiments 1 and 2 instead displayed a right visual field bias. This would suggest lateralization to opposite hemispheres.

A potential explanation for this unexpected difference appeared upon consideration of the tasks themselves. In the chimeric face tasks, participants were asked to assess the intensity of emotional faces, yet the divided visual field study had participants recognize a previously shown emotion. It is possible that this intensity judgment is a simple automatic process used to assess the dimension of an emotion, whereas recognizing an emotion may be a more sophisticated process that necessitates the use of verbal labeling and was thus lateralized to the left hemisphere.

Although both tasks necessitate processing emotional information from a face, it is possible that the two tasks are fundamentally distinct.


General Discussion

Previous research regarding the lateralization of emotional face perception has generated questionable conclusions about right hemisphere specialization and the relevance of emotional valence (see Borod, Koff, Yecker, Santschi, & Schmidt, 1998; Mandal & Ambady, 2004 for reviews). Although it is possible to directly measure lateralization, visual field bias has long been used successfully as a proxy measure in split-brain (Gazzaniga & Sperry, 1967; Gazzaniga, 1967; Levy et al., 1972) and normal participants (Campbell, 1978; Gilbert & Bakan, 1973; Mishkin & Forgays, 1952). This is a theoretically sound methodology but introduces potential artifacts which may contribute to the inconsistent conclusions of past studies on emotional perception lateralization.

Chimeric Tests of Lateralization

The primary purposes of this study were to assess the validity of the uncontrolled free-view chimeric face task and to determine whether the valence or right hemisphere hypothesis better explained the lateralization of emotional perception. A major concern with this free-viewing task was the lack of control over the participants’ scanning, as this had the potential to break the separation between visual fields and, ultimately, hemispheres. Experiment 2 presented the chimeric face task in a controlled form in which participants’ focus was centered and they did not have the ability to visually scan. When this controlled version was compared to the uncontrolled free-view version of the chimeric face task, participants’ biases correlated to a high degree (r = .68, p < .001). This suggested that even the uncontrolled free-view chimeric face task could be an adequate measure.

The fact that the chimeric tasks strongly correlated, and that both included positively valenced (happy) and negatively valenced (angry) chimeric faces, allowed us to confidently assess the lateralization of emotional perception. As hypothesized, both tasks uncovered an overall left visual field bias comparable in magnitude to other studies employing these methods (Ashwin et al., 2005; Bourne, 2005; Campbell, 1978; Innes et al., 2016; Levy et al., 1983). There was no trend towards a right visual field bias for positively valenced emotions, as would be expected under the valence hypothesis. Furthermore, there was no difference in the magnitude of the biases between valences. Given these results, the valence hypothesis does not seem plausible. Instead, these data suggest the perception of emotion is lateralized to the right hemisphere.

It was possible that the leftward biases found were due not to right hemisphere lateralization but to some other attentional or response bias. This alternative explanation could not be addressed with the chimeric face tasks alone. For this reason, Experiment 3 included a word recognition task to confirm the well-established right visual field bias for processing verbal stimuli before employing that methodology to assess the visual field bias with emotional stimuli. As hypothesized, participants had better recognition accuracy for words that appeared in their right


visual fields. This helped rule out the attentional explanation for the leftward biases observed in the chimeric face tasks.

Order Effects in the Chimeric Face Task

The design and analysis of the chimeric tests of Experiments 1 and 2 allowed for the testing of location and order effects when judging emotional intensity. In

Experiment 1, both chimeric faces were presented simultaneously in a vertical column. We found a significant effect of image location, with stronger visual field biases appearing for the chimeric face in the upper location. Although this directly tested location, the result may be better described as an order effect, as people from many different cultures have a strong tendency to scan information from top to bottom (Abed, 1991). With this in mind, it could be argued that the increased bias for stimuli in the upper location is a primacy effect. However, this is another area in which the free-view chimeric task introduced potential artifacts: there was no guarantee that participants first viewed the top photo, nor was there a way to control the number of times they looked back and forth between the photos. Because Experiment 2 presented stimuli sequentially, one after the other, it was better able to address this issue.

Experiment 2 also found a significant primacy effect, albeit with an interaction. The presentation order (i.e., left-emotive chimeric face presented before or after its mirrored counterpart) did not matter for negatively valenced emotions.

However, positive emotions generated a large primacy effect. The left visual field bias was prominent when participants were presented with a left-happy face followed


by a left-neutral face, but it disappeared completely if the left-happy face appeared after the left-neutral one. If this decrease in perceived positive intensity is interpreted as a shift toward negative emotion, then these effects may reflect a heightened sensitivity to negatively valenced information.

Such a pattern of results could be explained by the attention capturing power of negative emotional information (Eastwood, Smilek, & Merikle, 2001, 2003).

These studies found that participants were very quick to identify negative emotional information embedded within other stimuli. However, this attention capture became a distraction when the task was not related to the negative information. When angry chimeric faces were presented, this could mean that any anger in the left visual field stood out, regardless of order. With the happy chimeric faces, however, a neutral left hemiface may have been perceived as more negative, holding attention when presented first and distracting the participant from the happy face that followed.

In face-to-face interactions, this sensitivity to negative emotions may have interesting implications. Emotional expressions are not static in real contexts; rather, they are dynamic and constantly shifting. There is no going back to consider the expression a partner had a half second ago, as there is when viewing pictures. In a dynamic context, this sensitivity to negative emotions could mean we are more likely to notice a negative expression come or go and to notice a smile fade, but less likely to notice a quick smirk appearing before us (increasing emotional valence).


Even with longer-lasting expressions, Experiment 3 showed that we are much more accurate in recognizing negative emotions (e.g., fear and anger) than positive ones (e.g., happiness), so our attention will consistently be captured when another person displays them, which could be for the better.

Differential Lateralization in Emotion Perception

Results of the emotional recognition portion of Experiment 3 stood in surprising contrast to the chimeric face tasks. The divided visual field results indicated that emotion recognition performance was better for faces in the right visual field (i.e., lateralized to the left hemisphere). At first glance, this appeared to add more inconsistencies to the subject of emotional perception lateralization. However, a closer evaluation of the participants' tasks in the chimeric and divided visual field experiments uncovered a major difference. The chimeric face task required participants to assess the intensity of an emotion, whereas the divided visual field task required the recognition of a previously shown emotion. It is possible that this latter process, while still "emotional perception," is fundamentally different.

The difference between the processes involved in recognition and recall and those involved in estimating intensity could help explain the discrepancies that appear between measures. For example, the MSCEIT

(Mayer, Salovey, & Caruso, 2002) has a subscale in which participants are asked to make Likert scale judgments regarding the emotional content of a face. However, abilities to accurately perform this task do not correlate with scores on an emotional


recognition task (Bernieri & Brown, 2019; Mayer, Salovey, & Caruso, 2012). In other words, the ability to assess how "sad" a face is does not necessarily make a person any better at recognizing an emotion as "sadness" as opposed to "fear." If judging intensity and recognizing emotions are truly different processes, then it is essential for emotion and nonverbal researchers to reassess commonly used measures of emotional intelligence and determine what they are actually measuring.

A potential explanation for the difference between emotional intensity and recognition abilities is that emotion recognition requires verbal abilities to label the emotion. This idea has been proposed in the past to explain variation in abilities across cultures and has been supported in other contexts (Roberson, Damjanovic, &

Kikutani, 2010). For example, it has been found that emotion recognition abilities are compromised if verbal processing is overwhelmed (Lindquist, Barrett, Bliss-Moreau,

& Russell, 2006). Additionally, this could explain the moderating effect of verbal intelligence on emotion recognition abilities that has been previously observed

(Montebarocci, Surcinelli, Rossi, & Baldaro, 2011) and the reason why a factor analysis found an emotion recognition task more closely tied to general intelligence measures than to emotional intelligence measures like the MSCEIT (Roberts et al.,

2006). Until these distinctions are further clarified, researchers must remain mindful of the difference between what they are trying to assess in their studies and what they are actually measuring.


Limitations

There are many important factors that must be considered when designing experiments such as the ones presented here. It may seem overly cautious at first, but it is important to remember that these experiments attempt to assess a neurological phenomenon through behavioral measures. These methods are far cheaper and more accessible than alternatives like neuroimaging, but even a small amount of error in an indirect measure can lead to a large miscalculation of the construct of interest. Researchers should therefore attempt to reduce every source of error within their control.

The visual control and presentation time could be improved upon. The sequential chimeric task and the divided visual field tasks both used a presentation time of 180 ms. Although this was below the reported 200 ms average time necessary to react and make a saccade (Fischer & Boch, 1983), some evidence suggests that this limit may be too long (Gezeck, Fischer, & Timmer, 1997). Additionally, participants may have gotten a general feel for the length of the fixation points and begun moving their eyes after the colored fixation point appeared but before the presentation of stimuli. This would still allow them to pass the fixation checks and be included in the analysis. Future studies using these methods should consider shortening the presentation time and using a variable, rather than fixed, duration for the fixation point. Alternatively, an eye-tracker could be used to flag trials in which saccades were made or fixation was not centered before the stimulus presentation. This could


also help with the sizeable loss of participants in Experiment 3, as trials could simply be withheld until the gaze is centered.
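The two suggested fixes, a jittered fixation duration and an eye-tracker gate on trial onset, can be sketched as below. This is a minimal illustration; the function names, duration range, and pixel tolerance are assumptions for the example, not values from the reported experiments.

```python
import random

def jittered_fixation_ms(min_ms=500, max_ms=1200):
    """Draw a variable fixation duration so participants cannot
    learn a fixed interval and launch anticipatory saccades."""
    return random.uniform(min_ms, max_ms)

def gaze_is_centered(gaze_xy, center_xy=(0.0, 0.0), tolerance_px=50.0):
    """Eye-tracker gate: the trial starts only once gaze falls
    within tolerance_px of the central fixation point."""
    dx = gaze_xy[0] - center_xy[0]
    dy = gaze_xy[1] - center_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance_px
```

In a trial loop, the stimulus would be presented only after `gaze_is_centered` returns true and the jittered fixation interval has elapsed, so anticipatory eye movements never coincide with stimulus onset.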

Another potential issue stemmed from differences in the attention-capturing power of the stimuli. Higher luminance has been found to capture a greater degree of spatial attention (Johannes, Münte, Heinze, & Mangun, 1995), so it has been recommended as a factor to control in the creation of divided visual field stimuli

(Bourne, 2006). Within the tasks, the luminance between the emotional and neutral hemifaces and between emotions was not controlled. It is possible that different emotions have varying levels of luminance due to factors such as the presence of teeth. This is especially important for the divided visual field emotion task, as it had neither the mirrored comparison of the chimeric face tasks nor careful counterbalancing between emotion and visual field. Although it is not clear how large a luminance difference must be to capture covert attention, it is nonetheless a factor that is fairly simple to control in editing software and should be considered.
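As a rough illustration of how simple such a control is, the mean luminance of two grayscale stimuli can be matched with a linear shift. The sketch below operates on flattened lists of 0-255 gray values and is only illustrative; real stimuli would be equated in image-editing software as the text suggests.

```python
def mean_luminance(pixels):
    """Average gray value (0-255) of a flattened grayscale image."""
    return sum(pixels) / len(pixels)

def match_mean_luminance(pixels, target_mean):
    """Shift every pixel so the image's mean luminance equals
    target_mean, clipping to the displayable 0-255 range."""
    shift = target_mean - mean_luminance(pixels)
    return [min(255.0, max(0.0, p + shift)) for p in pixels]
```

Applying the same target mean to every hemiface would remove gross luminance differences (e.g., from visible teeth) as a confound, though clipping at the range limits means extreme images can only be approximately matched.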

Yet another issue is that paradigm type may have been confounded with the behavioral task between the chimeric and divided visual field experiments. To better assess the conclusions regarding differences in processing emotional intensity and recognizing emotions, alternate tasks should be paired with each paradigm. For example, chimeric faces mixing two different emotions, as opposed to an emotional and a neutral hemiface, could be created. The task would then ask participants to


report the emotion they were presented with. Conversely, a divided visual field experiment could present an emotional face and a neutral face in alternating visual fields and task participants with simply reporting whether the left or right face was more emotional. This would verify the differences between these two potentially distinct processes regardless of the paradigm used.

A final issue concerns the generalizability of the empirical work in this area.

Most research involving cerebral lateralization excludes non-right handed participants as they often display atypical results (Bryden, 1965; Hoptman & Levy, 1988; Pujol et al., 1999; Szaflarski et al., 2002; Tremblay et al., 2007). This non-right handed population makes up approximately 10% of the overall population (Hardyck &

Petrinovich, 1977). This may seem like a small percentage, but it leaves a large gap in the literature, as results cannot be generalized to this group.

Future Directions

Future research in emotional perception lateralization can follow multiple pathways. The first involves studying groups with atypical lateralization. A natural starting point for this would be to assess a larger sample of non-right handed participants as we know so little about this population. Non-right handed groups have been found to have much more variance in lateralization in studies of verbal processing (Bryden, 1965; Pujol, Deus, Losilla, & Capdevila, 1999; Szaflarski et al.,

2002; Tremblay et al., 2007), facial expression (Borod, Caron, & Koff, 1981; Borod et al., 1998; Heller & Levy, 1981; Keles, Diyarbakirli, Tan, & Tan, 1997; White,


1969), and facial perception (Bryden, 1965; Heller & Levy, 1981; Hoptman & Levy,

1988; Levy et al., 1983). However, this group makes up only about 10% of the general population (Hardyck & Petrinovich, 1977), making recruitment a challenge.

Additionally, it is possible that handedness is not the best predictor of atypical lateralization. For example, some researchers have suggested other measures, such as footedness or eye dominance, as better predictors (Borod et al., 1981; Elias et al., 1998). In any case, a step towards a better understanding of lateralization is to understand the characteristics of those who are exceptions to the rule.

The different biases found for emotional perception between the chimeric and divided visual field tasks were an interesting surprise. We hypothesized that these results reflect two separate processes with differential lateralization (i.e., intensity judgments and emotion recognition) involved in the perception of emotion.

However, these results could also have been caused by differences in methodology. For this reason, additional research is needed to methodically determine that the differences found were not simply an artifact of task type. A simple approach would be to choose one methodology (e.g., chimeric face or divided visual field) and modify it to assess both intensity and recognition judgments.

If perceiving emotional intensity is, in fact, a separate process from emotion recognition, then the potential link between recognition and verbal abilities could be


further explored. Additional experiments could be designed to interrupt verbal processing during emotion perception to see whether doing so significantly inhibits one's ability to recognize and categorize an emotional expression. If a link were found, it could lend additional support to the Emotional Sapir-Whorf hypothesis first modeled by

Perlovsky (2009). Furthermore, there could be interesting implications regarding our ability to properly assess another’s reactions while actively engaging in a conversation.

Finally, it is our hope that this research will extend beyond photographs in the lab to uncover any potential implications for face-to-face interactions. For instance, a researcher may attempt to assess the relationship between one's perceptual bias (e.g., an extreme bias, lack of bias, or reversed bias) and overall emotion recognition ability. Research on emotional expression has found that our expressions are asymmetrical, with controlled emotions being expressed more intensely on the left half of the face (Borod, Haywood, & Koff, 1997). Therefore, the usual left visual field bias in perceiving emotions could mean we are not perceiving others' emotions to their full extent. This could then lead to misunderstandings and improper attributions of another's emotions, all stemming from a simple difference in visual field bias.


Conclusion

The existing literature regarding lateralization when perceiving emotional faces has been inconsistent and inconclusive. Whereas some studies concluded a right hemisphere dominance in perceiving emotions (i.e., the right hemisphere hypothesis), other studies have found right hemisphere dominance only for negatively valenced emotions and left hemisphere dominance for positive emotions (i.e., the valence hypothesis). The current study attempted to resolve these discrepancies and assess the validity of previous methods. To do so, we employed a commonly used task (i.e., the free-view emotional chimeric face task) as well as two additional, highly controlled tasks (i.e., tachistoscopic/sequential chimeric face tasks and divided visual field tasks) in an attempt to address multiple methodological concerns (e.g., lack of visual field control, attentional discrimination, the potential for visual field overlap, and participant handedness).

Both the free-view (Experiment 1) and sequential (Experiment 2) chimeric face tasks showed evidence for an overall right hemisphere lateralization when perceiving emotions. Additionally, results suggest the free-view chimeric face task provides a fair approximation of visual field bias in an efficient manner (i.e., without need for chin rests, timed presentations, or special software). However, this right hemisphere lateralization was only found in the context of judging the intensity of a perceived emotion.


Emotion recognition demonstrated evidence of a right visual field bias (i.e., left hemisphere lateralization) similar to that of a verbal recognition task (Experiment

3). In any case, the lack of an interaction between valence and visual field for either process shows that the valence hypothesis is implausible.

The set of experiments reported here suggests that "emotional perception" is likely a multifaceted process, and that judging an emotion's intensity and recognizing an emotion may be two of its facets. Future research in this area should further explore the differences between judgments of intensity and recognition. Another possibility is that the left hemisphere lateralization in recognizing emotions results from a dependence on verbal labeling to complete the task. Further exploration of these differences in lateralization may help clarify the distinctions and relationships between these processes. If nothing else, it brings us one step closer to a "perfect mapping of the psychological states on cerebral states."


Bibliography

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57, 1060–1073. doi:10.1037/0003- 066X.57.12.1060

Ashwin, C., Wheelwright, S., & Baron-Cohen, S. (2005). Laterality biases to chimeric faces in Asperger Syndrome: What is right about face-processing? Journal of Autism and Developmental Disorders, 35(2), 183–196. https://doi.org/10.1007/s10803-004-1997-3

Baijal, S., & Srinivasan, N. (2011). Emotional and hemispheric asymmetries in shifts of attention: An ERP study. Cognition & Emotion, 25(2), 280–294. https://doi.org/10.1080/02699931.2010.492719

Barca, L., Cornelissen, P., Simpson, M., Urooj, U., Woods, W., & Ellis, A. W. (2011). The neural basis of the right visual field advantage in reading: An MEG analysis using virtual electrodes. Brain and Language, 118(3), 53–71. https://doi.org/10.1016/j.bandl.2010.09.003

Berker, E. A., Berker, A. H., & Smith, A. (1986). Translation of Broca’s 1865 report: Localization of speech in the third left frontal convolution. Archives of Neurology, 43(10), 1065–1072. https://doi.org/10.1001/archneur.1986.00520100069017

Bernieri, F., & Brown, J. A. (2019). Unpublished data. Oregon State University, Corvallis, OR.

Borod, J. C., Caron, H. S., & Koff, E. (1981). Asymmetry of facial expression related to handedness, footedness, and eyedness: A quantitative study. Cortex, 17(3), 381–390.

Borod, J. C., Haywood, C. S., & Koff, E. (1997). Neuropsychological aspects of facial asymmetry during emotional expression: A review of the normal adult literature. Neuropsychology Review, 7(1), 41–60.

Borod, J. C., Koff, E., Yecker, S., Santschi, C., & Schmidt, J. M. (1998). Facial asymmetry during emotional expression: Gender, valence, and measurement technique. Neuropsychologia, 36(11), 1209–1215. https://doi.org/10.1016/S0028-3932(97)00166-8


Boucher, R., Bryden, M. P., & Roy, E. A. (1996). A construct approach to the assessment of handedness. Unpublished Manuscript.

Bourne, V. J. (2005). Lateralised processing of positive facial emotion: Sex differences in strength of hemispheric dominance. Neuropsychologia, 43(6), 953–956. https://doi.org/10.1016/j.neuropsychologia.2004.08.007

Bourne, V. J. (2006). The divided visual field paradigm: Methodological considerations. Laterality: Asymmetries of Body, Brain and Cognition, 11(4), 373–393. https://doi.org/10.1080/13576500600633982

Bourne, V. J. (2008). Chimeric faces, visual field bias, and reaction time bias: Have we been missing a trick? Laterality: Asymmetries of Body, Brain and Cognition, 13(1), 92–103. https://doi.org/10.1080/13576500701754315

Bourne, V. J. (2010). How are emotions lateralised in the brain? Contrasting existing hypotheses using the Chimeric Faces Test. Cognition & Emotion, 24(5), 903– 911. https://doi.org/10.1080/02699930903007714

Bourne, V. J., & Gray, D. L. (2011). One face or two? Contrasting different versions of the chimeric faces test. Laterality: Asymmetries of Body, Brain and Cognition, 16(5), 559–564. https://doi.org/10.1080/1357650X.2010.498119

Bryden, M. P. (1965). Tachistoscopic recognition, handedness, and cerebral dominance. Neuropsychologia, 3, 1–8.

Burt, D. M., & Perrett, D. I. (1997). Perceptual asymmetries in judgements of facial attractiveness, age, gender, speech, and expression. Neuropsychologia, 35(1), 685–693.

Campbell, R. (1978). Asymmetries in interpreting and expressing a posed facial expression. Cortex, 14(3), 327–342. https://doi.org/10.1016/S0010- 9452(78)80061-6

Davidson, R. J. (1992). Emotion and affective style: Hemispheric substrates. Psychological Science, 3(1), 39–43. https://doi.org/10.1111/j.1467- 9280.1992.tb00254.x


Davidson, R. J., Mednick, D., Moss, E., Saron, C., & Schaffer, C. E. (1987). Ratings of emotion in faces are influenced by the visual field to which stimuli are presented. Brain and Cognition, 6(4), 403–411. https://doi.org/10.1016/0278- 2626(87)90136-9

Dimberg, U., & Petterson, M. (2000). Facial reactions to happy and angry facial expressions: Evidence for right hemisphere dominance. Psychophysiology, 37(5), 693–696. https://doi.org/10.1111/1469-8986.3750693

Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63(6), 1004–1013. https://doi.org/10.3758/BF03194519

Eastwood, J. D., Smilek, D., & Merikle, P. M. (2003). Negative facial expression captures attention and disrupts performance. Perception & Psychophysics, 65(3), 352–358. https://doi.org/10.3758/BF03194566

Elias, L. J., Bryden, M. P., & Bulman-Fleming, M. B. (1998). Footedness is a better predictor than is handedness of emotional lateralization. Neuropsychologia, 36(1), 37–43. https://doi.org/10.1016/S0028-3932(97)00107-3

Ellis, A. W., & Brysbaert, M. (2010). Divided opinions on the split fovea. Neuropsychologia, 48(9), 2784–2785. https://doi.org/10.1016/j.neuropsychologia.2010.04.030

Failla, C. V., Sheppard, D. M., & Bradshaw, J. L. (2003). Age and responding-hand related changes in performance of neurologically normal subjects on the line- bisection and chimeric-faces tasks. Brain and Cognition, 52(3), 353–363. https://doi.org/10.1016/S0278-2626(03)00181-7

Abed, F. (1991). Cultural influences on visual scanning patterns. Journal of Cross-Cultural Psychology, 22(4), 525–534.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146

Feyereisen, P. (1991). Brain pathology, lateralization, and nonverbal behavior. In R. S. Feldman & B. Rimé (Eds.), Fundamentals of Nonverbal Behavior (pp. 32– 72). Cambridge: Cambridge University Press.


Fischer, B., & Boch, R. (1983). Saccadic eye movements after extremely short reaction times in the monkey. Brain Research, 260(1), 21–26. https://doi.org/10.1016/0006-8993(83)90760-6

Gazzaniga, M. S., & Sperry, R. W. (1967). Language after section of the cerebral commissures. Brain, 90(1), 131–148.

Gazzaniga, M. S. (1967). The split brain in man. Scientific American, 217(2), 24–29. https://doi.org/10.1038/scientificamerican0867-24

Geffen, G., Bradshaw, J. L., & Wallace, G. (1971). Interhemispheric effects on reaction time to verbal and nonverbal visual stimuli. Journal of Experimental Psychology, 87(3), 415–422. https://doi.org/10.1037/h0030525

Gezeck, S., Fischer, B., & Timmer, J. (1997). Saccadic reaction times: A statistical analysis of multimodal distributions. Vision Research, 37(15), 2119–2131. https://doi.org/10.1016/S0042-6989(97)00022-9

Gilbert, C., & Bakan, P. (1973). Visual asymmetry in perception of faces. Neuropsychologia, 11(3), 355–362.

Gupta, G., & Pandey, R. (2010). Assessment of hemispheric asymmetry: Development and psychometric evaluation of a chimeric face test. Industrial Psychiatry Journal, 19(1), 30–36. https://doi.org/10.4103/0972-6748.77632

Hagenbeek, R. E., & Van Strien, J. W. (2002). Left–right and upper–lower visual field asymmetries for face matching, letter naming, and lexical decision. Brain and Cognition, 49(1), 34–44. https://doi.org/10.1006/brcg.2001.1481

Hardyck, C., & Petrinovich, L. F. (1977). Left-handedness. Psychological Bulletin, 84(3), 385–404.

Heller, W., & Levy, J. (1981). Perception and expression of emotion in right-handers and left-handers. Neuropsychologia, 19(2), 263–272.

Hoptman, M. J., & Levy, J. (1988). Perceptual asymmetries in left-and right-handers for cartoon and real faces. Brain and Cognition, 8(2), 178–188.

Hugdahl, K. (2005). Symmetry and asymmetry in the human brain. European Review, 13(2), 119–133.


Innes, B. R., Burt, D. M., Birch, Y. K., & Hausmann, M. (2016). A leftward bias however you look at it: Revisiting the emotional chimeric face task as a tool for measuring emotion lateralization. Laterality: Asymmetries of Body, Brain and Cognition, 21(4–6), 643–661. https://doi.org/10.1080/1357650X.2015.1117095

Jaeger, J., Borod, J. C., & Peselow, E. D. (1987). Depressed patients have atypical hemispace biases in the perception of emotional chimeric faces. Journal of Abnormal Psychology, 96(4), 321–324. https://doi.org/10.1037/0021- 843X.96.4.321

Johannes, S., Münte, T. F., Heinze, H. J., & Mangun, G. R. (1995). Luminance and spatial attention effects on early visual processing. Cognitive Brain Research, 2(3), 189–205. https://doi.org/10.1016/0926-6410(95)90008-X

Jordan, T. R., Patching, G. R., & Milner, A. D. (2000). Lateralized word recognition: Assessing the role of hemispheric specialization, modes of lexical access, and perceptual asymmetry. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 1192–1208. https://doi.org/10.1037/0096- 1523.26.3.1192

Jordan, T. R., Paterson, K. B., & Stachurski, M. (2008). Re-evaluating split-fovea processing in word recognition: Effects of retinal eccentricity on hemispheric dominance. Neuropsychology, 22(6), 738–745. https://doi.org/10.1037/a0013140

Keles, P., Diyarbakirli, S., Tan, M., & Tan, U. (1997). Facial asymmetry in right-and- left-handed men and women. International Journal of Neuroscience, 91(3), 147–160.

Levy, J., Heller, W., Banich, M. T., & Burton, L. A. (1983). Asymmetry of perception in free viewing of chimeric faces. Brain and Cognition, 2(4), 404–419. https://doi.org/10.1016/0278-2626(83)90021-0

Levy, J., Trevarthen, C., & Sperry, R. W. (1972). Perception of bilateral chimeric figures following hemispheric deconnexion. Brain, 95(1), 61–78. https://doi.org/10.1093/brain/95.1.61

Ley, R. G., & Bryden, M. P. (1979). Hemispheric differences in processing emotions and faces. Brain and Language, 7(1), 127–138. https://doi.org/10.1016/0093- 934X(79)90010-5


Lindquist, K. A., Barrett, L. F., Bliss-Moreau, E., & Russell, J. A. (2006). Language and the perception of emotion. Emotion, 6(1), 125–138. https://doi.org/10.1037/1528-3542.6.1.125

Luh, K. E. (1998). Effect of inversion on perceptual biases for chimeric faces. Brain and Cognition, 37(1), 105–108.

Luh, K. E., Rueckert, L. M., & Levy, J. (1991). Perceptual asymmetries for free viewing of several types of chimeric stimuli. Brain and Cognition, 16(1), 83– 103.

Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122–1135. https://doi.org/10.3758/s13428-014-0532-5

Mandal, M. K., & Ambady, N. (2004). Laterality of facial expressions of emotion: Universal and culture-specific influences. Behavioural Neurology, 15(1, 2), 23–34.

Davies, M., & Gardner, D. (2010). Word frequency list of American English [Database]. Retrieved from Word Frequency Data website: http://www.wordfrequency.info/

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314–324. https://doi.org/10.3758/s13428-011-0168-7

Mayer, J. D., Salovey, P., & Caruso, D. R. (2002). Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT): User's Manual. Toronto, Canada: Multi-Health Systems.

Mayer, J. D., Salovey, P., & Caruso, D. R. (2012). The validity of the MSCEIT: Additional analyses and evidence. Emotion Review, 4(4), 403–408. https://doi.org/10.1177/1754073912445815

Miles, W. R. (1929). Ocular dominance demonstrated by unconscious sighting. Journal of Experimental Psychology, 12(2), 113–126. https://doi.org/10.1037/h0075694

Mishkin, M., & Forgays, D. G. (1952). Word recognition as a function of retinal locus. Journal of Experimental Psychology, 43(1), 43–48. https://doi.org/10.1037/h0061361


Montebarocci, O., Surcinelli, P., Rossi, N., & Baldaro, B. (2011). Alexithymia, verbal ability and emotion recognition. Psychiatric Quarterly, 82(3), 245–252. https://doi.org/10.1007/s11126-010-9166-7

Moreno, C. R., Borod, J. C., Welkowitz, J., & Alpert, M. (1990). Lateralization for the expression and perception of facial emotion as a function of age. Neuropsychologia, 28(2), 199–209.

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.

Orbach, J. (1967). Differential recognition of Hebrew and English words in right and left visual fields as a function of cerebral dominance and reading habits. Neuropsychologia, 5(2), 127–134. https://doi.org/10.1016/0028- 3932(67)90014-0

Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389– 443. https://doi.org/10.1093/brain/60.4.389

Perlovsky, L. I. (2009). Emotions, language, and Sapir-Whorf hypothesis. 2009 International Joint Conference on Neural Networks, 2501–2508. https://doi.org/10.1109/IJCNN.2009.5178891

Prete, G., Laeng, B., Fabri, M., Foschi, N., & Tommasi, L. (2015). Right hemisphere or valence hypothesis, or both? The processing of hybrid faces in the intact and callosotomized brain. Neuropsychologia, 68, 94–106. https://doi.org/10.1016/j.neuropsychologia.2015.01.002

Pujol, J., Deus, J., Losilla, J. M., & Capdevila, A. (1999). Cerebral lateralization of language in normal left-handed people studied by functional MRI. Neurology, 52(5), 1038. https://doi.org/10.1212/WNL.52.5.1038

Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the two cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19(4), 609–613. https://doi.org/10.1016/0028- 3932(81)90030-0

Roberson, D., Damjanovic, L., & Kikutani, M. (2010). Show and tell: The role of language in categorizing facial expression of emotion. Emotion Review, 2(3), 255–260. https://doi.org/10.1177/1754073910361979


Roberts, R. D., Schulze, R., O’Brien, K., MacCann, C., Reid, J., & Maul, A. (2006). Exploring the validity of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) with established emotions measures. Emotion, 6(4), 663–669. https://doi.org/10.1037/1528-3542.6.4.663

Snellen, H. (1862). Snellen eye chart [Measurement Instrument]. Retrieved from https://www.allaboutvision.com/eye-test/snellen-chart.pdf

Snowden, R. J., Thompson, P., & Troscianko, T. (2011). Basic Vision: An Introduction to Visual Perception (Rev. ed.). Oxford: Oxford University Press.

Szaflarski, J. P., Binder, J. R., Possing, E. T., McKiernan, K. A., Ward, B. D., & Hammeke, T. A. (2002). Language lateralization in left-handed and ambidextrous people. Neurology, 59(2), 238. https://doi.org/10.1212/WNL.59.2.238

Tremblay, T., Ansado, J., Walter, N., & Joanette, Y. (2007). Phonological and semantic processing of words: Laterality changes according to gender in right- and left-handers. Laterality: Asymmetries of Body, Brain and Cognition, 12(4), 332–346. https://doi.org/10.1080/13576500701307148

Wedding, D., & Cyrus, P. (1986). Recognition of emotion in hemifaces presented to the left and right visual fields. International Journal of Neuroscience, 30(3), 161–164. https://doi.org/10.3109/00207458608985666

White, M. J. (1969). Laterality differences in perception: A review. Psychological Bulletin, 72(6), 387–405. https://doi.org/10.1037/h0028343

Wolff, W. (1933). The experimental study of forms of expression. Journal of Personality, 2(2), 168–176. https://doi.org/10.1111/j.1467-6494.1933.tb02092.x

Appendix A: Measures

Divided visual field word sets

Four-letter words: arms arts arks army band bank back barn coal coat cord cost desk dark deck duck fire file fare fact lock look lack land pant part park pack rent rest rust spot slot stop step tape tone tube tune

Five-letter words: board brand bread brush block black blast blend clock clack check chalk count couch coach coast front floor flour field globe grade grape guide place plane plant plate shade shape sheep shore state start stake slate trait trail trial train

Edinburgh Handedness Inventory

EHI Please indicate your preferences in the use of hands in the following activities by clicking the circle in the appropriate column. Where the preference is so strong that you would never try to use the other hand unless absolutely forced to, choose the "X-Always". If in any case you are really indifferent click the "Either" circle. Some of the activities require both hands. In these cases the part of the task, or object, for which hand preference is wanted is indicated in brackets.

Response scale: Left Always (1), Left (2), Either (3), Right (4), Right Always (5)

Writing (1)
Drawing (2)
Throwing (3)
Scissors (4)
Toothbrush (5)
Knife (without fork) (6)
Spoon (7)
Broom (upper hand) (8)
Striking match (match) (9)
Opening box (lid) (10)
Which foot do you prefer to kick with? (11)
Which eye do you use when using only one? (12)

Snellen Eye Chart

DIRECTIONS FOR USE

For the best accuracy (and to prevent memorization), have someone assist you when testing your vision with this eye chart. If you use eyeglasses or contact lenses for driving or other distance vision tasks, wear them during the test.

1. Place the chart on a wall or easel 10 feet away.

2. Cover one eye with your hand, a large spoon, or some other item that completely blocks the vision of the covered eye. (Do not apply pressure to the covered eye, as it might affect that eye's vision when you test it.)

3. Identify a line on the chart you can comfortably read. Read the letters on that line aloud. Have your assistant stand near the chart and record your accuracy.

4. Continue trying to read the letters on each successively smaller line. Do not squint.

5. Have your assistant stop you when you fail to correctly identify at least 50 percent of the letters on a line.

6. Switch to the other eye and repeat.

Record your visual acuity for each eye by noting the line for which you correctly identified either:
a) More than half the letters on that line, but not all of them.
b) All letters on that line, plus a few letters (less than half) on the next line.

Examples:

If you correctly identify five of the seven letters on the 20/32 line, your visual acuity for that eye is: 20/32–2/7

If you correctly identify all seven letters on the 20/32 line and three of the eight letters on the 20/25 line, your visual acuity in that eye is: 20/32+3/8

Appendix B: Instructions and Debriefing Statements

RESEARCH CONSENT FORM

Study Title: Handedness and Lateralized Functions
Principal Investigator: Frank J. Bernieri
Study team: Rafael Robles, Quinn Downey, Drew Kutcher, Shrida Sharma
Version: 11/19/2018

We are inviting you to take part in a research study.

Purpose: The aim of this study is to validate existing measures of multiple lateralized functions (e.g. handedness and footedness) and investigate their relationship, if any, with interpersonal perception and behavior.

You should not be in this study if you: Are under 18 years old, cannot read English, are colorblind, have less than 20/20 vision and do not have adequate corrective lenses, or are unable to come to the research lab in Reed Lodge, OSU, Corvallis, OR.

Voluntary: Participation in this study is voluntary (i.e., you do not have to be in the study if you do not want to). There are no consequences of any kind (e.g., impact on grades) if you decide you do not want to participate. If you choose to participate, you may withdraw at any time. However, all items must be answered if you would like your results to be included in the study results.

Activities: In this version of our study, you will be asked to verify your visual acuity using a standard eye chart. You will then be tested for eye dominance using a simple behavioral measure. This will be followed by a few activities administered via a computer; for example, you will be shown sets of stimuli such as words and faces and asked questions about them. After a brief break, you will be asked to complete self-report measures of handedness, footedness, and your ability to recognize emotions. Finally, you will be asked a few demographic questions about yourself.

Time: Your participation in this study will last about 90 minutes.

Confidentiality: Confidentiality will be kept to the extent permitted by the technology being used and to the extent permitted by law. However, please be aware that information collected via any digital device can be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. Therefore, we cannot absolutely guarantee with complete certainty the security and confidentiality of any information collected within this investigation.

Payment: You will receive 1.5 hours of research engagement credit for participating.

Study contacts: We would like you to ask us questions if there is anything about the study that you do not understand. If you have any questions about this research project, please contact: Dr. Frank Bernieri at [email protected] or (541) 737-1373.

If you have questions about your rights or welfare as a study participant, you can also contact the Human Research Protection Program. This office can be reached at (541) 737-8008 or by email at [email protected]

Overview Instructions

Instructions

In this study you will complete four different tasks. In each task you will be shown stimuli for a brief period of time and asked to make judgments about them using the keyboard.

Each task will have slightly different instructions so please pay close attention to each new set of instructions.

If you need a short break in between tasks, feel free to take one during the instruction slides.

If you have any questions about the tasks, please let the experimenter know before beginning the task.

Press any key to continue

Free-View Chimeric Face Task Instructions

Chimeric Task 2

In this section you will be shown two pictures of slightly different faces. These faces have been edited and thus may look a bit strange. You are tasked with making a judgment as to whether you believe the top or bottom photo is more emotionally expressive.

If you believe the photo shown on top is more emotionally expressive, press '1'.

If you believe the photo shown on bottom is more emotionally expressive, press '3'.

You will have as long as you want to make this judgment. This section does not require the use of the chin rest, so you may use the computer in any way that is comfortable.

If you have any questions please use this time to ask. Press any key to continue.

Sequential Chimeric Face Task Instructions

Chimeric Task

In this section you will be shown a picture of two slightly different faces. These faces have been edited and thus may look a bit strange. The pictures will be displayed for less than 1 second.

If you believe the first photo shown was more emotionally expressive, press 1.

If you believe the second photo shown was more emotionally expressive, press 3.

Before each picture is shown, a color-changing fixation dot will appear on the screen. Make sure to attend to the fixation dot each time.

In order to check for attention, you may be asked about the color of the dot.

If you have any questions please use this time to ask.

Press any key to continue.

Divided Visual Field Word Recognition Task Instructions

Word Matching

In this section two words will appear at the same time. These words will appear for less than 1 second. Afterwards you will be asked to select the word that matches one you were previously shown.

Use the '1', '2', '3', and '4' keys to make your selection.

Before the words are displayed a color-changing fixation dot will appear on the screen. You may find your eyes wanting to wander, but make sure to attend to the fixation each time.

In order to check for attention, you may be asked about the color of the dot.

If you have any questions please use this time to ask. If you feel you need a short break, use this time to do so. Press any key to continue.

Divided Visual Field Emotion Recognition Task Instructions

Emotion Matching

In this section, two photos of a person expressing different emotions will appear at the same time. These photos will appear for less than 1 second.

Afterwards you will be asked to select the photo (if any) that matches one of the photos you were previously shown.

Use the '1', '2', '3', and '4' keys to make your selections.

Before the photos are displayed, a color-changing fixation dot will appear on the screen. You may find your eyes wanting to wander, but make sure to attend to the fixation each time.

In order to check for attention, you may be asked about the color of the dot.

If you have any questions please use this time to ask. If you feel you need a short break, use this time to do so. Press any key to continue.

Debriefing Statement

Thank you for your participation! The study is now complete.

In this study we are examining the potential implications that functions as simple as handedness can have for interpersonal perception skills. Previous research has suggested that, among right-handed individuals, there is a clear bias toward the left visual field when judging emotions on faces. Previous research also suggests that our faces are not symmetrical when expressing emotions. This could mean that we are missing half of the picture when we judge the emotions of another person.

However, these same patterns are not apparent in every person. Left-handed individuals, for example, may show a bias toward the opposite side of the face! Communication between people with opposing biases could therefore have some interesting implications.

In this study your visual field bias was assessed, as well as your ability to recognize emotions on faces. We hope to use this information to see whether these biases cause any measurable differences in ability.

If you would like to learn more about this study, or have any questions or concerns about anything that happened today, feel free to ask now or to contact Dr. Bernieri in the future at [email protected]

If you do not have any questions for us then that’s all we have for you today.

Thank you again and have a great day!