Laterality: Asymmetries of Body, Brain and Cognition

ISSN: 1357-650X (Print) 1464-0678 (Online) Journal homepage: http://www.tandfonline.com/loi/plat20

Emotional language is all right: Emotional prosody reduces hemispheric asymmetry for linguistic processing

Hazel K. Godfrey & Gina M. Grimshaw

To cite this article: Hazel K. Godfrey & Gina M. Grimshaw (2015): Emotional language is all right: Emotional prosody reduces hemispheric asymmetry for linguistic processing, Laterality: Asymmetries of Body, Brain and Cognition, DOI: 10.1080/1357650X.2015.1096940

To link to this article: http://dx.doi.org/10.1080/1357650X.2015.1096940

Published online: 27 Oct 2015.


Downloaded by: [Victoria University of Wellington]. Date: 10 November 2015, At: 14:03

Emotional language is all right: Emotional prosody reduces hemispheric asymmetry for linguistic processing

Hazel K. Godfrey and Gina M. Grimshaw
School of Psychology, Victoria University of Wellington, Wellington, New Zealand

ABSTRACT
In an influential paper, Bryden and MacRae [(1989). Dichotic laterality effects obtained with emotional words. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 1, 171–176] introduced a dichotic listening task that allowed for the assessment of linguistic and prosodic processing asymmetries using the same stimuli. The task produces a robust right ear advantage (REA) for linguistic processing (identifying a target word), and a left ear advantage for emotional processing (identifying a target prosody). Here, we adapted this paradigm to determine whether and how the presence of emotional prosody might modulate hemispheric asymmetry for linguistic processing. Participants monitored for a target word among dichotic stimuli consisting of two different words, but spoken in the same emotional prosody—neutral, angry, happy, sad, or fearful. A strong REA was observed when the words were spoken in neutral prosody, which was attenuated for all the emotional prosodies. There were no differences in the ear advantage as a function of valence or discrete emotion, indicating that all emotional prosodies had similar effects. Findings are consistent with the hypothesis that the right hemisphere is better able to process speech when it carries emotional prosody. Implications for the understanding of right hemisphere language functions are discussed.

ARTICLE HISTORY Received 6 July 2015; Accepted 16 September 2015

KEYWORDS Hemispheric specialization; prosody; emotion; language; speech perception

CONTACT: Gina M. Grimshaw, [email protected]
© 2015 Taylor & Francis

Language hardly ever lacks emotional tone. Both what you say and how you say it are important aspects of day-to-day communication. These two components of language are lateralized differently in the brain, with the left hemisphere specialized for processing linguistic aspects such as phonology and semantics, and the right hemisphere specialized for processing emotional prosody—the changes in pitch, loudness, and speed that we use to signal our emotional state or intention (see Hellige, 2001, for a review of early studies on language asymmetries). Competent language use requires not only that these two systems each function well, but also that they interact to produce an integrated and unified language experience.

Although the first evidence for the differential lateralization of linguistic and prosodic processing came from patient studies (Borod et al., 1998; Heilman, Scholes, & Watson, 1975; Ross, 1981; Schlanger, Schlanger, & Gerstmann, 1976), Phil Bryden's dichotic listening research has been highly influential in establishing a methodology that demonstrates this dissociation in neurologically intact people. In an early dichotic listening study, Ley and Bryden (1982) presented semantically neutral sentences in different tones of voice to each ear. Participants were more accurate in identifying the meaning of the sentence in the right ear (reflecting left hemisphere specialization), and the tone of voice in the left ear (reflecting right hemisphere specialization). This experiment was important in showing that the same stimulus material could yield different laterality effects depending on the stimulus dimension that was assessed.

Bryden and MacRae (1989) simplified the experimental paradigm to use single words—bower, dower, power, and tower—spoken in angry, happy, sad, or neutral tones of voice. Participants monitored both ears (a divided-attention paradigm) for the presentation of a target word in one block of trials, and for a target emotion in another block of trials. Results were similar to those of Ley and Bryden, in that detection of a target word yielded a clear right ear advantage (REA) and detection of a target emotion yielded a clear left ear advantage (LEA). Again, the results show that hemispheric specialization is driven by the processing demands of the task, and not the stimulus itself. The version developed by Bryden and MacRae (1989) provided a number of methodological improvements.
First, the words differed only in the first phoneme, allowing for good control of stimulus onset, which is important for fusion of the dichotic stimuli (Repp, 1977). Second, the monitoring task (as opposed to stimulus report) minimized the role of memory or output modality in determining performance asymmetry. And finally, the use of a motor response to a single event made it possible to assess hemispheric asymmetry with response time (RT) measures as well as accuracy. These properties of the task made it easy to administer to a broad range of participants, and to modify in ways suitable for addressing a variety of research questions.

The task has since been used in both adults and children to study linguistic and prosodic asymmetries related to personality traits such as schizotypy (Castro & Pearson, 2011; Najt, Bayer, & Hausmann, 2012); to psychopathology including attention deficit hyperactivity disorder (Hale, Zaidel, McGough, Phillips, & McCracken, 2006), learning disabilities (Obrzut, Bryden, Lange, & Bulman-Fleming, 2001), and psychopathy (Hiatt, Lorenz, & Newman, 2002); to levels of prenatal and circulating hormones (Grimshaw, Bryden, & Finegan, 1995; Mead & Hampson, 1996); and to peripheral asymmetries including handedness (Bryden, Free, Gagné, & Groff, 1991), footedness

(Elias, Bryden, & Bulman-Fleming, 1998), and nostril dominance (Saucier, Tessem, Sheerin, & Elias, 2004). It has been modified to assess the effects of different response formats (Stirling, Cavill, & Wilkinson, 2000; Voyer, Bowes, & Soraggi, 2009; Voyer, Russell, & McKenna, 2002) and attentional manipulations (Hale et al., 2006; Leshem, 2013; Leshem, Arzouan, & Armony-Sivan, 2015; Mondor & Bryden, 1992). It has even been adapted to produce versions in Dutch (Patel & Licht, 2000) and Spanish (Enriquez & Bernabeu, 2008). The hemispheric asymmetries have been further validated with functional magnetic resonance imaging (fMRI) (Buchanan et al., 2000) and transcranial Doppler ultrasonography (Vingerhoets, Berckmoes, & Stroobant, 2003). Across all these studies, the basic finding of complementary hemispheric asymmetries for linguistic and prosodic processing is remarkably robust.

One issue that has received considerable attention concerns valence effects in prosodic asymmetries. This debate is related to the broader theoretical discussion about asymmetries in emotional processing more generally. The two prevailing views are the right hemisphere hypothesis (by which all emotions are lateralized to the right hemisphere; Borod et al., 1998) and the valence hypothesis, which posits left hemisphere specialization for positive (typically approach-motivated) emotions and right hemisphere specialization for negative (typically withdrawal-motivated) emotions (Harmon-Jones, Gable, & Peterson, 2010; Heller, 1993). In the original Bryden and MacRae (1989) study, the LEA for the emotional task was stronger for sad and angry prosodies than for happy prosody, with neutral prosody showing no ear advantage. However, in subsequent studies there is no clear pattern of results.
Some report the strongest LEAs for negative valences (e.g., Grimshaw, Kwasny, Covell, & Johnson, 2003), but others have found stronger LEAs for happy prosody (e.g., Obrzut et al., 2001), and still others have not found any significant differences in the LEA as a function of valence (e.g., Elias et al., 1998). This heterogeneity of effects suggests that there are not systematic differences in asymmetry associated with valence, but there may be idiosyncratic (or perhaps spurious) effects of valence that arise in individual studies. The lack of systematic valence effects is consistent with an emerging consensus that the right hemisphere is specialized for the perception of all emotional prosodies, but valence effects can emerge when tasks induce emotional experience or entail considerable top-down control over emotional processing (Rodway & Schepman, 2007; Schepman, Rodway, & Geddes, 2012).

So far, all the studies described have used Bryden's methodology to study left hemisphere linguistic and right hemisphere prosodic processes in isolation. However, the task (and its variants) can also reveal ways in which linguistic and prosodic processes interact. For example, Bulman-Fleming and Bryden (1994) modified the task to have participants listen for a conjoint target—a target word spoken in a target tone of voice. When both dimensions of the stimulus became task relevant, participants showed a pattern of errors

that reflected complementary hemispheric differences. Most errors (and more than would be expected by chance) were illusory conjunctions, in which the target word was present in one ear, and the target tone of voice was present in the other ear. These illusory conjunctions were more common when the target word was in the right ear and the target tone of voice in the left ear than in the reverse arrangement, suggesting that interhemispheric integration is optimal when each hemisphere contributes according to its strength.

In another modification, the stimulus words were changed to semantically emotional words (e.g., mad, sad, glad, and fad) to produce Stroop-like stimuli in which the emotional value of the word and the tone of voice could be either consistent or inconsistent (Grimshaw, 1998; Techentin & Voyer, 2007; Techentin, Voyer, & Klein, 2009). These studies used interference between stimulus dimensions as an indicator of hemispheric interaction.

In our own lab, we have examined the interaction between stimulus dimensions by determining how asymmetry on the linguistic task can be modulated by the task-irrelevant emotional prosody of the stimuli. Our first experiment (Grimshaw et al., 2003) was inspired by an off-hand observation from Phil himself (personal communication), who noted that the ear advantages for the linguistic task were not as large as those often seen with traditional non-emotional stimuli. He was right. We modified the presentation format during the linguistic monitoring task so that two different words were presented to each ear, but they were presented in the same prosody, which we manipulated between blocks. The prosody itself was task irrelevant. The REA was attenuated when words were presented in sad prosody relative to neutral prosody, suggesting that the presence of emotional prosody facilitated right hemisphere speech processing.
The effect of emotional prosody on linguistic processing asymmetries highlights the fact that linguistic processing does not occur in isolation; factors such as the presence of emotional prosody can shift the relative contributions of each hemisphere. The facilitation of right hemisphere contributions to linguistic processing is consistent with findings that hemispheric asymmetry for language processing is relative, and not absolute (e.g., Zaidel, 1976).

In a follow-up study, we replicated the attenuation of the linguistic REA by sad prosody but showed that it was not modulated by either happy or angry prosody (Grimshaw, Séguin, & Godfrey, 2009). This emotion-specific effect was unexpected, given support for the right hemisphere hypothesis in emotional perception. We were left with an unresolved question: what is it about sad prosody (specifically) that facilitates right hemisphere linguistic processing? If we could identify the characteristics of sad prosody that contribute to the attenuation of the REA, we might better identify the mechanisms that facilitate right hemisphere linguistic processing. We previously suggested two possible mechanisms (Grimshaw et al., 2009). First, sadness is a withdrawal-motivated state, whereas happiness and anger are approach-motivated

states (Davidson, 1993; Harmon-Jones et al., 2010). The motivational-direction hypothesis attributes experience of approach-related emotions to the left hemisphere and withdrawal-related emotions to the right hemisphere (see Davidson, 1992, 1993; Harmon-Jones et al., 2010, for reviews). Perhaps sad prosody facilitated right hemisphere linguistic processing via activation of a sad, withdrawal-motivated state.

Alternatively, the effects of sad prosody might have been driven by low-level acoustic properties of speech. The asymmetric sampling in time theory (Boemio, Fromm, Braun, & Poeppel, 2005; Poeppel, 2003) posits left hemisphere sampling of speech signals over short windows (about 20–40 ms) and right hemisphere sampling over longer windows (about 150–250 ms). Sad prosody entails a slower speech rate than angry or happy prosody (see Scherer, 1986, 2003, for reviews of acoustic properties of emotional speech). In Grimshaw et al. (2009), the sad prosodic stimuli had initial plosives that were on average 94 ms in duration, compared to neutral prosody (50 ms), angry prosody (63 ms), and happy prosody (57 ms). The slower tempo of the sad prosodic stimuli is more in line with the right hemisphere sampling rate than the faster left hemisphere rate, which could result in an attenuated REA for a linguistic task with sad prosody.

The current study
To further our understanding of how emotional prosody modulates asymmetries for linguistic processing, participants completed a dichotic listening task in which they monitored for target words (bower, dower, power, and tower) spoken in five prosodies (neutral, angry, happy, sad, and fearful). As in our previous experiments, both words were spoken in the same prosody for a complete block of trials, and the prosody itself was irrelevant to the task. We addressed two questions with this experiment. Is the REA for words spoken in emotional prosodies attenuated compared to neutral prosody (that is, can we replicate our previous effects)? And if so, are there differences in attenuation of the REA across prosodies? The use of fearful prosody is a new addition to this paradigm; we included it because fearful speech is negative and withdrawal related (like sadness), but it is also fast (like happiness and anger; Scherer, 1986, 2003). If differential effects are again observed for different prosodies, then the addition of fearful prosody may allow us to distinguish between motivational and psychoacoustic explanations for right hemisphere facilitation. The emotional prosodies cover negative and positive emotions, and approach- and withdrawal-related emotions, and have different acoustic profiles. Furthermore, in contrast to our previous studies, we used a fully within-subjects design, with a large sample, in which all participants completed the linguistic task with all emotional prosodies, providing increased statistical power. Thus, our design

should allow us to settle the debate on valence-specific shifts in linguistic processing for words spoken in emotional prosody.

Method

Participants
The participants were 582 first-year psychology students who completed the experiment as part of a laboratory exercise. The study was approved by the School of Psychology Ethics Committee at Victoria University of Wellington, New Zealand.

Stimuli and apparatus
The stimuli were the words bower, dower, power, and tower spoken in neutral, angry, happy, sad, and fearful prosodies. The neutral, angry, happy, and sad stimuli were the same as those recorded by Grimshaw et al. (2009); the additional fearful stimuli used here were spoken by the same New Zealand male and recorded under the same conditions. Acoustic properties for the fearful stimuli were measured in the same manner as in Grimshaw et al., using PRAAT (Boersma & Weenink, 2007) to extract the fundamental frequency (F0), and two raters estimated the duration of the initial plosive using a combination of auditory playback and visual inspection of the sound wave. The only difference was that a 75–500 Hz window was used for extraction of F0 with the autocorrelation method (75–350 Hz was used for the other prosodies), in order to account for the broader spectral profile of fearful prosody. See Table 1 for acoustic properties of all stimuli. The stimuli were presented on Dell PCs via Psychology Software Tools' E-Prime software (version 1.0; Schneider, Eschman, & Zuccolotto, 2002) through stereo earphones.

Table 1. Acoustic properties of the stimuli by emotional prosody.

Prosody    Mean F0 (Hz)    Median F0 (Hz)    SD F0 (Hz)    Plosive duration (ms)
Neutral    104             103               3             50
Angry      184             195               45            63
Happy      195             179               75            57
Sad        123             113               25            94
Fearful    347             359               47            45

Note: F0, fundamental frequency.

Procedure
Participants first answered basic demographic questions including their age, sex, native language, and hearing status. They also indicated which hand they used to write, and whether they considered themselves to be left- or right-handed. Participants were considered to be right-handed if they answered "right" to both handedness questions.

The experiment was run in groups of up to 20 people. Participants sat in individual testing carrels. Dichotic listening studies are traditionally conducted with tight experimental control in quiet environments, but recent studies suggest that effects are more robust than once thought, and hemispheric asymmetry has now been studied successfully even on smartphone devices in uncontrolled environments (Bless et al., 2013). Although we expected our data might be noisier than is typical (given the testing environment), the large sample and within-subjects design should compensate for the increased noise.

Participants were instructed to put their headphones on, ensuring that the left earphone was on their left ear and the right earphone on their right ear. Before starting the experiment, they played a sentence (batman is a hero) and set the volume themselves at a comfortable level. On each experimental trial, they were presented with two stimuli at the same time, one in the left ear and one in the right ear. The trials were divided into 5 prosody blocks of 48 trials each, for a total of 240 trials. Each block was made up of four sub-blocks. At the beginning of each sub-block participants were given a different target (bower, dower, power, or tower—target order was fixed) to monitor for (across both ears), pressing 1 on the number pad with their right index finger if the target was present, and 2 with their right middle finger if it was absent. In each sub-block, the target was present on six trials (three left ear and three right ear) and absent on six trials. Across sub-blocks, the target was present 24 times in each ear for each prosody. Prosody block order (neutral, angry, happy, sad, and fearful) was counterbalanced across participants. Participants were instructed to respond as quickly as possible, without making mistakes.
They had up to 4,000 ms to respond from the onset of the dichotic stimulus. Before the main experimental trials, participants first listened to all 20 stimuli (four words × five prosodies) presented binaurally. They then completed 24 dichotic practice trials with neutral prosody (12 with the target bower and 12 with the target tower) to familiarize themselves with the task.

Design
This experiment was a 5 (prosody: neutral, angry, happy, sad, and fearful) × 2 (ear: left and right) within-subjects design. Hit rates, false alarm rates, and mean response times for hits (correctly identifying the presence of a target) were calculated for each prosody × ear condition. Hit rates and false alarm rates (assuming that false alarms were equally distributed across ears) were used to calculate d′, a signal-detection measure of sensitivity, for each prosody × ear condition. The dependent variables were d′ (sensitivity to discriminate between target present and target absent) for targets in the left and right ears, and mean RT for responding "present" correctly to targets in the left and right ears. Only trials in which participants responded between 200 and 2,000 ms were included in the analyses, to remove anticipatory and overly long response times.
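The d′ calculation described above can be illustrated with a short sketch (this is our own minimal example, not the authors' analysis code; the counts and function names are hypothetical). Hit and false-alarm rates are converted to z-scores via the inverse normal CDF, with false alarms assumed to be equally distributed across ears:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A 0.5 correction keeps rates of 0 or 1 away from the limits of z()."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one prosody x ear condition (24 target-present
# trials per ear; false alarms split equally across the two ears):
left_dprime = dprime(hits=17, misses=7, false_alarms=3, correct_rejections=21)
right_dprime = dprime(hits=22, misses=2, false_alarms=3, correct_rejections=21)
rea = right_dprime - left_dprime  # ear advantage in d' units (positive = REA)
```

A positive `rea` indicates better right-ear (left hemisphere) target detection, as in the REA columns of Table 2.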

Results
We removed data from those participants who would normally be excluded from dichotic listening studies of language asymmetries. Participants were excluded if they were not right-handed,¹ n = 111; if English was not their first language, n = 39; or if they had hearing problems, n = 4. Further, a dominant ear was identified for each participant, based on neutral prosody performance (d′) in the left ear and the right ear. Participants were excluded if they had d′ scores of less than or equal to 0.8 in their dominant ear for all prosodies, n = 31. Participants who had five or fewer trials remaining in their dominant ear for any prosody were excluded, n = 45. One participant's data did not record correctly and so were also excluded from the analyses. These exclusions left a sample of 351 participants (97 men and 254 women; mean age = 19.30 years, SD = 3.85 years, minimum age = 17, maximum age = 53). When the assumption of sphericity was violated, the Greenhouse–Geisser correction was used. For significant paired-samples comparisons, Cohen's d was calculated as the mean difference divided by the standard deviation.
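The paired-samples effect size just described can be sketched as follows. This is an illustrative implementation with invented data, and it assumes the standard deviation in question is that of the difference scores:

```python
from statistics import mean, stdev

def cohens_d_paired(x, y):
    """Effect size for a paired comparison: mean of the difference
    scores divided by the standard deviation of the difference scores."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / stdev(diffs)

# Hypothetical per-participant REA scores (d' units) for two prosody blocks:
neutral_rea = [0.9, 1.1, 0.8, 1.2, 1.0]
angry_rea = [0.7, 0.6, 0.5, 0.9, 0.8]
effect = cohens_d_paired(neutral_rea, angry_rea)  # positive: smaller REA under angry prosody
```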

Accuracy
d′ performance was examined in a 2 (ear: left and right) × 5 (prosody: neutral, angry, happy, sad, and fearful) repeated-measures ANOVA. There were main effects of ear, F(1, 350) = 328.621, p < .001, ηp² = .484, and prosody, F(4, 1,400) = 46.358, p < .001, ηp² = .117, which were qualified by an ear × prosody interaction, F(4, 1,400) = 21.468, p < .001, ηp² = .058. The hit rate data were also analysed and the results mirrored those of d′ (see Table 2).

To follow up the significant ear × prosody interaction, paired-samples t-tests with a Bonferroni-corrected alpha level of .0125 were first run to compare ear advantage scores for each of the emotional prosodies to the ear advantage score for neutral prosody. There were REAs for all prosodies; however, the REAs for target detection of stimuli spoken in all emotional prosodies were attenuated (indicating an increase in RH linguistic processing) compared to the REA for target detection of stimuli spoken in neutral prosody;

¹Note: we ran the analyses with left-handers who met the other inclusion criteria (n = 45) included, and the same main effects and interactions were observed in the d′, hit rate, and RT data.

Table 2. Performance by emotional prosody and ear. Values are Mean (SD).

                 Hit rate                                d′                                    Mean RT (ms)
Prosody   Left ear      Right ear     REA          Left ear    Right ear   REA        Left ear   Right ear  REA
Neutral   6.23 (2.69)   9.74 (1.92)   3.51 (3.77)  1.37 (.67)  2.29 (.67)  .92 (.99)  862 (194)  798 (145)  64 (167)
Angry     7.40 (2.68)   9.73 (1.71)   2.33 (3.40)  1.64 (.75)  2.25 (.62)  .62 (.91)  871 (183)  839 (164)  32 (172)
Happy     7.74 (2.48)   9.76 (1.75)   2.02 (3.31)  1.75 (.69)  2.30 (.62)  .55 (.90)  902 (195)  879 (170)  22 (163)
Sad       7.39 (2.28)   9.50 (1.72)   2.11 (3.01)  1.74 (.65)  2.29 (.62)  .55 (.80)  921 (186)  891 (153)  29 (169)
Fearful   7.49 (2.37)   9.40 (1.73)   1.91 (3.19)  1.41 (.64)  1.90 (.64)  .49 (.86)  874 (176)  848 (153)  26 (150)

angry–neutral, t(350) = −5.761, p < .001, d = .31; happy–neutral, t(350) = −6.531, p < .001, d = .35; sad–neutral, t(350) = −6.820, p < .001, d = .37; fear–neutral, t(350) = −8.269, p < .001, d = .44. Second, paired-samples t-tests with a Bonferroni-corrected alpha level of .0083 were run to compare ear advantage scores for each of the emotional prosodies to each other. There were no significant differences between any of the prosodies. See Figure 1 and Table 2 for ear advantage scores.
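The logic of these follow-up comparisons can be sketched with the standard library (a simplified illustration with invented data; it computes only the paired t statistic, and the Bonferroni step is just the division of alpha by the four prosody-versus-neutral comparisons — evaluating the p-value would require a t-distribution function from a stats package):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Four emotional prosodies are each compared against neutral, so the
# per-comparison alpha is Bonferroni-corrected: .05 / 4 = .0125.
alpha = 0.05 / 4

# Hypothetical per-participant REA scores (d' units):
neutral = [1.1, 0.9, 1.2, 0.8, 1.0, 1.3]
angry = [0.7, 0.6, 0.9, 0.5, 0.8, 0.9]
t_stat = paired_t(angry, neutral)  # negative: the REA is smaller under angry prosody
```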

Figure 1. REAs, measured as the difference between the right ear and the left ear in d′ scores, for the linguistic task for the neutral and emotional prosodic blocks.

Response time
Mean RT performance was examined in a 2 (ear: left and right) × 5 (prosody: neutral, angry, happy, sad, and fearful) repeated-measures ANOVA. There were main effects of ear, F(1, 350) = 44.146, p < .001, ηp² = .112, and prosody, F(4, 1,400) = 31.708, p < .001, ηp² = .083, which were qualified by an ear × prosody interaction, F(3.885, 1,359.901) = 4.417, p = .002, ηp² = .012. We followed up the significant ear × prosody interaction with the same set of post hoc t-tests used in the d′ analysis. There were REAs for all prosodies; however, the REAs for target detection of stimuli spoken in an emotional prosody were attenuated (indicating an increase in RH processing) compared to the REA for target detection of stimuli spoken in neutral prosody: angry–neutral, t(350) = 2.84, p = .005, d = .15; happy–neutral, t(350) = 3.569, p < .001, d = .19; sad–neutral, t(350) = 2.875, p = .004, d = .15; fear–neutral, t(350) = 3.434, p = .001, d = .18. As in the d′ analysis, there were no significant differences between any of the prosodies. See Figure 2 and Table 2 for ear advantage scores.

Discussion
Using a dichotic listening task inspired by Bryden and MacRae (1989), we addressed two outstanding issues concerning the effects of task-irrelevant emotional prosody on hemispheric asymmetry for linguistic processing. First, clear attenuation of the REA was observed when the words were spoken in emotional prosody compared to neutral prosody. This general emotional prosodic modulation was observed in a powerful study with a large sample and a within-subjects design. Ours is the first study using the dichotic listening paradigm to robustly demonstrate that emotional prosody in general (regardless of valence or specific emotion) alters hemispheric asymmetry for linguistic processing. The d′ and RT data clearly show an attenuation of the REA when the words were spoken in emotional prosody compared to when they were spoken in neutral prosody. Identifying targets that differ only in the initial plosive is a very linguistic (phonological) task. Such tasks typically show a strong REA (Bryden & MacRae, 1989). The attenuation of the REA when the words were spoken with emotional prosody suggests that the presence of prosody activates the right hemisphere, allowing it to have input into the linguistic decision, as in our previous studies (Grimshaw et al., 2003, 2009).

Figure 2. REAs, measured as the difference between the right ear and the left ear in mean RT (ms), for the linguistic task for the neutral and emotional prosodic blocks.

Second, there was no indication of right hemisphere linguistic facilitation being limited to any particular emotional prosody; angry, happy, sad, and fearful prosodies similarly attenuated the REA for the linguistic task. Our hypotheses concerning emotion-specific differences are therefore moot. Of course, we are arguing here from a null effect (the lack of difference across prosodies). However, in the context of our powerful experimental design and the large difference between emotional and neutral prosodies (all Cohen's d in the range .31–.44 in the d′ analyses), the absence of an effect as a function of specific prosody is compelling.

Why do we not observe any emotion-specific differences when other studies have (e.g., Bryden & MacRae, 1989; Elias et al., 1998; Grimshaw et al., 2009)? The power of our large within-subjects design, compared to the smaller between-subjects designs used previously, is likely the main factor. The experiment in Grimshaw et al. (2009) was perhaps underpowered to detect a three-way interaction, and the difference across prosodies could be spurious. Unsurprisingly, given resource constraints, most psychological research is underpowered, and laterality research is no exception (see Button et al., 2013, for a discussion of the low power of samples used in the neuroscience literature). Worryingly for the state of the literature, there is a chance that any effects observed in small samples will not be representative of processing; they will be spurious effects (Button et al., 2013).
Given publication biases, spurious effects are more likely to be published than spurious non-effects. We are increasingly seeing large sample sizes in laterality research, a trend that Phil would have applauded. The effects observed in modulation of hemispheric asymmetry are generally not large effects; our large within-subjects sample gave us greater power to overcome spurious effects.

Alternatively, the existence (or not) of valence- or emotion-specific effects may depend on task demands or strategic processing. Valence-specific accounts, such as the motivational-direction hypothesis (Davidson, 1992, 1993), require that areas related to emotional experience be activated. Related to this point, Rodway and Schepman (2007) and Schepman et al. (2012) have suggested that valence-specific effects will only be observed when emotional experience or top-down processing of the emotion is required by the task. In the present experiment, the emotional prosody of the stimuli was task irrelevant and was therefore unlikely to activate frontal areas related to top-down control. Support for this hypothesis comes from a series of fMRI studies (Wildgruber, Ackermann, Kreifelts, & Ethofer, 2006)

that show that the presence of emotional prosody is largely associated with activation in right posterior auditory cortex, and that frontal areas only come online with tasks that require explicit identification of the prosody.

In future research, it might be possible to test this hypothesis by using a focused-attention manipulation (Leshem, 2013; Leshem et al., 2015). The Bryden and MacRae (1989) design, which we followed here, uses a divided-attention paradigm in which participants monitor for a target word across both ears. Asymmetries observed under divided attention can be driven by both structural (bottom-up) factors related to stimulus processing, and attentional (top-down) factors related to the deployment of spatial attention (Hugdahl, 2000; Mondor & Bryden, 1991). A focused-attention manipulation would minimize attentional influences and so would provide a clearer indication of the extent to which the effects of emotional prosody on linguistic asymmetries are driven by stimulus factors.

Importantly, the lack of emotion-specific effects in the present study argues against valence or motivational direction, speech rate of the prosodies, pitch of the prosodies, target word, or idiosyncratic dichotic pairings of stimuli as being important factors in the modulation of the linguistic REA. Attenuation of the REA was observed across all prosodies (which varied in valence, motivational direction, duration of the initial phoneme, and fundamental frequency measures), as well as across trials (which varied in terms of the dichotic pairing and linguistic target). Furthermore, the conclusion that all prosodies are lateralized to the right hemisphere (i.e., as predicted by the right hemisphere hypothesis) is consistent with an fMRI study using Bryden and MacRae's (1989) original stimuli (Buchanan et al., 2000) and with other fMRI studies of emotional prosody (e.g., Mitchell, Elliott, Barry, Cruttenden, & Woodruff, 2003; Wildgruber et al., 2006).
Thus, the results of the present study fit with the previous literature and suggest that emotional prosody activates right hemisphere auditory processing areas, which appears to facilitate right hemisphere linguistic processing. A next step in this line of research is to investigate the consequences of the increased contribution of the right hemisphere language system when speech is spoken in an emotional tone of voice. Emotional speech, it is important to remember, is the mode of most day-to-day language. The right hemisphere is known to activate a more diffuse set of meanings than the left hemisphere (see Beeman & Chiarello, 1998; Jung-Beeman, 2005; Lindell, 2006, for reviews), which is important in metaphor comprehension (e.g., Mashal, Faust, Hendler, & Jung-Beeman, 2007), ambiguous word processing (e.g., Atchley, Grimshaw, Schuster, & Gibson, 2011), decoding humour (e.g., Coulson & Williams, 2005), and discourse processing (e.g., Beeman, 1998). One prediction that derives from these observations is that emotional speech may differ qualitatively from neutral speech on a number of semantic and pragmatic dimensions. For example, we might expect to see more diffuse patterns of semantic priming or greater activation of subordinate meanings when the speech signal itself is emotionally laden.

Another productive line of research might be to examine interactions between emotional prosody and linguistic asymmetry in those with emotional disorders such as depression. In a review of dichotic listening studies in depression, Gadea, Espert, Salvador, and Martí-Bonmatí (2011) suggest lesser recruitment of the right hemisphere by emotional prosody in depressed people. If so, then their language asymmetries may be less sensitive to prosodic effects, which may in turn have consequences for other aspects of language processing.

Summary

Over 25 years ago, Phil Bryden gave us a methodologically strong paradigm to address a fascinating question in laterality research—how the left and right hemispheres differ qualitatively and quantitatively in their processing of the speech signal, and how the hemispheres interact to create emotional language. Even today, his foundational research is driving new research directions that show no signs of slowing. Building on work started by Bryden and MacRae (1989), in this paper we show that emotional prosody attenuates the REA for linguistic processing. Future research should consider the impact of the increased contribution of the right hemisphere to linguistic processing by bringing together the literature on hemispheric asymmetry for emotion and the literature on the right hemisphere’s role in language more broadly.

Acknowledgements

Thank you to Tash Buist for assistance in coordinating data collection, to Matthew Duignan for his vocal work, and to Adele Quigley-McBride for her assistance in analysis of the sound files.

Disclosure statement

No potential conflict of interest was reported by the authors.

ORCID

Gina M. Grimshaw http://orcid.org/0000-0002-1291-1368

References

Atchley, R. A., Grimshaw, G. G., Schuster, J., & Gibson, L. (2011). Examining lateralized lexical ambiguity processing using dichotic and cross-modal tasks. Neuropsychologia, 49, 1044–1051. doi:10.1016/j.neuropsychologia.2011.01.010

Beeman, M. (1998). Coarse semantic coding and discourse comprehension. In M. Beeman & C. Chiarello (Eds.), Right hemisphere language comprehension: Perspectives from cognitive neuroscience (pp. 255–284). Mahwah, NJ: Erlbaum.
Beeman, M. J., & Chiarello, C. (1998). Complementary right- and left-hemisphere language comprehension. Current Directions in Psychological Science, 7(1), 2–8.
Bless, J. J., Westerhausen, R., Arciuli, J., Kompus, K., Gudmundsen, M., & Hugdahl, K. (2013). “Right on all occasions?”—On the feasibility of laterality research using a smartphone dichotic listening application. Frontiers in Psychology, 4, Article 42. doi:10.3389/fpsyg.2013.00042
Boemio, A., Fromm, S., Braun, A., & Poeppel, D. (2005). Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nature Neuroscience, 8(3), 389–395. doi:10.1038/nn1409
Boersma, P., & Weenink, D. (2007). PRAAT: Doing phonetics by computer (Version 4.6.06).
Borod, J. C., Cicero, B. A., Obler, L. K., Welkowitz, J., Erhan, H. M., Santschi, C., … Whalen, J. R. (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12(3), 446–458.
Bryden, M. P., Free, T., Gagné, S., & Groff, P. (1991). Handedness effects in the detection of dichotically-presented words and emotions. Cortex, 27(2), 229–235.
Bryden, M. P., & MacRae, L. (1989). Dichotic laterality effects obtained with emotional words. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 1, 171–176.
Buchanan, T. W., Lutz, K., Mirzazade, S., Specht, K., Shah, N. J., Zilles, K., & Jäncke, L. (2000). Recognition of emotional prosody and verbal components of spoken language: An fMRI study. Cognitive Brain Research, 9(3), 227–238. doi:10.1016/S0926-6410(99)00060-9
Bulman-Fleming, M. B., & Bryden, M. P. (1994). Simultaneous verbal and affective laterality effects. Neuropsychologia, 32(7), 787–797. doi:10.1016/0028-3932(94)90017-5
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. doi:10.1038/nrn3475
Castro, A., & Pearson, R. (2011). Lateralisation of language and emotion in schizotypal personality: Evidence from dichotic listening. Personality and Individual Differences, 51(6), 726–731. doi:10.1016/j.paid.2011.06.017
Coulson, S., & Williams, R. F. (2005). Hemispheric asymmetries and joke comprehension. Neuropsychologia, 43(1), 128–141. doi:10.1016/j.neuropsychologia.2004.03.015
Davidson, R. J. (1992). Emotion and affective style: Hemispheric substrates. Psychological Science, 3(1), 39–43.
Davidson, R. J. (1993). Parsing affective space: Perspectives from neuropsychology and psychophysiology. Neuropsychology, 7(4), 464–475.
Elias, L. J., Bryden, M. P., & Bulman-Fleming, M. B. (1998). Footedness is a better predictor than is handedness of emotional lateralization. Neuropsychologia, 36(1), 37–43. doi:10.1016/S0028-3932(97)00107-3
Enriquez, P., & Bernabeu, E. (2008). Hemispheric laterality and dissociative tendencies: Differences in emotional processing in a dichotic listening task. Consciousness and Cognition, 17(1), 267–275. doi:10.1016/j.concog.2007.06.001
Gadea, M., Espert, R., Salvador, A., & Martí-Bonmatí, L. (2011). The sad, the angry, and the asymmetrical brain: Dichotic listening studies of negative emotion and depression. Brain and Cognition, 76(2), 294–299. doi:10.1016/j.bandc.2011.03.003

Grimshaw, G. M. (1998). Integration and interference in the cerebral hemispheres: Relations with hemispheric specialization. Brain and Cognition, 36(2), 108–127. doi:10.1006/brcg.1997.0949
Grimshaw, G. M., Bryden, M. P., & Finegan, J. A. K. (1995). Relations between prenatal testosterone and cerebral lateralization in children. Neuropsychology, 9(1), 68–79. doi:10.1037/0894-4105.9.1.68
Grimshaw, G. M., Kwasny, K. M., Covell, E., & Johnson, R. A. (2003). The dynamic nature of language lateralization: Effects of lexical and prosodic factors. Neuropsychologia, 41(8), 1008–1019. doi:10.1016/S0028-3932(02)00315-9
Grimshaw, G. M., Séguin, J. A., & Godfrey, H. K. (2009). Once more with feeling: The effects of emotional prosody on hemispheric specialisation for linguistic processing. Journal of Neurolinguistics, 22(4), 313–326. doi:10.1016/j.jneuroling.2008.10.005
Hale, T. S., Zaidel, E., McGough, J. J., Phillips, J. M., & McCracken, J. T. (2006). Atypical brain laterality in adults with ADHD during dichotic listening for emotional intonation and words. Neuropsychologia, 44(6), 896–904. doi:10.1016/j.neuropsychologia.2005.08.014
Harmon-Jones, E., Gable, P. A., & Peterson, C. K. (2010). The role of asymmetric frontal cortical activity in emotion-related phenomena: A review and update. Biological Psychology, 84(3), 451–462. doi:10.1016/j.biopsycho.2009.08.010
Heilman, K. M., Scholes, R., & Watson, R. T. (1975). Auditory affective agnosia. Disturbed comprehension of affective speech. Journal of Neurology, Neurosurgery & Psychiatry, 38(1), 69–72.
Hellige, J. B. (2001). Hemispheric asymmetry: What’s right and what’s left. Boston: Harvard University Press.
Heller, W. (1993). Neuropsychological mechanisms of individual differences in emotion, personality, and arousal. Neuropsychology, 7(4), 476–489.
Hiatt, K. D., Lorenz, A. R., & Newman, J. P. (2002). Assessment of emotion and language processing in psychopathic offenders: Results from a dichotic listening task. Personality and Individual Differences, 32(7), 1255–1268. doi:10.1016/S0191-8869(01)00116-7
Hugdahl, K. (2000). Lateralization of cognitive processes in the brain. Acta Psychologica, 105(2–3), 211–235. doi:10.1016/S0001-6918(00)00062-7
Jung-Beeman, M. (2005). Bilateral brain processes for comprehending natural language. Trends in Cognitive Sciences, 9(11), 512–518. doi:10.1016/j.tics.2005.09.009
Leshem, R. (2013). The effects of attention on ear advantages in dichotic listening to words and affects. Journal of Cognitive Psychology, 25(8), 932–940. doi:10.1080/20445911.2013.834905
Leshem, R., Arzouan, Y., & Armony-Sivan, R. (2015). The effects of sad prosody on hemispheric specialization for words processing. Brain and Cognition, 96, 28–37. doi:10.1016/j.bandc.2015.03.002
Ley, R. G., & Bryden, M. P. (1982). A dissociation of right and left hemispheric effects for recognizing emotional tone and verbal content. Brain & Cognition, 1(1), 3–9.
Lindell, A. K. (2006). In your right mind: Right hemisphere contributions to language processing and production. Neuropsychology Review, 16(3), 131–148. doi:10.1007/s11065-006-9011-9
Mashal, N., Faust, M., Hendler, T., & Jung-Beeman, M. (2007). An fMRI investigation of neural correlates underlying the processing of novel metaphoric expressions. Brain and Language, 100(2), 115–126. doi:10.1016/j.bandl.2005.10.005

Mead, L. A., & Hampson, E. (1996). Asymmetric effects of ovarian hormones on hemispheric activity: Evidence from dichotic and tachistoscopic tests. Neuropsychology, 10(4), 578–587.
Mitchell, R. L. C., Elliott, R., Barry, M., Cruttenden, A., & Woodruff, P. W. R. (2003). The neural response to emotional prosody, as revealed by functional magnetic resonance imaging. Neuropsychologia, 41(10), 1410–1421. doi:10.1016/S0028-3932(03)00017-4
Mondor, T. A., & Bryden, M. P. (1991). The influence of attention on the dichotic REA. Neuropsychologia, 29(14), 1179–1190. doi:10.1016/0028-3932(91)90032-4
Mondor, T. A., & Bryden, M. P. (1992). On the relation between auditory spatial attention and auditory perceptual asymmetries. Perception & Psychophysics, 52(4), 393–402.
Najt, P., Bayer, U., & Hausmann, M. (2012). Atypical lateralisation in emotional prosody in men with schizotypy. Laterality: Asymmetries of Body, Brain and Cognition, 17(5), 533–548. doi:10.1080/1357650X.2011.586702
Obrzut, J. E., Bryden, M. P., Lange, P., & Bulman-Fleming, M. B. (2001). Concurrent verbal and emotion laterality effects exhibited by normally achieving and learning disabled children. Child Neuropsychology: A Journal on Normal and Abnormal Development in Childhood and Adolescence, 7, 153–161. doi:10.1076/chin.7.3.153.8743
Patel, T. K., & Licht, R. (2000). Verbal and affective laterality effects in P-dyslexic, L-dyslexic and normal children. Child Neuropsychology: A Journal on Normal and Abnormal Development in Childhood and Adolescence, 6, 157–174. doi:10.1076/chin.6.3.157.3158
Poeppel, D. (2003). The analysis of speech in different temporal integration windows: Cerebral lateralisation as ‘asymmetric sampling in time’. Speech Communication, 41(1), 245–255. doi:10.1016/S0167-6393(02)00107-3
Repp, B. H. (1977). Measuring laterality effects in dichotic listening. The Journal of the Acoustical Society of America, 62(3), 720–737.
Rodway, P., & Schepman, A. (2007). Valence specific laterality effects in prosody: Expectancy account and the effects of morphed prosody and stimulus lead. Brain and Cognition, 63(1), 31–41. doi:10.1016/j.bandc.2006.07.008
Ross, E. D. (1981). The aprosodias. Functional-anatomic organization of the affective components of language in the right hemisphere. Archives of Neurology, 38(9), 561–569.
Saucier, D. M., Tessem, F. K., Sheerin, A. H., & Elias, L. (2004). Unilateral forced nostril breathing affects dichotic listening for emotional tones. Brain and Cognition, 55(2), 403–405. doi:10.1016/j.bandc.2004.02.059
Schepman, A., Rodway, P., & Geddes, P. (2012). Valence-specific laterality effects in vocal emotion: Interactions with stimulus type, blocking and sex. Brain and Cognition, 79(2), 129–137. doi:10.1016/j.bandc.2012.03.001
Scherer, K. R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99(2), 143–165.
Scherer, K. R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40(1–2), 227–256. doi:10.1016/S0167-6393(02)00084-5
Schlanger, B. B., Schlanger, P., & Gerstmann, L. J. (1976). The perception of emotionally toned sentences by right hemisphere-damaged and aphasic subjects. Brain and Language, 3(3), 396–403.
Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime suite (Version 1.0). Pittsburgh: Psychology Software Tools.

Stirling, J., Cavill, J., & Wilkinson, A. (2000). Dichotically presented emotionally intoned words produce laterality differences as a function of localisation task. Laterality, 5, 363–371. doi:10.1080/713754388
Techentin, C., & Voyer, D. (2007). Congruency, attentional set, and laterality effects with emotional words. Neuropsychology, 21(5), 646–655. doi:10.1037/0894-4105.21.5.646
Techentin, C., Voyer, D., & Klein, R. M. (2009). Between- and within-ear congruency and laterality effects in an auditory semantic/emotional prosody conflict task. Brain and Cognition, 70, 201–208. doi:10.1016/j.bandc.2009.02.003
Vingerhoets, G., Berckmoes, C., & Stroobant, N. (2003). Cerebral hemodynamics during discrimination of prosodic and semantic emotion in speech studied by transcranial Doppler ultrasonography. Neuropsychology, 17(1), 93–99. doi:10.1037/0894-4105.17.1.93
Voyer, D., Bowes, A., & Soraggi, M. (2009). Response procedure and laterality effects in emotion recognition: Implications for models of dichotic listening. Neuropsychologia, 47(1), 23–29. doi:10.1016/j.neuropsychologia.2008.08.022
Voyer, D., Russell, A., & McKenna, J. (2002). On the reliability of laterality effects in a dichotic emotion recognition task. Journal of Clinical and Experimental Neuropsychology, 24(5), 605–614. doi:10.1076/jcen.24.5.605.1007
Wildgruber, D., Ackermann, H., Kreifelts, B., & Ethofer, T. (2006). Cerebral processing of linguistic and emotional prosody: fMRI studies. Progress in Brain Research, 156, 249–268. doi:10.1016/S0079-6123(06)56013-3
Zaidel, E. (1976). Auditory vocabulary of the right hemisphere following brain bisection or hemidecortication. Cortex, 12(3), 191–211.