
NeuroImage 82 (2013) 295–305




Emotion modulates activity in the ‘what’ but not ‘where’ auditory processing pathway

James H. Kryklywy d,e, Ewan A. Macpherson c,f, Steven G. Greening b,e, Derek G.V. Mitchell a,b,d,e,⁎

a Department of Psychiatry, University of Western Ontario, London, Ontario N6A 5A5, Canada
b Department of Anatomy & Cell Biology, University of Western Ontario, London, Ontario N6A 5A5, Canada
c National Centre for Audiology, University of Western Ontario, London, Ontario N6A 5A5, Canada
d Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5A5, Canada
e Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 5A5, Canada
f School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 5A5, Canada

Article history: Accepted 8 May 2013; Available online 24 May 2013

Keywords: Auditory localization; Emotion; Auditory processing pathways; Dual processing pathways; Virtual environment; fMRI

Abstract

Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the ‘what’ processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the ‘what’ pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the ‘what,’ but not the ‘where,’ auditory processing pathway.

© 2013 Elsevier Inc. All rights reserved.

⁎ Corresponding author at: Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 5B7, Canada. Fax: +1 519 663 3935. E-mail address: [email protected] (D.G.V. Mitchell).

Introduction

The ability to interact effectively in an environment requires the accurate recognition and localization of surrounding objects and the capacity to prioritize these objects for behavior. One characteristic known to modulate this is the emotional nature of the stimuli (Adolphs, 2008; Lang and Davis, 2006; Pessoa and Ungerleider, 2004; Vuilleumier, 2005). Considerable evidence suggests that emotional visual stimuli gain rapid and often preferential access to the brain's processing resources. At the behavioral level, emotional visual stimuli are detected faster than neutral stimuli (Graves et al., 1981), are more likely to enter into awareness (Amting et al., 2010; Mitchell and Greening, 2012) and can exert significantly greater influence on task-relevant behaviors (Mitchell et al., 2008; Vuilleumier and Driver, 2007). These effects are thought to be conferred by enhanced sensory processing; thus, in the visual domain, emotional stimuli elicit greater activity than similar neutral stimuli within areas of the visual cortex (Morris et al., 1998; Vuilleumier and Driver, 2007). As in the visual domain, studies of auditory processing have demonstrated that the analysis of emotional auditory stimuli occurs rapidly (Goydke et al., 2004; Sauter and Eimer, 2009) and is associated with enhanced activity in sensory (i.e., auditory) cortices (Fecteau et al., 2007; Viinikainen et al., 2012). Despite some emerging work concerning the influence of emotion on the representation of auditory objects, essentially nothing is known about how emotion influences auditory stimulus localization.

There is accumulating evidence that auditory processing occurs within two separate cortical streams (Ahveninen et al., 2006; Alain et al., 2001; Barrett and Hall, 2006; Clarke et al., 2002; Lomber and Malhotra, 2008; Mathiak et al., 2007; Rauschecker, 2012; Rauschecker and Tian, 2000) that may share some similarities with the well-established dorsal and ventral processing streams of the visual system (Haxby et al., 1991; Milner and Goodale, 1993). Spatial cues used for localization are processed primarily in posterior-medial regions of the auditory cortex (Arnott et al., 2004; Bushara et al., 1999; Lomber

et al., 2007), including the posterior superior temporal gyrus (STG) and the transverse temporal gyrus. In contrast, sound identity cues, including features such as pitch, are processed in anterior-lateral regions of auditory cortex along the anterior STG (Altmann et al., 2008; Warren and Griffiths, 2003). However, despite continuous advances toward understanding the neural mechanisms underlying both the enhanced representation of emotion within sensory cortices and our representations of auditory space, the impact of emotion during auditory localization remains unknown. Specifically, it remains unclear whether the enhanced activity observed in prior studies for emotional relative to neutral, non-spatialized auditory stimuli (Fecteau et al., 2007; Viinikainen et al., 2012) would also translate into enhanced auditory stimulus localization and enhanced activity in areas of auditory cortex sensitive to object location.

The potential of auditory virtual environments (AVEs) as a method to examine neural pathways associated with auditory stimulus localization has been described in previous studies (Bohil et al., 2011; Fujiki et al., 2002; Langendijk and Bronkhorst, 2000; Wightman and Kistler, 1989a,b). Previous studies investigating auditory localization have created AVEs using generic head-related transfer functions (HRTFs) generated from measurements of mannequins or a prototypical head shape (Ahveninen et al., 2006; Bushara et al., 1999; Krumbholz et al., 2009). These, however, fail to accommodate the individual differences in head size and pinna structure that alter sound as it enters the ear canals, resulting in imperfect simulation of spatialized sounds (Middlebrooks et al., 2000; Wenzel et al., 1993). Such variables have been shown to influence reactions to and ratings of emotional auditory stimuli (Vastfjall, 2003). Despite its potential importance, we are not aware of any neuroimaging studies utilizing unique AVEs created from individualized HRTFs.

In the present study, we investigated whether the emotion-related enhancements observed in the visual domain at the behavioral (Amting et al., 2010; Graves et al., 1981) and neural levels (Morris et al., 1998; Vuilleumier and Driver, 2007) would also be found during auditory stimulus localization. We hypothesized that positive and negative auditory cues would receive prioritized processing relative to neutral stimuli. We predicted that this prioritization would be reflected by increased accuracy, decreased reaction time, and increased neural activity within the posterior-medial ‘where’ pathway of auditory processing during the localization of emotional compared to neutral sounds. However, as will be described below, the data did not fit this prediction, and instead we found slower response times for emotional auditory stimuli compared to neutral ones. Additionally, consistent with previous studies involving non-spatialized emotional auditory cues (Fecteau et al., 2007), we predicted that anterior-lateral areas of auditory cortex (i.e., the putative ‘what’ processing pathway) would also show enhanced activity for emotional compared to neutral sounds. Furthermore, in light of lesion data suggesting that the what/where pathways are doubly dissociable (Lomber and Malhotra, 2008), we predicted that anterior-lateral regions of auditory cortex would not be modulated by sound location.

To test these predictions, we created AVEs by generating sounds based on each individual's unique HRTFs. While undergoing fMRI, participants located or identified a series of auditory stimuli presented in these virtual environments. The current study consisted of three related tasks. Task 1 was designed as a functional localizer, aimed at independently identifying ROIs specifically related to sound localization while controlling for object identity. Task 2 was conducted in the same scanning session as Task 1. In this task, participants were required to identify the source locations of positive, negative and neutral sounds presented within a virtual auditory environment. This task served two purposes. First, the ‘where’ ROI derived from the Task 1 localizer was applied to the data in Task 2 and interrogated to determine potential effects of emotion on location-sensitive areas of auditory cortex. Second, Task 2 allowed us to perform an exploratory whole-brain analysis examining the effects of, and interactions between, emotion and location during auditory stimulus localization. Contrary to expectations, the results showed that emotion did not modulate regions of auditory cortex sensitive to location. However, a distinct region of anterior lateral temporal cortex identified in this exploratory study was modulated by emotion. This area strongly resembled regions associated with sound-identity processing in previous studies (i.e., the putative ‘what’ pathway; Barrett and Hall, 2006; Warren and Griffiths, 2003). To help determine the degree to which this area could be characterized as part of the ‘what’ auditory pathway, a follow-up localizer was conducted in a subset of participants in a subsequent session. This functional ‘what’ pathway localizer identified ROIs that were modulated by sound identity while location and emotion were held constant. The resulting ROI was extracted and applied to the data generated from Task 2, allowing us to independently test the effects of emotion on the resulting ‘what’ pathway.

Methods

Subjects

Eighteen healthy human subjects (9 male, 9 female) with a mean age of 23.56 years (range 19–35, SD 4.51) completed Tasks 1 and 2. All subjects granted informed consent and were in good mental health, as assessed by a Structured Clinical Interview for DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, 4th Edition). All subjects had normal hearing, normal or corrected-to-normal vision, and were fluent English speakers. Ten of these subjects (5 male, 5 female), with a mean age of 24.3 years (range 19–35, SD 5.42), returned to complete Task 3. All participants were reimbursed for their time at the end of the study. All experiments were approved by the Health Science Research Ethics Board at the University of Western Ontario.

Stimuli and apparatus

Stimuli
Twelve sounds were chosen from the International Affective Digitized Sounds (IADS) stimulus set that were of a neutral, negative or positive affective nature as defined by standardized ratings (Bradley and Lang, 1999). Each stimulus category contained two single-source non-verbal human vocalizations, one multi-source non-verbal human vocalization, and one non-human sound. All sounds were presented with a variable duration of 2000–3000 ms (balanced across stimuli; variable durations were used to facilitate deconvolution of the BOLD signal). Importantly, all stimuli were matched for their onset amplitude and root-mean-square amplitude, which ensured that power and energy were consistent. Positive and negative stimuli were balanced for arousal ratings (mean positive = 6.58, mean negative = 6.92) and valence levels (positive = 7.56, negative = 2.36; absolute neutral = 5). In addition, to create a novel unrecognizable noise for Task 1, the 12 task sounds of Task 2 were merged into a single audio file, segmented into fragments shorter than 3 ms, and subsequently scrambled, reconstituted and cropped to a duration of 15,000 ms. This sound maintains the average long-term power spectrum of the Task 2 stimulus set while remaining unidentifiable.

In order to localize neural regions that were sensitive to stimulus identity, a novel set of nine neutral sounds was chosen from the IADS (mean valence 5.28, SD 0.98) for use in Task 3. These nine sounds were human, animal, or machine in origin (3 of each). An additional three segments of scrambled noise (identical to that used in Task 1) comprised a fourth sound class. All sounds in this set were 5000 ms in duration, and were matched for onset amplitude and root-mean-square amplitude.

Auditory virtual environment
Throughout the experiment, all sounds were presented within an auditory virtual environment through Sensimetrics MRI-compatible insert earphones.
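As a concrete illustration, the fragment-scrambling construction of the Task 1 noise (merge the sounds, cut the result into sub-3-ms fragments, shuffle, reconstitute, and crop to 15,000 ms) can be sketched in NumPy. This is our reconstruction, not the authors' code; the exact fragment duration and shuffling scheme below are assumptions.

```python
import numpy as np

def scramble_sounds(signals, fs=44100, frag_dur=0.0029, out_dur=15.0, seed=0):
    """Merge the input sounds, cut into fragments shorter than 3 ms,
    shuffle the fragments, reconstitute, and crop to the target duration.
    Fragment-wise shuffling destroys stimulus identity while approximately
    preserving the average long-term power spectrum of the source set."""
    merged = np.concatenate(signals)
    frag = int(frag_dur * fs)                 # ~127 samples, i.e. < 3 ms
    usable = (len(merged) // frag) * frag     # trim to whole fragments
    fragments = merged[:usable].reshape(-1, frag)
    rng = np.random.default_rng(seed)
    scrambled = fragments[rng.permutation(len(fragments))].reshape(-1)
    return scrambled[: int(out_dur * fs)]     # crop to 15,000 ms
```

Because the fragments are merely reordered, global amplitude statistics such as the RMS level are essentially unchanged, consistent with the amplitude matching described above.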

Volume was controlled by a Dayton Audio Class T digital amplifier. Initial volume was set to 88–90 dB and adjusted slightly to the comfort level of each individual participant. To induce the perceptual experience of spatialized sounds using insert-style headphones, HRTFs were measured individually for each subject. To obtain the HRTF measurements, miniature electret microphones (Knowles FG3629) were mounted facing outwards in foam earplugs inserted flush with the ear canal entrances. The participant stood on an adjustable platform which positioned his or her head at the height of a 1.5-m radius array of 16 loudspeakers (Tannoy i5 AW) spaced at 22.5-degree intervals around the listener. The array was located in a large hemi-anechoic chamber, and the floor area within the array was covered with acoustical foam to attenuate reflections. The impulse response from each loudspeaker to each ear microphone was measured using the averaged response to 32 periods of a 2047-point maximum-length sequence signal (Rife and Vanderkooy, 1989) presented at a sampling rate of 48,828 Hz via a Tucker Davis Technologies RX6 real-time processor and QSC CX168 power amplifiers. During the measurement procedure, head motion was minimized by monitoring head position with an electromagnetic tracker (Polhemus FASTRAK) while participants were asked to aim the light from a head-mounted LED accurately at a frontal target position. To correct for individual loudspeaker characteristics, each HRTF measurement was equalized in the frequency domain by dividing by the appropriate loudspeaker transfer function measured with a reference microphone (Bruel & Kjaer 4189) placed at the center of the array in the absence of the listener's head. The impulse responses were windowed in post-processing to remove any residual reflections. The resulting HRTF measurements were resampled to 44.1 kHz and combined with the manufacturer-supplied equalization filters for the headphones to create a new set of auditory filters limited to the 10-kHz bandwidth of the headphones. These individualized filters were then applied to each sound by time-domain convolution to create the experience of a virtual 3-dimensional auditory space; sounds presented through headphones were perceived to originate from controlled physical locations external to the participants.

Sounds for Tasks 1 and 2 were spatialized to four locations along the horizontal plane (−90°, −22.5°, 22.5° and 90° from the sagittal midline; negative = left). Sounds for Task 3, along with an additional 500-ms artificially generated white noise burst, were spatialized to a location directly along the sagittal midline, in front of the listener. The entire set of stimuli (across locations, sound sources, and listeners) was normalized such that each stimulus had the same root-mean-square level computed on the concatenated left- and right-ear signals.

Procedure

Task 1: isolating location-sensitive regions of auditory cortex (the ‘where’ pathway)
Prior to entering the scanner, participants were acclimatized to the auditory virtual environment by completing modified versions of Task 1 and Task 2. For a detailed review of these training tasks, see Supplementary materials. In Task 1, 18 participants (9 female, 9 male) performed a pure auditory localization task to help identify neural regions involved in representing auditory space. This task was designed as a functional localizer allowing us to generate an ROI corresponding to the ‘where’ pathway of auditory processing. Participants were instructed to close their eyes and listen to a series of 15,000-ms scrambled sounds. Each sound was presented in one of the four virtual locations (8 times per location for a total of 32 events broken into 2 runs). Participants were instructed to actively fixate on the sound location for the duration of its presentation, and to indicate with a button press the location of the perceived source during the sound presentation. Between each sound presentation, there was a 15,000-ms period of silence. The order of presented sound locations was randomized for each run.

Task 2: locating emotional sounds within auditory space
In Task 2, the same 18 participants (9 female, 9 male) completed an emotional auditory localization task. Participants were asked to close their eyes for the duration of each scan to reduce possible confounds of visual feedback. Each trial began with a white noise burst (500 ms) spatialized to a location along the sagittal midline, in front of the participant. This acted as an ‘auditory fixation cross’ to reorient the subject's attention to a standard central point and was immediately followed by a spatialized target sound (2000–3000 ms) and a period of silence (2000–3000 ms). The target was randomly presented in one of the four possible locations. The participant was required to locate the target sound with a button press indicating the perceived location as quickly and accurately as possible. Over the course of a single run, participants heard each of the 12 sounds in every location a single time for a total of 48 trials (16 trials per emotion condition). Additionally, there were 16 silent ‘jitter’ trials incorporated into each run, for a total of 64 trials per run. All trials were presented in a random order within each run. The task run was repeated six times for a total of 384 trials.

Task 3: isolating identity-sensitive regions of auditory cortex (the ‘what’ pathway)
Task 3 was conducted during a follow-up scanning session in a subset of the original sample to identify auditory areas associated with auditory object identification. This additional functional localizer scan was initiated in response to the results of Tasks 1 and 2, outlined below, in order to better explore the potentially doubly dissociable effects of emotion on the auditory ‘what’ and ‘where’ processing pathways. Specifically, it was designed to independently derive functional ROIs corresponding to the ‘what’ pathway of auditory processing. This type of cross-validation, using separate sources for region identification and signal estimation, avoids problems of circularity in interpreting neuroimaging results (Vul and Pashler, 2012), allowing us to interrogate the ROIs over the time course of Task 2 without fear of statistical bias (Esterman et al., 2010; Vul and Kanwisher, 2010). In the scanner, 10 participants (5 female, 5 male) performed a block-design auditory identification task similar to that used in Task 1. During this task, each 15,000-ms sound presentation consisted of a triad of stimuli belonging to a single object class presented directly in front of the listener along the mid-sagittal plane within the AVE. There were four possible object classes: human, animal, machine and scrambled sounds. Over the course of two runs, each object class was presented 8 times, for a total of 32 events. Participants were instructed to actively attend to the identity of the sound objects for the duration of their presentation, and to indicate the identity of the presented sounds (human, animal, machine or scrambled) via button press. Between the presentations of each triad, there was a 15,000-ms period of silence. Presentation order was randomized.

Behavioral data analysis

To investigate the possible effect of emotional content on sound localization, we recorded reaction times and accuracy levels for the duration of Task 2. A 4 (location) × 3 (emotion) ANOVA was conducted for each of the behavioral measures. Follow-up pair-wise t-tests were performed to delineate the nature of any significant effects.

Imaging

MRI data acquisition
Subjects were scanned during all task performances using a 3 T Siemens scanner with a 32-channel head coil. fMRI images were acquired with a T2*-weighted gradient echo-planar imaging sequence (repetition time [TR] = 2500 ms, echo time [TE] = 36 ms; field of view [FOV] = 18.8 cm, 78 × 78 matrix). Tasks 1 and 2 took place over a single scan session. Task 3 took place in a separate session.
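The loudspeaker-equalization step described in the Methods, dividing each measured response by the loudspeaker's own transfer function in the frequency domain, can be sketched as follows. This is an illustrative sketch under our own assumptions (function names are ours, and the regularization term `eps` is our addition for numerical stability), not the authors' processing code.

```python
import numpy as np

def equalize_hrtf(measured_ir, loudspeaker_ir, n_fft=1024, eps=1e-8):
    """Remove the loudspeaker's coloration from a measured head-related
    impulse response by spectral division (regularized Wiener-style
    inverse), returning the equalized impulse response."""
    H_meas = np.fft.rfft(measured_ir, n_fft)
    H_spk = np.fft.rfft(loudspeaker_ir, n_fft)
    # Divide by the speaker transfer function; eps guards near-zero bins.
    H_eq = H_meas * np.conj(H_spk) / (np.abs(H_spk) ** 2 + eps)
    return np.fft.irfft(H_eq, n_fft)
```

With `n_fft` comfortably longer than the combined impulse responses, the circular division is equivalent to linear deconvolution, so convolving a reference response through a simulated loudspeaker and equalizing recovers the original filter.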

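The final rendering step, applying the individualized filters to each sound by time-domain convolution, reduces to convolving the mono stimulus with the left- and right-ear head-related impulse responses for the target location. A minimal sketch (variable and function names are ours):

```python
import numpy as np

def spatialize(sound, hrir_left, hrir_right):
    """Render a mono sound at one virtual location by time-domain
    convolution with the listener's left- and right-ear HRIRs,
    yielding a 2 x N binaural signal for headphone presentation."""
    return np.stack([np.convolve(sound, hrir_left),
                     np.convolve(sound, hrir_right)])
```

Interaural time and level differences are carried implicitly by the two impulse responses: a lateral source appears as, among other cues, a relative delay between the two output channels.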
Table 1
Results of Experiment 1: accuracy and time to localization for all regressor conditions.

Emotion(a)   Location   Time to localization (ms; correct trials only)   Localization accuracy (%)
Positive     −90°       775.1 (242.2)                                    83.8 (19.1)
             −22.5°     756.2 (199.2)                                    85.9 (16.6)
             22.5°      804.5 (199.4)                                    88.2 (14.1)
             90°        808.5 (251.9)                                    82.2 (20.0)
Negative     −90°       736.6 (210.0)                                    87.0 (15.3)
             −22.5°     776.2 (218.2)                                    84.3 (15.4)
             22.5°      779.9 (201.0)                                    83.1 (18.1)
             90°        733.6 (209.1)                                    84.7 (17.8)
Neutral      −90°       686.5 (153.3)                                    89.4 (12.0)
             −22.5°     736.5 (194.7)                                    85.6 (20.9)
             22.5°      751.7 (218.4)                                    88.2 (17.5)
             90°        727.5 (224.6)                                    85.9 (18.5)

Standard deviations are in parentheses.
a Significant effects of emotion were identified in the reaction time data. These effects were driven by significantly slower localization of positive and negative stimuli when compared to neutral stimuli (p < 0.001 and p < 0.05, respectively).

For all functional runs, complete brain coverage was obtained with 38 interleaved slices of 2.4 × 2.4 mm in-plane resolution and a slice thickness of 2.4 mm, forming 2.4-mm isovoxels. A series of 148 functional images was collected for each run of Task 2 and a series of 202 functional images was collected for each run during Tasks 1 and 3. A high-resolution T1-weighted anatomical scan covering the whole brain (TR = 2300 ms, TE = 4.25 ms; FOV = 25.6 cm, 192 axial slices; voxel size = 1 mm isovoxels; 256 × 256 matrix) was obtained in both scan sessions.

fMRI analysis
An analysis of the fMRI data was conducted using Analysis of Functional NeuroImages (AFNI) software (Cox, 1996) at both the individual and group levels. Motion correction was performed by registering all volumes to the functional volume acquired temporally adjacent to the anatomical scan. The data set for each participant was spatially smoothed (using an isotropic 4-mm Gaussian kernel) to reduce the influence of individual differences. The time series data were normalized by dividing the signal intensity of a voxel at each time point by the mean signal intensity of that voxel for each run and multiplying the result by 100; resultant regression coefficients thus represent the percent signal change from the mean activity. Regressor files modeling the presentation time course of relevant stimuli were created for each of the 12 conditions of Task 2 (4 locations × 3 emotions) during correct trials only, and for each of the four conditions of Task 1 (4 locations) and Task 3 (4 classes of objects). The relevant hemodynamic response function was fit to each regressor to perform linear regression modeling. To account for voxel-wise correlated drifting, a baseline plus linear drift and quadratic trend were modeled for each time series. This resulted in a β coefficient and t value for each voxel and regressor. To facilitate group analysis, each individual's data were transformed into the standard space of Talairach and Tournoux. Following this, a 4 (location) × 3 (emotion) ANOVA was performed on the imaging data from Task 2, while two separate one-way ANOVAs were performed on the imaging data from Tasks 1 and 3 to examine the effects of location and sound identity, respectively (4 levels of each). ANOVAs at this level were conducted using the AFNI function GroupAna, yielding an F value for each main effect and interaction at each voxel. Percent signal change from the mean activity was extracted from significant clusters of activation for each relevant regressor using the AFNI function 3Dmaskave. To correct for multiple comparisons, a spatial clustering operation was performed using AlphaSim with 1000 Monte Carlo iterations on the whole-brain EPI matrix.

Results

Behavioral results

To delineate the effects of emotion on auditory localization, participants were presented with a series of positive, negative and neutral sounds within a virtual auditory environment while undergoing fMRI. Participants were instructed to localize these sounds as quickly and accurately as possible (Task 2). A 4 (location) × 3 (emotion) ANOVA was conducted on the reaction time and error data (Table 1) to determine the impact of stimulus emotion on auditory localization at a behavioral level. This yielded a significant main effect of emotion (F(2, 34) = 12.617, p < 0.005) on reaction times. We had originally predicted that emotion would enhance the localization of auditory stimuli. However, contrary to these predictions, follow-up t-tests revealed that this effect was driven by a

Table 2
Neural regions modulated by location, emotion and identity within the auditory domain.

Effect                               R/L   Location             BA            X     Y     Z    Vol. (mm3)
‘Where’ ROI(a)                       R     STG, TTG             41/22/13      44    −22   12   9342
                                     L     STG, TTG             41/22/13      −44   −25   10   3807
Whole brain analysis: location(a)    R     STG, TTG, MTG        42/41/39/37   48    −27   12   22,113
                                     L     STG, TTG             42/41         −48   −23   10   10,908
                                     R     IPL                  7             17    −68   41   5319
                                     R     Precuneus            7             10    −46   47   3186
                                     R     Pre-central gyrus    6             29    −7    52   945
                                     R     Post-central gyrus   4/3           48    −16   52   729
                                     R     IPL                  40            43    −47   50   567
                                     R     Post-central gyrus   3             23    −32   59   513
                                     R     MOG                  19            32    −76   24   459
Whole brain analysis: emotion(b)     R     STG                  22/21         55    −8    2    8668
                                     L     STG                  22/21         −54   −11   2    4995
Whole brain analysis: interaction    R/L   Precuneus, SPL, CC   31/7          −6    −54   45   4728
                                     R/L   MOG                  19/18/17      3     −83   9    1617
                                     R     STG                  42/41/22      60    −16   6    387
‘What’ ROI(c)                        R     STG, TTG             42/41/22      54    −20   8    4266

Significant clusters are thresholded at p < 0.005 (corrected to p < 0.05). STG = superior temporal gyrus; TTG = transverse temporal gyrus; MTG = middle temporal gyrus; IPL = inferior parietal lobule; SPL = superior parietal lobule; MOG = middle occipital gyrus; CC = cingulate cortex. Note: X, Y and Z are Talairach coordinates and refer to the center of mass.
a Effects of location in both main effects are driven by significantly increased neural activity for sounds presented in the contralateral hemifield. This is consistent in all cortical areas.
b Effects of emotion are driven by a significantly increased neural response to positive and negative stimuli compared to neutral stimuli.
c Effects of sound identity were driven by a significantly increased neural response to identifiable stimuli (human, animal, machine) rather than scrambled sounds.

significantly slower localization of emotional sounds (both positive and negative) when compared to neutral sounds (p < 0.001 and p < 0.05, respectively). There was no significant effect of sound location (F(3, 51) = 0.616, p > 0.05) or location × emotion interaction (F(6, 102) = 1.891, p > 0.05) on reaction time. The same analysis applied to the localization error data yielded no significant effects for either sound location (F(3, 51) = 0.158, p > 0.05) or emotion (F(2, 34) = 1.783, p > 0.05), nor did it yield a significant location × emotion interaction (F(6, 102) = 1.320, p > 0.05).

Imaging results

Interrogating location-sensitive regions of auditory cortex (the ‘where’ pathway)
In order to determine the impact of sound location independent of sound identity, location-related activity was assessed during the localization of unrecognizable scrambled sounds (Task 1). A one-way ANOVA (four locations) conducted on the whole-brain EPI data obtained during Task 1 identified a significant effect of sound location (p < 0.005; corrected at p < 0.05) within regions of the temporal cortex (Table 2). This activation included primary auditory cortex (BA 41/42) and extended medially along the transverse temporal gyrus (BA 13; Fig. 1A). This region showed the greatest activity for stimuli positioned far in the contralateral hemifield, decreasing activity for midline sounds, and significantly less activity for ipsilaterally positioned sounds (p < 0.005; Fig. 1B). This area was then used as an auditory ‘where’ pathway ROI and applied to the BOLD data collected in Task 2. The percent signal change for all locations and emotional categories within this ROI was extracted using 3Dmaskave and subjected to a 4 (location) × 3 (emotion) ANOVA. As expected, activity within the ‘where’ pathway ROI was significantly modulated by sound location (F(3, 51) = 38.435, p < 0.001), with increased activity for sounds presented in the contralateral hemifield relative to the ipsilateral hemifield (Fig. 1C). Contrary to our original predictions, however, there was no significant main effect of emotion (F(2, 34) = 0.776, p > 0.05), nor was there a significant location × emotion interaction (F(6, 102) = 1.164, p > 0.05; Fig. 1C). Thus, emotion did not modulate activity in location-sensitive areas of the auditory cortex.

Whole brain analysis: locating emotional sounds in auditory space
An exploratory whole-brain analysis by way of a 4 (location) × 3 (emotion) ANOVA was conducted on the data from Task 2 to identify neural regions that varied as a function of location and emotion. A significant main effect of location (p < 0.005; corrected at p < 0.05) was identified in regions of temporal and parietal cortices (Table 2). This activation extended from the primary auditory cortices (BA 41/42) posteriorly and medially along the transverse temporal gyrus and into the inferior parietal lobule (Fig. 2A). Percent signal change from the mean activity for each significantly activated voxel was extracted from all regions displaying significant main effects or interactions using 3Dmaskave. Contributing to this effect, the activity in the posterior superior temporal gyrus (posterior areas of BA 13) varied as a function of location; the activity was greatest for stimuli presented in the far contralateral hemifield of the auditory virtual space, and decreased progressively with distance from that location (Fig. 2B). This general pattern of activation emerged bilaterally, with each hemisphere showing an inverse pattern of activation to the other. This result is consistent with studies that describe auditory stimulus location encoding as involving differential activation of units across the two hemispheres, as opposed to local encoding within specific nuclei or regions (Grothe et al., 2010; Stecker et al., 2005). Similar patterns of activity were identified in the areas of precuneus, inferior parietal lobule, pre/post-central gyrus and medial occipital gyrus that exhibited main effects of sound location.

A significant main effect of emotion was identified in the bilateral temporal cortex (p < 0.005; corrected at p < 0.05; Table 2). Activation extended from the primary auditory cortices (BA 41/42) anteriorly and inferiorly along the superior temporal gyrus (to BA 22; Fig. 2C). This region showed significantly greater activation bilaterally during the presentation of emotional sounds (positive and negative) when compared to neutral sounds (p < 0.001 and p < 0.005, respectively; Fig. 2D).

A significant interaction was identified within regions of the right auditory cortex (BA 41/42/22; Fig. 3A), bilateral precuneus (BA 31/7; Fig. 3C), and bilateral occipital cortex (BA 17/18/19; Fig. 3E; p < 0.005, corrected at p < 0.05; Table 2; Figs. 3A–F). To identify the nature of these effects, a series of one-way ANOVAs was performed, and where significant (p < 0.05), these were followed up with paired t-tests. First, the impact of location within each emotion was examined. The right auditory cortex showed greater activity for positive stimuli presented in the contralateral relative to the ipsilateral hemifield (p < 0.01); however, no such effect was observed for neutral or negative stimuli (p > 0.05). The impact of emotion within each location was also examined. Sounds coming from the contralateral hemifield (i.e., 90° and 22.5° left of the sagittal midline) elicited greater activity in the right auditory cortex for emotional relative to neutral sounds (positive > neutral, p < 0.001; negative > neutral, p < 0.01; positive > negative, p < 0.05). These results are illustrated in Fig. 3B.

Bilateral precuneus and occipital lobe displayed strikingly similar patterns of activation. Within both regions, the impact of location within emotion was significant during the presentation of negative (p < 0.005) but not neutral (p > 0.05) sounds. Within bilateral precuneus, the impact of location during the presentation of positive stimuli was significant (p < 0.05), while in the occipital lobe it was not (p > 0.05). These results are illustrated in Fig. 3D/F. The impact of emotion within each location for both regions was also examined, yielding similar patterns of activation. Sounds coming from the medial right location (i.e., 22.5° right of the sagittal midline) significantly modulated activity in both precuneus and occipital regions across emotional categories (positive > neutral, p < 0.05; positive > negative, p < 0.005; neutral > negative, p < 0.005). Significant changes in activity as a function of emotion were not found in any of the other locations (p > 0.05). These results are illustrated in Fig. 3D/F. The effects in the visual cortex were unexpected, as subjects had their eyes closed during the task; however, it is noteworthy that occipital lobe activity has been associated with mental imagery in the absence of visual input (Borst and Kosslyn, 2008; Kosslyn et al., 1995). The effects may therefore be due to the relative propensity of the different classes of sounds to induce mental imagery.

Interrogating identity-sensitive regions of auditory cortex (the ‘what’ pathway)
The whole-brain analysis (Task 2) identified a separate anterior-lateral area of auditory cortex that was modulated by emotion. This region resembled areas implicated in the putative ‘what’ auditory processing stream (Altmann et al., 2008; Warren and Griffiths, 2003), raising the question of whether emotion has dissociable effects on the ‘what’ versus ‘where’ auditory processing streams. To test this hypothesis, a follow-up localizer scan (Task 3) was performed to extract a sound-identity-sensitive ‘what’ pathway ROI (independent of location and emotion) that could then be applied to the Task 2 data. A one-way ANOVA (with four sound categories) was conducted on the EPI data, revealing object-identity-sensitive areas within the temporal cortex (p < 0.005; corrected at p < 0.05; Table 2). This activation included regions of cortex anterior and lateral to primary auditory cortex along the superior temporal gyrus (BA 22; p < 0.005; corrected at p < 0.05; Fig. 4A). Further exploration of this effect revealed that significantly greater activity was elicited in this area for sounds in the human, animal and machine categories compared to scrambled sounds (p < 0.001; Fig. 4B). In addition, this region showed significantly greater activity for biologically generated sounds (human and animal) relative to machine-generated sounds (p < 0.005 and p < 0.001, respectively). This
To test this hypothe- function of location; the activity was greatest to the stimuli presented sis, a follow-up localizer scan (Task 3) was performed to extract a sound in the far contralateral hemifield of the auditory virtual space, and de- identity sensitive ‘what’ pathway ROI (independent of location and creased progressively with a distance from that location (Fig. 2B). emotion) that could then be applied to the Task 2 data. A one-way This general pattern of activation emerged bilaterally, with each ANOVA (with four sound categories) was conducted on the EPI data re- hemisphere showing an inverse pattern of activation to the other. vealing object-identity sensitive areas within the temporal cortex This result is consistent with studies that describe auditory stimulus (p b 0.005; corrected at p b 0.05; Table 2). This activation included re- location encoding as involving differential activation of units across gions of cortex anterior and lateral to primary auditory cortex along the two hemispheres, as opposed to local encoding within specific the superior temporal gyrus (BA 22; p b 0.005; corrected at p b 0.05; nuclei or regions (Grothe et al., 2010; Stecker et al., 2005). Similar Fig. 4A). Further exploration of this effect revealed that a significantly patterns of activity were identified in the areas of precuneus, inferior greater activity was elicited in this area for sounds in the human, animal parietal lobule, pre/post-central gyrus and medial occipital gyrus that and machine categories compared to scrambled sounds (p b 0.001; exhibited main effects of sound location. Fig. 4B). In addition, this region showed a significantly greater activity A significant main effect of emotion was identified in the bilateral to biologically generated sounds (human and animal) relative to temporal cortex (p b 0.005; corrected at p b 0.05; Table 2). Activation machine-generated sounds (p b 0.005, p b 0.001 respectively). This 300 J.H. Kryklywy et al. 
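The localizer statistic described above (a one-way ANOVA over the four source locations, computed voxelwise) is the standard between-/within-condition variance ratio. The study itself ran this analysis in AFNI (Cox, 1996); the block below is only an illustrative pure-Python sketch of the statistic, applied to made-up numbers rather than real EPI data:

```python
from statistics import mean

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA.

    `groups` is a list of lists: one list of observations per
    condition (here, per sound-source location)."""
    k = len(groups)                        # number of conditions
    n_total = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)

    # Between-condition sum of squares: distance of each condition
    # mean from the grand mean, weighted by condition size.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-condition sum of squares: spread around each condition mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical percent-signal-change values at one voxel for four
# source locations (toy numbers, not the study's data).
loc_data = [[0.50, 0.62, 0.55], [0.41, 0.44, 0.47],
            [0.22, 0.25, 0.28], [0.10, 0.05, 0.15]]
f, df1, df2 = one_way_anova_f(loc_data)
```

With (k − 1, N − k) degrees of freedom, the resulting F can be referred to an F distribution for the voxel's p-value; the cluster-level correction reported in the text is a separate step.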

[Fig. 1 graphics. (A) ROI: location-sensitive regions of auditory cortex (Task 1, main effect of location; z = 7). (B) Task 1, effect of location: percent signal change by source location (-90°, -22.5°, 22.5°, 90°) for each hemisphere. (C) ‘Where’ ROI applied to Task 2, effect of location: percent signal change by source location for each hemisphere. Caption follows.]

Fig. 1. Neural regions significantly modulated by sound location (Task 1). (A) Activity in posterior aspects of bilateral superior temporal gyrus is significantly modulated by sound location. (B) These effects of location were parametric, wherein cortical activation increases as a sound is presented more eccentrically in the contralateral hemifield. (C) A location-sensitive ‘where’ ROI generated from Task 1 was applied to the EPI data acquired during Task 2. As expected, a significant main effect of location was identified in this region (F(3, 51) = 38.435, p < 0.001). The nature of this effect was consistent with the parametric location effects reported above. Significant main effects of stimulus emotion were not found within this ROI (F(2, 34) = 0.776, p > 0.05), nor were any location × emotion interaction effects (F(6, 102) = 1.164, p > 0.05). [* p < 0.01; ** p < 0.005; *** p < 0.001].

This identity-related ROI was then applied to the BOLD data collected in Task 2, and the percent signal change was extracted from each mask for all locations and emotional categories using 3dmaskave, and subjected to a 4 (location) × 3 (emotion) ANOVA. Within the ‘what’ pathway ROI, activity was found to be significantly modulated by both sound location (F(3, 51) = 26.223, p < 0.001) and emotion (F(2, 34) = 8.914, p < 0.005; Fig. 4C). Activity in this region was significantly increased for sounds presented in the contralateral hemifield when compared with sounds presented in the ipsilateral hemifield (p < 0.001). Additionally, this region showed enhanced activation for positive and negative versus neutral stimuli (p < 0.005 and p < 0.05 respectively). Lastly, a significant location × emotion interaction was observed in this region (F(6, 102) = 2.450, p < 0.05).

To identify the nature of the interaction effects in the ‘what’ pathway, a series of one-way ANOVAs were performed. First, the impact of location within each emotion was examined. Within this ROI, there was greater activity for both positive and negative stimuli presented in the contralateral relative to ipsilateral hemifield (p < 0.001 and p < 0.05 respectively); however, no such effect was observed for neutral stimuli (p > 0.05). The interaction was further examined by comparing the impact of emotion within each location; however, no significant effects were identified for this contrast (p > 0.05). This effect contrasts with that found in the ‘where’ pathway ROI, which featured a significant effect of location, but no significant effect of emotion or location × emotion interaction. Furthermore, the coordinates of the interaction identified during Task 2 are more closely related to the coordinates of the ‘what’ ROI compared to the ‘where’ ROIs, so this effect is not unexpected. Conjunction analyses confirmed that ROIs generated from Task 1 (‘where’ localizer) and Task 3 (‘what’ localizer) overlapped with the main effects of location and emotion respectively in Task 2 (see Supplementary materials). It should be noted that Task 3 involved a subset of the original sample used in Tasks 1 and 2, and therefore had less power to define the ROI. Nevertheless, the fact that this independently derived ROI shows the same functional properties exhibited in the emotion-sensitive regions uncovered in Task 2 (despite reduced power) lends further support to the conclusion that the auditory ‘what’ pathway is modified by both emotion and location characteristics.

Discussion

Considerable evidence from visual studies suggests that emotional stimuli gain rapid and often preferential access to the brain's processing resources. Although less work has been conducted in the auditory domain, enhanced effects of emotion on auditory cortex have also been observed across multiple investigative techniques, including fMRI (Wiethoff et al., 2008) and NIRS (Plichta et al., 2011). However, essentially nothing is known about how emotion influences auditory stimulus localization. In the current study, we used fMRI in concert with individualized virtual auditory environments to investigate the effect of emotion on sound localization.
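Mechanically, the ROI analyses described above involve three steps: average the BOLD signal over the ROI mask (this is what AFNI's 3dmaskave does), express condition means as percent signal change, and submit the resulting subject × condition table to a repeated-measures ANOVA, in which the condition × subject interaction serves as the error term. A minimal pure-Python sketch of these steps on hypothetical toy values (not the study's data; 3dmaskave itself operates on 3D+time datasets):

```python
from statistics import mean

def mask_average(volume, mask):
    """Mean of voxels where mask is True (a minimal 3dmaskave analogue,
    shown here on a flattened 1-D 'volume')."""
    return mean(v for v, m in zip(volume, mask) if m)

def percent_signal_change(cond_mean, baseline_mean):
    """Percent signal change of a condition mean relative to baseline."""
    return 100.0 * (cond_mean - baseline_mean) / baseline_mean

def rm_main_effect_f(table):
    """Main-effect F for one within-subject factor.

    `table[i][j]` = measure for subject i at factor level j. The error
    term is the level x subject interaction, as in a standard
    univariate repeated-measures ANOVA."""
    n, a = len(table), len(table[0])
    grand = mean(x for row in table for x in row)
    level_means = [mean(row[j] for row in table) for j in range(a)]
    subj_means = [mean(row) for row in table]

    ss_a = n * sum((m - grand) ** 2 for m in level_means)
    ss_err = sum((table[i][j] - level_means[j] - subj_means[i] + grand) ** 2
                 for i in range(n) for j in range(a))
    df_a, df_err = a - 1, (a - 1) * (n - 1)
    return (ss_a / df_a) / (ss_err / df_err), df_a, df_err

# Toy example: average two in-mask voxels, then convert to % change.
roi_mean = mask_average([1.0, 2.0, 3.0, 4.0], [False, True, True, False])
psc = percent_signal_change(103.0, 100.0)  # -> 3.0
```

For the full 4 (location) × 3 (emotion) design, each main effect is tested the same way after collapsing over the other factor; a two-way repeated-measures ANOVA additionally yields the interaction term reported in the text.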

[Fig. 2 graphics. (A) Whole brain analysis: main effect of sound location in auditory cortex (x = 49, z = 12). (B) Task 2, effect of location: percent signal change by source location (-90°, -22.5°, 22.5°, 90°) for each hemisphere. (C) Whole brain analysis: main effect of sound emotion (x = 60, z = 3). (D) Task 2, effect of emotion: percent signal change by emotion (positive, negative, neutral) for each hemisphere. Caption follows.]

Fig. 2. Neural regions demonstrating significant main effects of sound location and emotion in a whole brain analysis (Task 2). (A) Posterior superior temporal gyrus activity was significantly modulated bilaterally as a function of sound location. (B) These effects of location are parametric, wherein cortical activation increases as a sound is presented more eccentrically in the contralateral hemifield. (C) Anterior superior temporal gyrus activity bilaterally was significantly modulated by sound emotion. (D) These effects of emotion are driven by significantly greater activation during the presentation of positive and negative relative to neutral sounds. The whole-brain analysis was conducted at a two-tailed threshold of p < 0.005, and corrected to a family-wise error rate of p < 0.05. [* p < 0.01; ** p < 0.005; *** p < 0.001].

Surprisingly, participants were significantly slower to localize positive and negative sounds relative to neutral ones. Moreover, activity in an independently identified location-sensitive region of auditory cortex was not modulated by emotion. Subsequent whole-brain analyses were conducted on a task involving the localization of emotional and neutral stimuli. This analysis revealed that enhanced activity to positive and negative stimuli was observed in anterior-lateral areas of auditory cortex irrespective of location. In contrast, posterior-medial regions of auditory cortex, as well as the inferior parietal lobule and precuneus, were modulated by location, irrespective of emotion. Both the response of anterior-lateral regions of auditory cortex to sound location and the lack of response of posterior-medial regions of auditory cortex to emotion ran contrary to original predictions. These unexpected results raised the possibility that emotional sounds augment activity in the anterior-lateral ‘what’ pathway, but not the posterior-medial ‘where’ pathway during auditory localization. To more clearly delineate the functional significance of the divisions identified, we conducted an additional functional localizer scan to independently identify regions of auditory cortex that responded to changes in sound identity (i.e., areas associated with encoding ‘what’ in dual pathway models of auditory processing; Alain et al., 2001; Altmann et al., 2008; Rauschecker and Tian, 2000; Warren and Griffiths, 2003). This functionally-derived ROI was then applied to the original study. Collectively, the results support the conclusion that whereas sound location modulates activity within the ‘what’ and ‘where’ functionally-derived pathways, emotion modulates neural activity in the ‘what’ but not the ‘where’ pathway.

Regions associated with processing auditory object location

A main effect of location was observed in bilateral temporal cortex (Tasks 1 and 2), precuneus, and inferior parietal cortex (Task 2 only). Of note, the location-related effects were characterized in posterior-medial auditory cortex by greatest activity to stimuli positioned furthest in the contralateral hemifield, decreasing activity to midline sounds, and significantly less activity to ipsilaterally positioned sounds. This general pattern of activation emerged bilaterally, with each hemisphere showing the inverse pattern of activation. ROIs generated from Task 1 confirmed that independently defined location-sensitive regions in STG were not modulated by emotion.

Regions associated with processing auditory object identity

In the current study, areas within STG were implicated in processing both the emotional and object identity features of auditory stimuli (Tasks 2 and 3). The identity-related effect identified in Task 3 was characterized in anterior-lateral auditory cortex by greater activity to biological relative to non-biological sounds, but did not distinguish between human and non-human sounds. This result appears to contradict previous work that implicates the superior temporal sulcus (STS) specifically in human voice processing (Belin et al., 2000; Ethofer et al., 2012). One critical difference between these studies and the current study is the treatment of non-vocal human sounds (i.e., sounds that do not involve vocal fold vibration). Previous work found that contrasting vocal human sounds with non-vocal sounds yielded voice-selective activity

[Fig. 3 graphics. Location × emotion interactions (Task 2): (A/B) right auditory cortex (x = 62, y = -13), (C/D) precuneus (x = -2, y = -57), (E/F) occipital lobe; percent signal change by source location (-90°, -22.5°, 22.5°, 90°) and emotion (neutral, negative, positive). Caption follows.]

Fig. 3. Neural regions demonstrating a significant location × emotion interaction in a whole brain analysis (Task 2). (A) Right superior temporal gyrus was significantly modulated as a function of a location × emotion interaction. (B) These effects are driven by an increased response to emotional (both positive and negative) relative to neutral sounds when presented in the contralateral hemifield (p < 0.001 and p < 0.01 respectively). (C/E) A significant location × emotion interaction emerged in bilateral precuneus and occipital lobe. (D/F) Within both the precuneus and occipital cortex, the impact of location within each emotion was significant during the presentation of negative and positive (p < 0.005) but not neutral (p > 0.05) sounds. [* p < 0.01; ** p < 0.005; *** p < 0.001].

of the STS (Belin et al., 2000). Other studies yielding similar effects in STS contrasted human vocal sounds (human non-vocal sounds were excluded) with both animal and environmental sounds (Ethofer et al., 2012). The current study, however, classed all sound originating from a human source into the ‘human’ category (e.g., clapping and crying were both ‘human’). This regressor was designed to be an easily distinguishable auditory class, and not to represent neural changes associated with vocal processes. As such, it included one vocal and two non-vocal stimuli to eliminate any bias toward the identification of regions displaying voice-specific activation. The neural region observed in the present study may be located in an area slightly superior to that implicated in prior studies of human vocalizations. Interestingly, the use of linguistic utterances may also impact the location of emotion-sensitive regions. For example, Ethofer et al. (2012) demonstrate that an area of STG identified as selective for human voices is also sensitive to the emotional inflection of pseudo-linguistic stimuli. This region, while overlapping with the emotion-sensitive areas in the present study, extends into more posterior-lateral areas of temporal cortex. In future work, it will be interesting to determine whether verbal and non-verbal emotional sounds augment activity in dissociable areas of temporal cortex.

Regions associated with processing both auditory object identity and location

In addition to showing an effect of object identity, areas of right anterior-lateral STG also showed a significant effect of location, and a significant location × emotion interaction. Activity in this region was modulated by location for positive and negative stimuli, but not neutral stimuli. Furthermore, these regions showed increased activity to positive and negative relative to neutral stimuli, but only for those stimuli presented in the contralateral hemifield. Thus, these areas show some evidence of location-related encoding for emotional, but not neutral stimuli. These findings raise the intriguing possibility that anterior-lateral regions of right auditory cortex may be involved in integrating information about object identity and location for emotionally salient stimuli. It also calls into question strict boundaries between spatial and object identity encoding in human STG. Further work will be required to delineate the underlying neuroanatomy and the stimulus parameters associated with these different processes in humans.

Representation of emotion in a dual pathway model of audition

Considerable evidence exists supporting the suggestion that the visual system is comprised of separable ventral and dorsal pathways responsible for processing objects for identity and action respectively (Milner and Goodale, 1998). It has also recently been proposed that auditory cortices can be separated into similar ‘what’ and ‘where’ pathways (Alain et al., 2001; Lomber and Malhotra, 2008; Rauschecker, 2012). In the present study, a functional dissociation between anterior-lateral and posterior-medial divisions of auditory cortices was observed that resembled the boundaries described in some dual pathway models of auditory processing (Bushara et al., 1999; Lomber

[Fig. 4 graphics. (A) ROI: identity-sensitive regions of auditory cortex (Task 3, main effect of identity; z = 10). (B) Task 3, effect of identity: percent signal change by sound category (animal, human, machine, white noise). (C) ‘What’ ROI applied to Task 2: percent signal change by source location (-90°, -22.5°, 22.5°, 90°) and emotion (positive, negative, neutral). Caption follows.]

Fig. 4. Neural regions significantly modulated by sound identity (Task 3). (A) Activity in bilateral superior temporal gyrus was significantly modulated by sound identity. (B) These effects of identity were driven by a significant increase in activation during the presentation of ecologically relevant sounds relative to unrecognizable scrambled sounds and a sig- nificant increase to biological relative to non-biological sounds. The whole-brain analysis was conducted at a two-tailed threshold of p b 0.005, and corrected to a family wise error rate of p b 0.05. (C) A sound identity-sensitive (‘what’) ROI derived from Task 3 was applied to the EPI data from Task 2. A significant main effect of sound location was identified in this region (F(3, 51) = 26.223, p b 0.001). This region displayed a significant main effect of emotion during (F(2, 34) = 8.914, p b 0.005) whereby positive and negative stimuli elic- ited significantly greater activation relative to neutral stimuli (p b 0.005 and p b 0.05 respectively). Additionally, the ‘what’ pathway ROI displayed a significant location X emotion interaction effect (F(6, 102) = 2.450, p b 0.05) wherein there was greater activity for both positive and negative stimuli presented in the contralateral relative to ipsilateral hemifield (p b 0.001 and p b 0.05 respectively). No significant effect of location was observed for neutral stimuli. [* p b 0.01; ** p b 0.005 *** p b 0.001]. and Malhotra, 2008; Warren and Griffiths, 2003). To further explore this et al., 2000) and emotional interference (Ngan Ta et al., 2010). Another possibility, results from Tasks 1 and 3 were used to independently gen- possibility is that, unlike in the , both object perception erate functionally defined ROIs corresponding to these pathways. This and localization in the auditory domain may be sensitive to processing analysis revealed that whereas sound location modulates neural activa- load. 
In the current study, emotion may have augmented the represen- tion within both the ‘what’ and ‘where’ auditory processing pathways, tation of object features at the expense of spatial-cues, thereby slowing emotion modulates activity in the ‘what’ but not the ‘where’ processing localization performance. Consistent with this interpretation, we ob- pathway. Importantly, the location X emotion interaction observed in served evidence that both spatial and emotional characteristics were in- the anterior-lateral ‘what’ pathway (but not the posterior-medial tegrated in anterior-lateral areas of auditory cortex, making this a ‘where’ pathway) showed that spatial processing in the ‘what’ pathway potential site of competition between stimulus features. This integra- was present specifically for emotional stimuli. tion of object and spatial features within the putative ‘what’ auditory area raises questions about the extent to which the ‘what’ and ‘where’ Effects of emotion on localization behavior streams of the are functionally segregated. It is important to note that the precise relationship between emo- It would seem advantageous for organisms to rapidly and accurately tion and the putative ‘what/where’ pathways of the auditory system localize the source of emotional auditory stimuli in the environment. was only preliminarily addressed in the current study. First, the na- Contrary to expectations, participants in the present study were slower ture and anatomical mapping of the putative dual processing stream to localize positive and negative sounds relative to neutral ones; so why in the auditory system remains unclear, as is the degree to which its might emotional stimuli be associated with slower localization perfor- function resembles that of the visual system. In addition, the dissocia- mance in the present study? 
One possibility is that important differ- ble effects of emotional versus neutral stimuli within regions of audi- ences exist between the dual pathways of the visual and auditory tory cortex observed were unexpected, and the interpretation is systems. Our finding that participants were significantly slower to local- based in part on ROIs generated from follow-up scans involving a ize emotional relative to neutral sounds conflicts with evidence from smaller sample. Although generated from activity in a sample smaller the visual domain suggesting that dorsal stream guided behaviors are than the initial population, this ROI displayed similar functional prop- less susceptible to processing limitations (Fecteau et al., 2000; Pisella erties to the regions identified in the original task, providing 304 J.H. Kryklywy et al. / NeuroImage 82 (2013) 295–305 cross-validation for our original interpretation. Nevertheless, in fu- Barrett, D.J., Hall, D.A., 2006. Response preferences for “what” and “where” in human – fi non-primary auditory cortex. Neuroimage 32, 968 977. ture work it would be bene cial to precisely map the boundaries of Belin, P., Zatorre, R.J., Lafaille, P., Ahad, P., Pike, B., 2000. Voice-selective areas in human the ‘what/where’ auditory processing streams, and establish the im- auditory cortex. Nature 403, 309–312. pact of emotion on each. Bohil, C.J., Alicea, B., Biocca, F.A., 2011. Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12, 752–762. Borst, G., Kosslyn, S.M., 2008. Visual mental imagery and : structural Conclusions equivalence revealed by scanning processes. Mem. Cognit. 36, 849–862. Bradley, M.M., Lang, P.J., 1999. International Affective Digitized Sounds (IADS): Stimuli, Instruction Manual and Affective Ratings (Tech. Rep. No. B-2). Center for Research Although the emotional content of a sound has been demonstrat- in Psychophysiology, University of Florida Gainesville, Florida. 
ed to influence processing in auditory cortex, the neural and behav- Bushara, K.O., Weeks, R.A., Ishii, K., Catalan, M.J., Tian, B., Rauschecker, J.P., Hallett, M., 1999. Modality-specific frontal and parietal areas for auditory and visual spatial lo- ioral effects of emotion on stimulus localization was previously calization in humans. Nat. Neurosci. 2, 759–766. unknown. Our results indicate that during sound localization, the Clarke, S., Bellmann Thiran, A., Maeder, P., Adriani, M., Vernet, O., Regli, L., Cuisenaire, emotional content of a sound modulates activity in anterior-lateral O., Thiran, J.P., 2002. What and where in human audition: selective deficits follow- – regions of STG (areas corresponding to the putative ‘what’ pathway ing focal hemispheric lesions. Exp. Brain Res. 147, 8 15. Cox, R.W., 1996. AFNI: software for analysis and visualization of functional magnetic of auditory processing). In contrast, and contrary to predictions, emo- neuroimages. Comput. Biomed. Res. 29, 162–173. tion elicited no significant changes in neural activity to posterior- Esterman, M., Tamber-Rosenau, B.J., Chiu, Y.C., Yantis, S., 2010. Avoiding non- medial areas of STG (regions corresponding to the putative ‘where’ independence in fMRI data analysis: leave one subject out. Neuroimage 50, 572–576. pathway). An unexpected interaction between emotion and location Ethofer, T., Bretscher, J., Gschwind, M., Kreifelts, B., Wildgruber, D., Vuilleumier, P., was also observed in anterior-lateral areas of STG suggesting that 2012. Emotional voice areas: anatomic location, functional properties, and – the boundaries between object identity and location decoding may structural connections revealed by combined fMRI/DTI. Cereb. Cortex 22, 191 200. Fecteau, S., Belin, P., Joanette, Y., Armony, J.L., 2007. responses to be blurred; emotional sounds were associated with enhanced spatial nonlinguistic emotional vocalizations. Neuroimage 36, 480–487. 
processing in the ‘what’ pathway despite having no effect on the Fecteau, J.H., Enns, J.T., Kingstone, A., 2000. Competition-induced visual field differ- ‘where’ pathway. It is important to note that at the behavioral level, ences in search. Psychol. Sci. 11, 386–393. fi Fujiki, N., Riederer, K.A., Jousmaki, V., Makela, J.P., Hari, R., 2002. Human cortical repre- a signi cant delay in localization of emotional compared to neutral sentation of virtual auditory space: differences between sound azimuth and eleva- sounds was also observed. The idea was raised that this effect may tion. Eur. J. Neurosci. 16, 2207–2213. have been driven by competition between spatial and emotional fea- Goydke, K.N., Altenmuller, E., Moller, J., Munte, T.F., 2004. Changes in emotional tone and instrumental timbre are reflected by the . Brain Res. tures for representation and control over behavior. Interestingly, Cogn. Brain Res. 21, 351–359. anterior-lateral auditory cortex activity was modulated by both spa- Graves, R., Landis, T., Goodglass, H., 1981. Laterality and sex differences for tial location and emotion, suggesting this area as a potential site for visual recognition of emotional and non-emotional words. Neuropsychologia 19, – such competitive interactions. Finally, this work demonstrates, for 95 102. Grothe, B., Pecka, M., McAlpine, D., 2010. Mechanisms of sound localization in mam- the first time, the feasibility of utilizing individualized virtual auditory mals. Physiol. Rev. 90, 983–1012. environments as a stimulus presentation tool during fMRI. This tech- Haxby, J.V., Grady, C.L., Horwitz, B., Ungerleider, L.G., Mishkin, M., Carson, R.E., nique, which involves presenting auditory stimuli through head- Herscovitch, P., Schapiro, M.B., Rapoport, S.I., 1991. Dissociation of object and spa- tial pathways in human extrastriate cortex. Proc. Natl. Acad. Sci. phones while maintaining spatial integrity and perceptual realism, U. S. A. 88, 1621–1625. 
holds promise for future neuroimaging studies investigating the spa- Kosslyn, S.M., Thompson, W.L., Kim, I.J., Alpert, N.M., 1995. Topographical representa- – tial component of sound. tions of mental images in primary visual cortex. Nature 378, 496 498. Krumbholz, K., Nobis, E.A., Weatheritt, R.J., Fink, G.R., 2009. Executive control of spatial Supplementary data to this article can be found online at http:// attention shifts in the auditory compared to the visual modality. Hum. Brain Mapp. dx.doi.org/10.1016/j.neuroimage.2013.05.051. 30, 1457–1469. Lang, P.J., Davis, M., 2006. Emotion, motivation, and the brain: reflex foundations in an- imal and human research. Prog. Brain Res. 156, 3–29. Acknowledgments Langendijk, E.H., Bronkhorst, A.W., 2000. Fidelity of three-dimensional-sound repro- duction using a virtual auditory display. J. Acoust. Soc. Am. 107, 528–537. Lomber, S.G., Malhotra, S., 2008. Double dissociation of ‘what’ and ‘where’ processing in The authors would like to thank Dr. Marc Joanisse and Dr. Stefan auditory cortex. Nat. Neurosci. 11, 609–616. Everling for helpful discussion on experimental design and analysis, Lomber, S.G., Malhotra, S., Hall, A.J., 2007. Functional specialization in non-primary au- and David Grainger, of the National Centre for Audiology, for techni- ditory cortex of the cat: areal and laminar contributions to sound localization. Hear. Res. 229, 31–45. cal support. This work was funded by a Natural Science and Engineer- Mathiak, K., Menning, H., Hertrich, I., Mathiak, K.A., Zvyagintsev, M., Ackermann, H., ing Research Council Discovery Grant to Dr. Derek Mitchell. 2007. Who is telling what from where? A functional magnetic resonance imaging study. Neuroreport 18, 405–409. Middlebrooks, J.C., Macpherson, E.A., Onsan, Z.A., 2000. Psychophysical customization Conflict of interest of directional transfer functions for virtual sound localization. J. Acoust. Soc. Am. 108, 3088–3091. fl Milner, A.D., Goodale, M.A., 1993. 
Visual pathways to perception and action. Prog. Brain The authors have no con ict of interest regarding the current Res. 95, 317–337. experiment. Milner, A.D., Goodale, M.A., 1998. The Visual Brain in Action. Oxford University Press. Mitchell, D.G., Greening, S.G., 2012. Conscious perception of emotional stimuli: brain mechanisms. Neuroscientist 18, 386–398. References Mitchell, D.G., Luo, Q., Mondillo, K., Vythilingam, M., Finger, E.C., Blair, R.J., 2008. The in- terference of operant task performance by emotional distracters: an antagonistic Adolphs, R., 2008. Fear, faces, and the human amygdala. Curr. Opin. Neurobiol. 18, relationship between the amygdala and frontoparietal cortices. Neuroimage 40, 166–172. 859–868. Ahveninen, J., Jaaskelainen, I.P., Raij, T., Bonmassar, G., Devore, S., Hamalainen, M., Morris, J.S., Friston, K.J., Buchel, C., Frith, C.D., Young, A.W., Calder, A.J., Dolan, R.J., 1998. Levanen, S., Lin, F.H., Sams, M., Shinn-Cunningham, B.G., Witzel, T., Belliveau, A neuromodulatory role for the human amygdala in processing emotional facial J.W., 2006. Task-modulated “what” and “where” pathways in human auditory cor- expressions. Brain 121 (Pt 1), 47–57. tex. Proc. Natl. Acad. Sci. U.S.A. 103, 14608–14613. Ngan Ta, K.L., Liu, G., Brennan, A.A., Enns, J.T., 2010. Spider-phobia influences conscious, Alain, C., Arnott, S.R., Hevenor, S., Graham, S., Grady, C.L., 2001. “What” and “where” in but not unconscious, control of visually guided action. J. Vis. 10, 1069. the human auditory system. Proc. Natl. Acad. Sci. U. S. A. 98, 12301–12306. Pessoa, L., Ungerleider, L.G., 2004. Neuroimaging studies of attention and the process- Altmann, C.F., Henning, M., Doring, M.K., Kaiser, J., 2008. Effects of feature-selective at- ing of emotion-laden stimuli. Prog. Brain Res. 144, 171–182. tention on auditory pattern and location processing. Neuroimage 41, 69–79. 
