www.nature.com/scientificreports

Neurodynamics and connectivity during facial fear perception: The role of threat exposure and signal congruity

Received: 23 June 2017
Accepted: 9 January 2018
Published: xx xx xxxx

Cody A. Cushing1, Hee Yeon Im1,2, Reginald B. Adams Jr.3, Noreen Ward1, Daniel N. Albohn3, Troy G. Steiner3 & Kestutis Kveraga1,2

Fearful faces convey threat cues whose meaning is contextualized by eye gaze: While averted gaze is congruent with facial fear (both signal avoidance), direct gaze (an approach signal) is incongruent with it. We have previously shown using fMRI that the amygdala is engaged more strongly by fear with averted gaze during brief exposures. However, the amygdala also responds more to fear with direct gaze during longer exposures. Here we examined previously unexplored brain oscillatory responses to characterize the neurodynamics and connectivity during brief (~250 ms) and longer (~883 ms) exposures of fearful faces with direct or averted eye gaze. We performed two experiments: one replicating the exposure time by gaze direction interaction in fMRI (N = 23), and another where we confirmed greater early phase locking to averted-gaze fear (congruent threat signal) with MEG (N = 60) in a network of face processing regions, regardless of exposure duration. Phase locking to direct-gaze fear (incongruent threat signal) then increased significantly for brief exposures at ~350 ms, and at ~700 ms for longer exposures. Our results characterize the stages of congruent and incongruent facial threat signal processing and show that stimulus exposure strongly affects the onset and duration of these stages.

When we look at a face, we can glean a wealth of information, such as age, sex, health, affective state, and attentional focus. The latter two signals are typically, but not exclusively, carried by emotional expression and eye gaze direction, respectively.
Depending on the emotional expression and gaze, we can recognize how happy, angry, or fearful a person is, and infer the source or target of that emotion1. In an initial examination of the interaction between eye gaze and facial emotion, Adams and colleagues introduced the "shared signal hypothesis"2–4. This hypothesis predicts that there is facilitation of affective processing for combinations of emotional expression and eye gaze that share a congruent, matching signal for approach-avoidance behavior. In support of this hypothesis, using speeded reaction time tasks and self-reported intensity of emotion perceived, Adams and Kleck3,4 found that direct gaze facilitated processing efficiency and accuracy, and increased the perceived emotional intensity of approach-oriented emotions (e.g., anger and joy). Conversely, averted gaze facilitated perception of avoidance-oriented emotions (e.g., fear and sadness). Several other groups have now also found similar results, including a replication by Sander et al.5 using dynamic threat displays, another using a diffusion model of decision making and reaction time6, and another examining effects on reflexive orienting to threat7. Perhaps the most compelling behavioral replication of this effect was a study by Milders and colleagues8, who found that direct-gaze anger and averted-gaze fear were detected more readily in an attentional blink paradigm compared to averted-gaze anger and direct-gaze fear, suggesting that congruent pairings (i.e., shared signals) of gaze and emotion attract more preconscious attentional awareness. Similar interaction effects have been found at the neural level as well, including in several fMRI studies, our own among them, examining amygdala responses to different gaze directions paired with threat displays (see also9–11). Some have also suggested that gaze influences the ambiguity surrounding the source of threat.
1Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA. 2Department of Radiology, Harvard Medical School, Boston, MA, USA. 3Department of Psychology, The Pennsylvania State University, University Park, PA, USA. Correspondence and requests for materials should be addressed to K.K. (email: [email protected])

SCIENTIFIC REPORTS | (2018) 8:2776 | DOI:10.1038/s41598-018-20509-8

Whalen and colleagues, for instance, hypothesized that amygdala activation is directly proportional to the amount of ambiguity that surrounds the source of a perceived threat12, which suggests that direct-gaze fear is a more ambiguous signal than direct-gaze anger. Anger signals both the source of the threat and where it is directed. In the case of fear, the observer knows there is a threat, but direct eye gaze does not indicate the source of the threat, unless it is the observer. Thus, averted eye gaze is more informative in resolving the source of threat for fear, and direct gaze is more informative for anger. In both of these accounts of threat-related ambiguity (shared signals and source of threat detection), direct-gaze fear is considered a more ambiguous combination of cues than averted-gaze fear. Initial efforts to study the neural underpinnings of the perception of these compound threat cues revealed greater amygdala activation in response to incongruent compound threat cues, specifically fearful faces with a direct gaze and angry faces with averted gaze13,14. Some follow-up studies, however, have found the opposite interaction: fear with averted eye gaze evoked higher amygdala activation9,10. To address this, Adams et al.15 proposed that presentation duration might help explain these differences.
We hypothesized that brief presentations trigger more reflexive processing, which is thought to be preferentially tuned to congruent threat cues (averted-gaze fear), and longer presentations engage more reflective processing of the less salient, incongruent threat cues (direct-gaze fear; see e.g.16–18 for discussions of reflexive vs. reflective processing). Indeed, previous studies finding stronger amygdala activation in response to averted-gaze fear used relatively brief stimulus exposure times (e.g.,9 used 300 ms stimulus durations), whereas those reporting higher amygdala activation in response to direct-gaze fear had longer stimulus exposure times (e.g.,14,19 used 2 s and 1 s stimulus durations, respectively). Adams et al.15 put this hypothesis to the test in three studies using fMRI, with varying presentation parameters during a constant 1.5 s trial. In a direct comparison, this work revealed that amygdala responses were enhanced for fearful faces coupled with averted gaze when rapidly presented (300 ms), and for fearful faces coupled with direct eye gaze when presented for a sustained duration (1 s).

The Current Work

The primary goal of the present study was to elucidate the fine-grained neural dynamics of this previously observed response by replicating and extending this work using magnetoencephalography (MEG) to record neural activity in response to brief (250 ms) and longer (883 ms) presentations of fearful faces with direct and averted gaze. MEG allows us to elucidate not only the temporal evolution of neural activity, but also frequency-specific oscillatory activity in response to the stimulus, including highly temporally resolved interregional connectivity patterns during perception of these compound threat cues from the face.
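The analysis pipeline itself is not reproduced in this excerpt, but "frequency-specific oscillatory activity" is commonly extracted by band-pass filtering a sensor or source time course and taking the Hilbert amplitude envelope. The sketch below illustrates this on synthetic data; the sampling rate, band edges, and function name are illustrative assumptions, not details from this study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(signal, fs, f_lo, f_hi, order=4):
    """Oscillatory amplitude in one band: zero-phase band-pass filter,
    then the magnitude of the analytic (Hilbert) signal."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    narrow = sosfiltfilt(sos, signal)   # zero-phase: no temporal shift
    return np.abs(hilbert(narrow))      # instantaneous amplitude envelope

fs = 1000                               # 1 kHz sampling (hypothetical)
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic trace: a 10 Hz "alpha" burst centered at 0.5 s, plus noise
sig = (np.sin(2 * np.pi * 10 * t) * np.exp(-(t - 0.5) ** 2 / 0.02)
       + 0.2 * rng.standard_normal(t.size))
alpha_env = band_envelope(sig, fs, 8.0, 12.0)
# alpha_env peaks around 0.5 s, where the burst is, and stays low elsewhere
```

The same envelope time course, averaged over trials and regions of interest, is what time-frequency analyses of evoked oscillations typically summarize.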
We utilized source localization to obtain good spatial resolution and to identify the temporally sensitive contributions of key brain regions in the extended face processing network: the Fusiform Face Area (FFA), Periamygdaloid Cortex (PAC), posterior superior temporal sulcus (pSTS), and orbitofrontal cortex (OFC), as well as the earliest cortical visual region, V1. These regions are well known to be involved in either face perception and social communication, or gaze perception, if not both (e.g.,20–24). The STS in general has been shown to be sensitive to gaze25, as has the fusiform gyrus26. In addition, both have been shown to be sensitive to facial expression27. The STS and OFC are also implicated as nodes in the proposed "social brain" consisting of the amygdala, OFC, and STS28. The posterior portion of the STS has also been implicated as specializing in inferring intentionality from social cues29,30. Whether amygdala activity can be source-localized from MEG data is actively debated in the MEG literature. However, there is accumulating evidence that MEG activity can indeed be localized to the subcortical nuclei of the amygdala31–35, although supporting or advancing this claim is not the goal of our manuscript. The periamygdaloid cortex (PAC) is heavily involved in conveying inputs and outputs of the deeper amygdala nuclei, the contralateral PAC, as well as many other cortical regions36,37. While we cannot be certain which of the amygdala nuclei the activity is coming from (a situation shared by all but the highest-resolution fMRI studies), given the reliable activation of the amygdala in all of the previous studies using this paradigm, it is probable that at least some of the activity arises in the subcortical nuclei of the amygdala. To truly understand how a network of brain regions responds to a given task, it is necessary not only to look at the response of each region individually, but also to examine how the regions interact38.
Thus, we sought to characterize the phase locking between regions in the present work as a measure of functional connectivity within our extended face processing network. This approach allows us to build a more complete picture of the neurodynamics at
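The excerpt does not reproduce the phase-locking computation, but the standard metric for this kind of inter-regional analysis is the phase-locking value (PLV) of Lachaux and colleagues: the consistency, across trials, of the instantaneous phase difference between two narrow-band signals. A minimal sketch on synthetic data follows; the function name and parameters are illustrative, not the authors' code:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value across trials (after Lachaux et al., 1999).

    x, y : (n_trials, n_times) narrow-band time courses from two regions.
    Returns (n_times,) values in [0, 1]; 1 means the same phase lag on
    every trial, 0 means phase differences are uniformly scattered."""
    dphi = np.angle(hilbert(x, axis=-1)) - np.angle(hilbert(y, axis=-1))
    # Resultant length of unit phase-difference vectors across trials
    return np.abs(np.mean(np.exp(1j * dphi), axis=0))

# Demo: 40 synthetic 10 Hz trials sampled at 250 Hz
rng = np.random.default_rng(0)
fs, f = 250, 10
t = np.arange(0, 1, 1 / fs)
phases = rng.uniform(0, 2 * np.pi, size=(40, 1))
region_a = np.sin(2 * np.pi * f * t + phases)
region_b = np.sin(2 * np.pi * f * t + phases + 0.8)   # fixed lag: locked
region_c = np.sin(2 * np.pi * f * t
                  + rng.uniform(0, 2 * np.pi, (40, 1)))  # unrelated phase
plv_locked = phase_locking_value(region_a, region_b)
plv_random = phase_locking_value(region_a, region_c)
```

With a constant lag, `plv_locked` sits near 1 away from the filter edges, while `plv_random` hovers near the chance level expected for 40 independent trials, which is why the PLV is computed per time point and compared against a trial-shuffled baseline in practice.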