Neural processing of facial expressions as modulators of communicative intention

Short title: Facial expression in communicative intention

Rasgado-Toledo Jalil1, Valles-Capetillo Elizabeth1, Giudicessi Averi2, Giordano Magda1*

1 Department of Behavioral and Cognitive Neurobiology, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Blvd. Juriquilla 3001, Juriquilla, Querétaro, 76230 México
2 Department of Psychiatry, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA

* Corresponding author: E-mail address: [email protected] (Giordano, Magda)

Number of pages: 31
Number of figures: 8
Number of tables: 1
Number of words: Abstract: 250; Introduction: 621; Discussion: 1499

Conflict of Interest
The authors declare no competing financial interests.

Acknowledgments
We thank Drs. Erick Pasaye, Leopoldo González-Santos, Juan Ortiz, and the personnel at the National Laboratory for Magnetic Resonance Imaging (LANIREM UNAM) for their valuable assistance. This work received support from Luis Aguilar, Alejandro De León, Carlos Flores, and Jair García of the Laboratorio Nacional de Visualización Científica Avanzada (LAVIS). Additional thanks to Azalea Reyes-Aguilar for her valuable feedback throughout the study.

Significance Statement
In the present study, we tested whether prior observation of a facial expression associated with an emotion can modify the interpretation of the speech act that follows it, thus providing empirical evidence that facial expression impacts communication. Our results can help clarify how we communicate and which aspects of communication are necessary to comprehend pragmatic forms of language, such as speech acts. We applied both univariate and multivariate analyses to compare the brain structures involved in speech act comprehension, and found that the pattern of hemodynamic response in the frontal gyrus and medial prefrontal cortex can be used to decode classification decisions. We highlight the importance of facial expression as a relevant contextual cue for pragmatic language.

Abstract
Speakers use a variety of contextual information, such as emotional facial expressions, for the successful transmission of their message. Listeners must decipher the meaning by understanding the intention behind it (Recanati, 1986). A traditional approach to the study of communicative intention has been through speech acts (Escandell, 2006). The objective of the present study was to further our understanding of the influence of facial expression on the recognition of communicative intention. The study sought to: verify the reliability of facial expression recognition; determine whether there is an association between a facial expression and a category of speech acts; test whether words carry an intentional load independent of the facial expression presented; and test whether facial expressions can modify an utterance's communicative intention, as well as identify the associated neural correlates using univariate and multivariate approaches. We found that prior observation of facial expressions associated with emotions can modify the interpretation of an assertive utterance that follows the facial expression. The hemodynamic brain response to an assertive utterance was modulated by the preceding facial expression, and the emotion conveyed by the facial expression could be decoded from fluctuations in the brain's hemodynamic response during the presentation of the assertive utterance.
Neuroimaging data showed activation of regions involved in language, intentionality, and face recognition during reading of the utterances. Our results indicate that facial expression is a relevant contextual cue in decoding the intention of an utterance, and that during decoding it engages different brain regions in agreement with the emotion expressed.

Key words: communicative intention, speech acts, facial expression, pragmatic language, functional magnetic resonance imaging.

Introduction
Linguistic communication has been described as a process in which distinct sub-processes converge, and mental representations of phonological stimuli lead to the recognition of meanings, which depend on our understanding of the words themselves and of the speaker's intention. Assumptions and expectations are provided partially by the contextual frame and are used to interpret the utterance at the lexical, grammatical, and pragmatic levels (Friederici, 1999; Yule, 2010). For this reason, it is important to consider the circumstances in which words are produced, that is, who uses them, when, and with what intention (Reyes, 1995). Intention recognition, defined as identifying the motive for an action performed to produce an outcome, is one of the main linguistic processes necessary to decode language and its pragmatic forms, such as speech acts (Holtgraves, 1986; Catmur, 2015). These pragmatic forms require that the speaker use contextual cues, including shared knowledge, gestures, tone, and facial expressions. The receiver must decode the message by retrieving the literal (semantic) meaning and by using inferential processes that take into account the beliefs, attitudes, emotions, and mental state of the speaker, a capacity also known as theory of mind (Van Dijk, 1977; Frith and Frith, 2006; Reyes-Aguilar et al., 2018).

Among contextual cues, facial expression has been described as one of the most important stimuli for detecting communicative intention, owing to the ease with which emotion can be extracted from it, a skill that begins to develop early in life through socialization (Calder and Young, 2005; Cleveland et al., 2006; Hehman et al., 2015). Previously, Domaneschi et al. (2017) reported an association between action units (AUs) on the upper face and utterances defined as speech acts. In particular, there appear to be specific combinations of upper-face AUs that can provide non-verbal cues and contribute to the interpretation of speech acts (SAs) in certain instances. Briefly, SAs may be defined as communicative acts through which speakers achieve something in a specific context, such as promising, thanking, or ordering. They can be divided into three acts, of which the illocutionary act conveys the intention of the speaker (Oishi, 2006; Lee, 2016; Domaneschi et al., 2017; Licea-Haquet et al., 2019).

Neurocognitive mechanisms underlying communicative intention in language engage several brain regions. Prior imaging studies have described the participation of the core language network, the theory of mind network, the mirror neuron system, and a common neural network related to communicative intent (Reyes-Aguilar et al., 2018). The intentional network, which includes the precuneus (Pcu), superior temporal sulcus (STS), temporo-parietal junction (TPJ), and inferior frontal gyrus (IFG), supports the decoding of dissimilar meanings even from the same utterance within a different context (Enrici et al., 2011; Bosco et al., 2017; Tettamanti et al., 2017; Schütz et al., 2020).

The purposes of this study were, first, to find out whether a sensory input, such as a facial expression, can modify the interpretation of an assertive utterance, and thus its communicative intention; and second, whether the hemodynamic brain response (BOLD) to an assertive utterance was similarly modulated by previous exposure to a particular facial expression. To achieve this, we used a series of classification tasks to test whether there was a consistent relation between categories of SAs and facial expressions. First, we verified that participants could reliably and consistently recognize facial expressions, and we tested whether recognition was affected by the sex or ethnicity of the facial stimuli. Second, we evaluated whether participants showed an a priori association between a facial expression and a category of SAs. We also tested whether words per se carried an intentional load independent of the facial expression presented. Third, we evaluated whether facial expressions could modify an utterance's communicative intention. Finally, we evaluated the hemodynamic brain response to assertive utterances after previous exposure to different facial expressions.
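To make the multivariate (decoding) logic referred to above concrete, the following is a minimal sketch in Python with scikit-learn of how a facial-expression condition could be classified from trial-wise BOLD patterns. It runs on simulated placeholder data; the dimensions, labels, classifier, and cross-validation scheme are illustrative assumptions, not the pipeline actually used in this study.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

# X: one row per trial, one column per voxel in a region of interest
# (e.g., trial-wise beta estimates from a GLM); y: the facial-expression
# condition preceding each utterance (0 = anger, 1 = joy, 2 = neutral).
# Both are simulated placeholders here.
n_trials, n_voxels = 90, 500
X = rng.standard_normal((n_trials, n_voxels))
y = np.repeat([0, 1, 2], n_trials // 3)

# Linear classifier with within-pipeline standardization; stratified
# cross-validation preserves the class balance in every fold.
decoder = make_pipeline(StandardScaler(), LinearSVC(dual=False))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(decoder, X, y, cv=cv)

# Decoding succeeds if mean accuracy reliably exceeds chance (1/3 here).
print(f"mean accuracy: {scores.mean():.3f} (chance = {1/3:.3f})")

In a real analysis, X would typically hold beta estimates extracted from regions such as the IFG or medial prefrontal cortex, and the observed accuracy would be tested against a permutation-based null distribution rather than the nominal chance level alone.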
Experimental procedures

Experiment 1 (Facial expression recognition)
To evaluate the recognition of facial expressions in an adult Mexican population, we used three databases (see below for details) and evaluated the stimuli using an online survey (Google Forms; SFigure 1, supplementary material). We selected the database with the highest scores for the subsequent experiments.

Participants. The survey was answered by 52 Mexican participants whose native language was Spanish (mean age 23.67 ± 3.28; range: 18-63 years; 21 women).

Selection of facial expression databases. After a wide online search, we chose three databases that included facial expressions representing each of the six basic emotions (according to Ekman, 1970) and a neutral face, portrayed by actors of Caucasian and/or Latin American ethnicity, both male and female. These were the Montreal Set of Facial Displays of Emotion or MSFDE (Beaupré and Hess, 2005), the Warsaw Set of Emotional Facial Expression Pictures or WSEFEP (Olszanowski et al., 2015), and the Compound Facial Expressions of Emotion or CFEE (Du et al., 2014).

Procedure. From each database we selected twelve actors, male and female, of Latin American ethnicity. Participants were asked to select which facial expression (joy, anger, sadness, disgust, surprise, fear, or neutral) corresponded to one of three emotions. The question asked was: Which of these images expresses anger/joy/neutrality? Participants were also asked if they considered that the actor belonged