Identification of Emotion in a Dichotic Listening Task: Event-Related Brain
Total Pages: 16
File Type: PDF; Size: 1020 KB
Recommended publications
-
Linguistic Processing of Task-Irrelevant Speech at a Cocktail Party. Paz Har-shai Yahav*, Elana Zion Golumbic*
RESEARCH ARTICLE: Linguistic processing of task-irrelevant speech at a cocktail party. Paz Har-shai Yahav*, Elana Zion Golumbic*. The Gonda Center for Multidisciplinary Brain Research, Bar Ilan University, Ramat Gan, Israel. *For correspondence: [email protected] (PH-Y); [email protected] (EZG).

Abstract: Paying attention to one speaker in a noisy place can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or whether it extends to competition for linguistic processing as well. Neural activity was recorded using magnetoencephalography as human participants attended to natural speech presented to one ear while task-irrelevant stimuli were presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables or of syllables structured to form coherent sentences, using hierarchical frequency-tagging. We find that the phrasal structure of structured task-irrelevant stimuli was represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Additionally, neural tracking of to-be-attended speech in left inferior frontal regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between the two streams for linguistic processing.

Introduction: The seminal speech-shadowing experiments conducted in the 50s and 60s set the stage for studying one of the primary cognitive challenges encountered in daily life: how do our perceptual and linguistic systems deal effectively with competing speech inputs? (Cherry, 1953; Broadbent, 1958; Treisman, 1960).
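To make the frequency-tagging logic concrete, here is a minimal sketch, assuming illustrative presentation rates (syllables at 4 Hz, phrases at 2 Hz, sentences at 1 Hz) and a synthetic neural signal; it is not the authors' analysis pipeline:

```python
import numpy as np

# Minimal frequency-tagging sketch (assumed, illustrative parameters).
# When linguistic units are presented at fixed rates, each level of
# structure shows up as a spectral peak in the neural response.
fs = 250.0                      # assumed MEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of recording

# Synthetic response: strong syllable-rate tracking, weaker phrase-rate
# tracking, plus noise.
signal = (np.sin(2 * np.pi * 4.0 * t)          # syllable rate (4 Hz)
          + 0.3 * np.sin(2 * np.pi * 2.0 * t)  # phrase rate (2 Hz)
          + np.random.randn(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# A peak at a tagged frequency indexes neural tracking of that level.
for rate, level in [(4.0, "syllable"), (2.0, "phrase"), (1.0, "sentence")]:
    idx = np.argmin(np.abs(freqs - rate))
    print(f"{level:8s} ({rate} Hz): power {power[idx]:.1f}")
```

In this scheme, finding reliable power (or inter-trial phase coherence) at the phrase rate for the task-irrelevant stream is what indicates that its linguistic structure was processed.
-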
Cross-Linguistic Interpretation of Emotional Prosody. Åsa Abelin, Jens Allwood, Department of Linguistics, Göteborg University
Abstract: This study has three purposes: the first is to examine whether there is any stability in the way we interpret different emotions and attitudes from prosodic patterns, the second is to see whether this interpretation depends on the listener's cultural and linguistic background, and the third is to find out whether there is any recurring relation between acoustic and semantic properties of the stimuli. Recordings of a Swedish speaker uttering a phrase while expressing different emotions were interpreted by listeners with different L1s (Swedish, English, Finnish, and Spanish), who were asked to judge the emotional content of the expressions. The results show that some emotions, e.g. "anger", "fear", "sadness", and "surprise", are interpreted in accordance with the intended emotion to a greater degree than others, while the remaining emotions are interpreted as intended to a lesser degree.

[...] for which emotions are the primary, see e.g. Woodworth (1938), Izard (1971), Roseman (1979). In the present study we have chosen some of the most commonly discussed emotions and attitudes, "joy", "surprise", "sadness", "fear", "shyness", "anger", "dominance", and "disgust" (in English translation), in order to cover several types.

2. Method. 2.1. Speech material and recording: In order to isolate the contribution of prosody to the interpretation of emotions and attitudes, we chose one carrier phrase in which the different emotions were to be expressed. The carrier phrase was salt sill, potatismos och pannkakor (salted herring, mashed potatoes and pancakes). The thought was that many different emotions can be held towards food, and that the sentence was therefore quite neutral with respect to [...]
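As a sketch of how such judgments can be scored, the snippet below builds a confusion matrix of intended versus perceived emotion for one listener group; the counts are invented, and only the emotion labels come from the study:

```python
import numpy as np

emotions = ["joy", "surprise", "sadness", "fear",
            "shyness", "anger", "dominance", "disgust"]

# responses[i, j]: how often intended emotion i was judged as emotion j
# (invented counts for one hypothetical listener group).
rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(8, 8)) + 10 * np.eye(8, dtype=int)

# Per-emotion accuracy: proportion of judgments matching the intention.
accuracy = responses.diagonal() / responses.sum(axis=1)
for emotion, acc in zip(emotions, accuracy):
    print(f"{emotion:9s}: {acc:.0%} judged as intended")
```

Comparing such matrices across L1 groups (Swedish, English, Finnish, Spanish) is one way to quantify whether interpretation depends on linguistic background.
-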
Decoding Emotions from Nonverbal Vocalizations: How Much Voice Signal Is Enough?
Motivation and Emotion, https://doi.org/10.1007/s11031-019-09783-9. ORIGINAL PAPER: Decoding emotions from nonverbal vocalizations: How much voice signal is enough? Paula Castiajo, Ana P. Pinheiro. © Springer Science+Business Media, LLC, part of Springer Nature 2019.

Abstract: How much acoustic signal is enough for an accurate recognition of nonverbal emotional vocalizations? Using a gating paradigm (7 gates from 100 to 700 ms), the current study probed the effect of stimulus duration on recognition accuracy of emotional vocalizations expressing anger, disgust, fear, amusement, sadness, and neutral states. Participants (n = 52) judged the emotional meaning of vocalizations presented at each gate. Increased recognition accuracy was observed from gate 2 to gate 3 for all types of vocalizations. Neutral vocalizations were identified with the shortest amount of acoustic information relative to all other types of vocalizations. A shorter acoustic signal was required to decode amusement compared to fear, anger, and sadness, whereas anger and fear required equivalent amounts of acoustic information to be accurately recognized. These findings confirm that the time course of successful recognition of discrete vocal emotions varies by emotion type. Compared to prior studies, they additionally indicate that the type of auditory signal (speech prosody vs. nonverbal vocalizations) determines how quickly listeners recognize emotions from a speaker's voice.

Keywords: Nonverbal vocalizations · Emotion · Duration · Gate · Recognition

Introduction: The voice is likely the most important sound category in a social environment. It carries not only verbal information, but also socially relevant cues about the speaker, such as his/her [...] (F0, intensity) serving the expression of emotional meaning. Nonetheless, compared to speech prosody, nonverbal vocalizations are considered to represent more primitive expressions of emotions and an auditory analogue of facial emotions (Belin et al. [...])
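The gating manipulation itself is simple to express in code. A minimal sketch, using a synthetic waveform in place of the study's recorded vocalizations:

```python
import numpy as np

fs = 44100                                  # assumed audio sampling rate (Hz)
vocalization = np.random.randn(fs)          # 1 s placeholder waveform

# Gate k keeps only the first 100 * k ms from stimulus onset (k = 1..7).
gates = []
for k in range(1, 8):
    n_samples = int(fs * (100 * k) / 1000)
    gates.append(vocalization[:n_samples])

for k, gate in enumerate(gates, start=1):
    print(f"gate {k}: {1000 * len(gate) / fs:.0f} ms")
```

Recognition accuracy is then compared across gates to estimate how much signal each emotion needs.
-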
Dichotic Training in Children with Auditory Processing Disorder
International Journal of Pediatric Otorhinolaryngology 110 (2018) 114-117. Dichotic training in children with auditory processing disorder. Maryam Delphi (Musculoskeletal Rehabilitation Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran), Farzaneh Zamiri Abdollahi (Audiology Department, Tehran University of Medical Sciences, Tehran, Iran). Keywords: Dichotic listening, Auditory training, Auditory processing.

Abstract. Objectives: Several test batteries have been suggested for the diagnosis of auditory processing disorder (APD). Among the important tests are dichotic listening tests, in which a significant ear asymmetry (usually a right ear advantage) can be indicative of APD. Two main trainings have been suggested for dichotic listening disorders: Differential Interaural Intensity Difference (DIID) training and Dichotic Offset Training (DOT). The aim of the present study was to compare the efficacy of these two trainings in resolving dichotic listening disorders.

Methods: 12 children with APD, aged 8 to 9 years, were included (mean age 8.41 ± 0.51 years). All had an abnormal right ear advantage based on established age-appropriate norms for the Farsi dichotic digit test. Subjects were randomly divided into two groups of 6: group 1 received DIID training (mean age 8.33 ± 0.51 years) and group 2 received DOT training (mean age 8.50 ± 0.54 years).

Results: Both trainings were effective in improving dichotic listening. There was a significant difference between the two trainings with respect to the length of treatment (P-value ≤ 0.001): DOT needed more training sessions (21.16 ± 0.75) than DIID (12.83 ± 0.98) to achieve the same amount of performance improvement.
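A sketch of the implied group comparison, using an independent-samples t-test; the per-child session counts below are invented to approximate the reported group means and are not the study's data:

```python
from scipy import stats

# Hypothetical session counts per child (n = 6 per group), chosen only to
# roughly match the reported means.
diid_sessions = [12, 13, 14, 12, 13, 13]   # mean ~12.8
dot_sessions = [21, 21, 22, 20, 21, 22]    # mean ~21.2

t, p = stats.ttest_ind(diid_sessions, dot_sessions)
print(f"t = {t:.2f}, p = {p:.4f}")
```
-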
Neural Encoding of Attended Continuous Speech Under Different Types of Interference
Neural Encoding of Attended Continuous Speech under Different Types of Interference. Andrea Olguin, Tristan A. Bekinschtein, and Mirjana Bozic.

Abstract: We examined how attention modulates the neural encoding of continuous speech under different types of interference. In an EEG experiment, participants attended to a narrative in English while ignoring a competing stream in the other ear. Four different types of interference were presented to the unattended ear: a different English narrative, a narrative in a language unknown to the listener (Spanish), a well-matched nonlinguistic acoustic interference (Musical Rain), and no interference. Neural encoding of attended and unattended signals was assessed by calculating cross-correlations between their respective envelopes and the EEG recordings. Findings revealed more robust neural encoding for the attended envelopes compared with the ignored ones. Critically, however, the type of the interfering stream significantly modulated this process: the fully intelligible distractor (English) caused the strongest encoding of both attended and unattended streams and the latest dissociation between them, whereas nonintelligible distractors caused weaker encoding and early dissociation between attended and unattended streams. The results were consistent over the time course of the spoken narrative. These findings suggest that attended and unattended information can be differentiated at different depths of processing analysis, with the locus of selective attention determined by the nature of the competing stream. They provide strong support to flexible accounts of auditory selective attention.

Introduction: Directing attention to a single speaker in a multitalker environment is an everyday occurrence that we manage with relative ease. [...] "late selection" approaches. The early selection theory (Broadbent, 1958) argued that, because of our limited processing capacity, attended and unattended information [...]
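The envelope cross-correlation measure described above can be sketched as follows; the sampling rate, lag, and signals are all synthetic stand-ins, not the authors' data or code:

```python
import numpy as np
from scipy.signal import correlate, hilbert

fs = 128                                   # assumed downsampled EEG rate (Hz)
t = np.arange(0, 10, 1 / fs)               # a 10 s segment

speech = np.random.randn(t.size)           # placeholder speech waveform
envelope = np.abs(hilbert(speech))         # amplitude envelope

# Synthetic EEG that tracks the envelope at a ~100 ms lag, plus noise.
true_lag = int(0.1 * fs)
eeg = np.roll(envelope, true_lag) + np.random.randn(t.size)

# Cross-correlate at lags up to +/-500 ms; the peak indexes how strongly,
# and at what latency, the EEG tracks the (attended) envelope.
xc = correlate(eeg - eeg.mean(), envelope - envelope.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
window = (lags >= -int(0.5 * fs)) & (lags <= int(0.5 * fs))
peak = lags[window][np.argmax(xc[window])]
print(f"peak cross-correlation at {1000 * peak / fs:.0f} ms lag")
```

Repeating this for attended and unattended envelopes, per interference type, yields the encoding contrasts reported above.
-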
Individual Differences in Working Memory Capacity and Divided Attention in Dichotic Listening
Psychonomic Bulletin & Review 2007, 14 (4), 699-703. Individual differences in working memory capacity and divided attention in dichotic listening. Gregory J. H. Colflesh (University of Illinois, Chicago, Illinois) and Andrew R. A. Conway (Princeton University, Princeton, New Jersey).

The controlled attention theory of working memory suggests that individuals with greater working memory capacity (WMC) are better able to control or focus their attention than individuals with lesser WMC. This relationship has been observed in a number of selective attention paradigms, including a dichotic listening task (Conway, Cowan, & Bunting, 2001) in which participants were required to shadow words presented to one ear and ignore words presented to the other ear. Conway et al. found that when the participant's name was presented to the ignored ear, 65% of participants with low WMC reported hearing their name, compared to only 20% of participants with high WMC, suggesting greater selective attention on the part of high WMC participants. In the present study, individual differences in divided attention were examined in a dichotic listening task in which participants shadowed one message and listened for their own name in the other message. Here we find that 66.7% of high WMC and 34.5% of low WMC participants detected their name. These results suggest that as WMC increases, so does the ability to control the focus of attention, with high WMC participants being able to flexibly "zoom in" or "zoom out" depending on task demands.

Since Baddeley and Hitch (1974) first published their seminal chapter on working memory (WM), many theories regarding the construct have been proposed. [...] tasks). They found no difference between high and low WMC participants in the prosaccade task, but high WMC [...]
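The detection rates above invite a simple two-group comparison. A sketch using a chi-square test on detection counts; the group sizes are invented, and only the percentages (66.7% vs. 34.5%) come from the abstract:

```python
from scipy.stats import chi2_contingency

# Rows: [detected name, did not detect]; counts are hypothetical.
high_wmc = [20, 10]   # 20 of 30 detected (66.7%)
low_wmc = [10, 19]    # 10 of 29 detected (34.5%)

chi2, p, dof, expected = chi2_contingency([high_wmc, low_wmc])
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```
-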
Pell Biolpsychology 2015 0.Pdf
Biological Psychology 111 (2015) 14-25. Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody. M.D. Pell, K. Rothermich, P. Liu, S. Paulmann, S. Sethi, S. Rigoulot. School of Communication Sciences and Disorders, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Montreal, Canada; Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom.

Abstract: This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100), and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450-700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations).

Keywords: Emotional communication · Vocal expression · Speech prosody · ERPs
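Component effects like these are typically quantified as mean amplitudes within time windows. A minimal single-channel sketch; the epochs are synthetic, the N100/P200 bounds are assumed, and only the 450-700 ms LPC window is taken from the abstract:

```python
import numpy as np

fs = 500                                          # assumed EEG rate (Hz)
epoch_t = np.arange(-0.2, 1.0, 1 / fs)            # -200..1000 ms epochs
epochs = np.random.randn(40, epoch_t.size)        # 40 synthetic trials

erp = epochs.mean(axis=0)                         # trial average -> ERP

windows = {"N100": (0.08, 0.12),                  # assumed bounds
           "P200": (0.15, 0.25),                  # assumed bounds
           "LPC": (0.45, 0.70)}                   # from the abstract
for name, (lo, hi) in windows.items():
    mask = (epoch_t >= lo) & (epoch_t < hi)
    print(f"{name}: mean amplitude {erp[mask].mean():+.3f} (a.u.)")
```

Condition differences (vocalizations vs. prosody, per emotion) are then tested on these window means.
-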
Broadbent's Filter Theory; Cherry: The Cocktail Party Problem
Cherry: The cocktail party problem. Cherry (1953) found that we use physical differences between the various auditory messages to select the one of interest. These physical differences include differences in the sex of the speaker, in voice intensity, and in the location of the speaker. When Cherry presented two messages in the same voice to both ears at once (thereby removing these physical differences), the participants found it very hard to separate out the two messages purely on the basis of meaning. Cherry (1953) also carried out studies using a shadowing task, in which one auditory message had to be shadowed (repeated back out aloud) while a second auditory message was presented to the other ear. Very little information seemed to be obtained from the second or non-attended message.

KEY STUDY EVALUATION — Cherry: The research by Colin Cherry is a very good example of how a psychologist, noticing a real-life situation, is able to devise a hypothesis and carry out research in order to explain a phenomenon, in this case the "cocktail party" effect. Cherry tested his ideas in a laboratory using a shadowing technique and found that participants were really only able to give information about the physical qualities of the non-attended message (whether the message was read by a male or a female, or if a tone was used instead of speech). Cherry's research could be criticised for having moved the real-life phenomenon into an artificial laboratory setting. However, this work opened avenues for other researchers, beginning with Broadbent, to elaborate theories about focused auditory attention.
-
Differential Influences of Emotion, Task, and Novelty on Brain Regions Underlying the Processing of Speech Melody
Differential Influences of Emotion, Task, and Novelty on Brain Regions Underlying the Processing of Speech Melody. Thomas Ethofer, Benjamin Kreifelts, Sarah Wiethoff, Jonathan Wolf, Wolfgang Grodd, Patrik Vuilleumier, and Dirk Wildgruber.

Abstract: We investigated the functional characteristics of brain regions implicated in processing of speech melody by presenting words spoken in either neutral or angry prosody during a functional magnetic resonance imaging experiment using a factorial habituation design. Subjects judged either affective prosody or word class for these vocal stimuli, which could be heard for either the first, second, or third time. Voice-sensitive temporal cortices, as well as the amygdala, insula, and mediodorsal thalami, reacted stronger to angry than to neutral prosody. These stimulus-driven effects were not influenced by the task, suggesting that these brain structures are automatically engaged during processing of emotional information in the [...] emotion than word classification, but were also sensitive to anger expressed by the voices, suggesting that some perceptual aspects of prosody are also encoded within these regions subserving explicit processing of vocal emotion. The bilateral OFC showed a selective modulation by emotion and repetition, with particularly pronounced responses to angry prosody during the first presentation only, indicating a critical role of the OFC in detection of vocal information that is both novel and behaviorally relevant. These results converge with previous findings obtained for angry faces and suggest a general involvement of the OFC for recognition of anger irrespective of the sensory modality.
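The factorial habituation design reduces to a small condition grid. A sketch enumerating its cells; the labels are mine, not the authors':

```python
from itertools import product

# 2 tasks x 2 prosodies x 3 presentations = 12 design cells.
tasks = ["judge_prosody", "judge_word_class"]
prosodies = ["neutral", "angry"]
presentations = [1, 2, 3]

conditions = list(product(tasks, prosodies, presentations))
for task, prosody, rep in conditions:
    print(f"{task:17s} {prosody:7s} presentation {rep}")
print(f"{len(conditions)} cells")
```
-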
Valence, Arousal, and Task Effects in Emotional Prosody Processing
ORIGINAL RESEARCH ARTICLE, published 21 June 2013, doi: 10.3389/fpsyg.2013.00345. Valence, arousal, and task effects in emotional prosody processing. Silke Paulmann (Department of Psychology and Centre for Brain Science, University of Essex, Colchester, UK), Martin Bleichner (Department of Neurology and Neurosurgery, Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, Utrecht, Netherlands), and Sonja A. Kotz (Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; School of Psychological Sciences, University of Manchester, Manchester, UK).

Previous research suggests that emotional prosody processing is a highly rapid and complex process. In particular, it has been shown that different basic emotions can be differentiated in an early event-related brain potential (ERP) component, the P200. Often, the P200 is followed by later long-lasting ERPs such as the late positive complex. The current experiment set out to explore to what extent emotionality and arousal can modulate these previously reported ERP components. In addition, we also investigated the influence of task demands (implicit vs. explicit evaluation of stimuli). Participants listened to pseudo-sentences (sentences with no lexical content) spoken in six different emotions or in a neutral tone of voice while they rated either the arousal level of the speaker or their own arousal level. Results confirm that different emotional intonations can first be differentiated in the P200 component, reflecting a first emotional encoding of the stimulus, possibly including a valence tagging process.
-
Evaluating CAPDOTS Training for Children and Adolescents with APD, by Jay R. Lucker, EdD, CCC-A/SLP, FAAA; Cydney Fox, AuD; and Bea Braun, AuD
ONLINE EXCLUSIVE. Evaluating CAPDOTS Training for Children and Adolescents with APD. By Jay R. Lucker, EdD, CCC-A/SLP, FAAA; Cydney Fox, AuD; and Bea Braun, AuD.

Auditory processing disorders (APD) in school-age children can lead to learning problems.1 Audiologists may determine the presence or absence of APD in this population and make specific therapeutic recommendations. However, many therapies for APD have not been peer-reviewed or examined with statistical analysis to determine efficacy. The present study evaluates a therapeutic option called CAPDOTS-Integrated (also referred to as CAPDOTS), an online training program that aims to improve dichotic listening problems. According to Jerger, "Dichotic listening (DL) tests are at the core of the diagnostic evaluation of auditory processing disorder... [Such tests] have been used for decades both as screening tools and as diagnostic tests in APD evaluation."2 Thus, audiologists evaluating school-aged children for APD may find these children to have dichotic listening problems. Those diagnosed with dichotic listening problems are recommended to try a dichotic listening training program, such as CAPDOTS-Integrated.1 Presently, CAPDOTS-Integrated claims to treat binaural auditory integration problems known as dichotic listening difficulties. CAPDOTS is a treatment program requiring access to the internet and good-quality headphones.

[...] subjects.2 The 27-year-old subject, identified as diagnosed with "binaural integration deficit", completed only 80 percent of the CAPDOTS training. The 16-year-old subject is reported as diagnosed with a binaural integration (dichotic listening) auditory processing disorder. Results of mid-latency electrophysiological measures revealed increased response [...]
-
Attentional Strategies in Dichotic Listening
Memory & Cognition 1979, Vol. 7 (6), 511-520. Attentional strategies in dichotic listening. Jack Bookbinder and Eli Osman, Brooklyn College of the City University of New York, Brooklyn, New York 11210.

A person can attend to a message in one ear while seemingly ignoring a simultaneously presented verbal message in the other ear. There is considerable controversy over the extent to which the unattended message is actually processed. This issue was investigated by presenting dichotic messages to which the listeners responded by buttonpressing (not shadowing) to color words occurring in the primary-ear message while attempting to detect a target word in either the primary-ear or the secondary-ear message. Less than 40% of the target words were detected in the secondary-ear message, whereas for the primary-ear message (and also for either ear in a control experiment) target detection was approximately 80%. Furthermore, there was a significant negative correlation between buttonpressing performance and secondary-ear target-detection performance. The results were interpreted as being inconsistent with automatic-processing theories of attention.

The dichotic listening paradigm, beginning with the experiments of Broadbent (1958) and Cherry (1953), has played a central role in the study of attention during the past two decades. In a dichotic presentation, different auditory messages are simultaneously presented to the subject's right and left ears by means of stereo headphones. [...] processing system. Early selection models (Broadbent, 1958, 1971; Treisman, 1960, 1964; Treisman & Geffen, 1967) allow for analyses of simple physical features simultaneously (e.g., spatial location, voice pitch), while it is postulated that the simultaneous higher-order (i.e., semantic context) analysis of all inputs would overload [...]
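The reported tradeoff is a simple between-participant correlation. A sketch with invented per-participant scores, chosen only to illustrate a negative relationship like the one described:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant accuracies (not the study's data).
buttonpress_acc = np.array([0.92, 0.85, 0.78, 0.88, 0.70, 0.95, 0.81, 0.76])
target_detection = np.array([0.25, 0.40, 0.55, 0.30, 0.60, 0.20, 0.45, 0.50])

r, p = pearsonr(buttonpress_acc, target_detection)
print(f"r = {r:.2f}, p = {p:.3f}")   # negative r mirrors the reported result
```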