Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients

Pascal Barone, Laure Chambaudie, Kuzma Strelnikov, Bernard Fraysse, Mathieu Marx, Pascal Belin, Olivier Deguine

To cite this version: Pascal Barone, Laure Chambaudie, Kuzma Strelnikov, Bernard Fraysse, Mathieu Marx, et al. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients. Cortex, Elsevier, 2016, 83, pp. 259-270. 10.1016/j.cortex.2016.08.005. hal-01469002

HAL Id: hal-01469002
https://hal-amu.archives-ouvertes.fr/hal-01469002
Submitted on 21 Feb 2017

Manuscript Number: CORTEX-D-16-00067R2
Article Type: Research Report
Keywords: deafness; cochlear implantation; face; voice; audiovisual
Corresponding Author: Dr. Pascal Barone
Corresponding Author's Institution: CNRS CERCO UMR 5549
Order of Authors: Pascal Barone; Laure Chambaudie; Kuzma Strelnikov; Bernard Fraysse; Mathieu Marx; Pascal Belin; Olivier Deguine

Abstract: Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations.
Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for the patient's internal supramodal representation.

*Title page

Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients

Pascal Barone (1,2), Laure Chambaudie (1,2), Kuzma Strelnikov (1,2), Bernard Fraysse (3), Mathieu Marx (3), Pascal Belin (4,5), Olivier Deguine (1,2,3)

Shortened title: Face–voice interactions in cochlear-implanted patients

1. Université de Toulouse, CerCo, Université Paul Sabatier, Toulouse, France
2. CNRS, UMR 5549, Toulouse, France
3. Service Oto-Rhino-Laryngologie et Oto-Neurologie, Hôpital Purpan, Toulouse, France
4. Voice Neurocognition Laboratory, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
5. Institut de Neurosciences de la Timone, CNRS UMR 7289 et Aix-Marseille Université, Marseille, France

Corresponding author: Pascal Barone
E-mail: [email protected]
Centre de Recherche Cerveau & Cognition UMR 5549
Pavillon Baudot CHU Purpan
31062 Toulouse CEDEX 9, France
Phone: +33 (0)5 62 17 37 79
Fax: +33 (0)5 62 17 28 09

*Highlights

We tested whether visual influence remains high in proficient CI deaf adults
We presented a voice continuum with a male or female visual face
CI deaf patients were strongly influenced by visual facial information
No such effect is observed in controls, even for degraded auditory CI simulation
Proficient CI deaf patients rely on visual cues during visuo-auditory perception

*Manuscript

1. Introduction

In profoundly deaf individuals, the cochlear implant remains the most efficient solution to recover speech intelligibility and to restore the social interactions that improve patients' quality of life. However, sound processing performed by the implant provides only a crude signal that lacks fine spectral information. As a result, while a CI affords acceptable levels of speech comprehension, spectral degradation impairs the ability of CI patients to process non-linguistic aspects of speech, such as changes in prosody and intonation (Green, Faulkner, Rosen, & Macherey, 2005; Marx et al., 2015; Peng, Chatterjee, & Lu, 2012), and most other voice features. Indeed, CI patients present deficits in discriminating the human voice from environmental sounds (Massida et al., 2011) and, more specifically, in recognising other voice attributes, such as the gender, familiarity, or emotions of the speaker (Fu, Chinchilla, & Galvin, 2004; Fu, Chinchilla, Nogaki, & Galvin, 2005; Kovacic & Balaban, 2009; Massida et al., 2013).

However, in addition to the technical limitations of the implant processor, it should be considered that the brain reorganisation that occurs during deafness could be implicated in the global deficit of CI patients during voice processing, as has been proposed for auditory speech comprehension deficits (Lazard, Innes-Brown, & Barone, 2014).
There is now compelling evidence that the success of CI for speech perception is highly dependent on the age at which the implantation is performed (Kral & O'Donoghue, 2010). This reflects the potential of brain plasticity, which is critical for the recovery of auditory function through the neuro-prosthesis during development (D. S. Lee et al., 2001; H. J. Lee et al., 2007), and even in adults (Strelnikov et al., 2015). Brain imaging studies point to a network of areas along the superior temporal sulcus and gyrus (STS and STG) that are specifically sensitive to human voice stimuli (Belin, Zatorre, Lafaille, Ahad, & Pike, 2000; Kriegstein & Giraud, 2004; Pernet et al., 2015). This set of areas is referred to as the temporal voice areas (TVAs) (Belin, Zatorre, & Ahad, 2002) and can be subdivided into various regions with distinct implications in the processing of human vocal sounds (Pernet et al., 2015). In adult CI patients, the TVAs are poorly activated by voice stimuli (Coez et al., 2008), a result that questions their functional integrity after a prolonged period of auditory deprivation. Further, it has been demonstrated in deaf patients that the STS region, as part of the TVAs, is subject to crossmodal reorganisation during deafness. Firstly, the STS region has been shown to respond to visual sign language in early deaf signers (Sadato et al., 2004); similarly, the auditory TVAs are involved in visual speech processing through lip-reading in postlingual CI deaf patients (Rouger et al., 2012). In cases of less severe hearing loss, there are also some indications of a take-over of the temporal auditory regions by visual functions (Campbell & Sharma, 2014). Lastly, numerous converging studies demonstrate that the level of crossmodal reorganisation of the temporal auditory