Hemispheric Association and Dissociation of Voice and Speech Information Processing in Stroke
Anna Jones, Andrew Farrall, Pascal Belin, Cyril Pernet
To cite this version: Anna Jones, Andrew Farrall, Pascal Belin, Cyril Pernet. Hemispheric association and dissociation of voice and speech information processing in stroke. Cortex, Elsevier, 2015. hal-01997402. https://hal-amu.archives-ouvertes.fr/hal-01997402 (submitted on 29 Jan 2019). Distributed under a Creative Commons Attribution 4.0 International License.

Cortex 71 (2015) 232-239

Research report

Hemispheric association and dissociation of voice and speech information processing in stroke

Anna B. Jones a,b, Andrew J. Farrall a,b, Pascal Belin c,d and Cyril R. Pernet a,b,*

a Brain Research Imaging Centre, The University of Edinburgh, UK
b Centre for Clinical Brain Sciences, The University of Edinburgh, UK
c Institute of Neuroscience and Psychology, University of Glasgow, UK
d Institut des Neurosciences de La Timone, UMR 7289, CNRS & Universite Aix-Marseille, France

* Corresponding author. Centre for Clinical Brain Sciences (CCBS), The University of Edinburgh, Chancellor's Building, Room GU426D, 49 Little France Crescent, Edinburgh EH16 4SB, UK. E-mail address: [email protected] (C.R. Pernet).

Article history: Received 9 January 2015; Reviewed 21 April 2015; Revised 22 May 2015; Accepted 6 July 2015; Published online 16 July 2015. Action editor: Peter Garrard.

Keywords: voice gender perception; phoneme perception; hemispheric dissociation; stroke; aphasia

Abstract

As we listen to someone speaking, we extract both linguistic and non-linguistic information. Knowing how these two sets of information are processed in the brain is fundamental for the general understanding of social communication, speech recognition and the therapy of language impairments. We investigated the pattern of performances in phoneme versus gender categorization in left and right hemisphere stroke patients, and found an anatomo-functional dissociation in the right frontal cortex, establishing a new syndrome in voice discrimination abilities. In addition, phoneme and gender performances were more often associated than dissociated in left hemisphere patients, suggesting common neural underpinnings.

© 2015 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.cortex.2015.07.004

1. Introduction

Speech perception is often seen as special (Liberman & Mattingly, 1989) because localized brain injury can elicit specific language impairments such as aphasia, and because healthy individuals are extremely efficient at categorizing phonemes and syllables despite large variations in the stimulus spectral patterns (Liberman, Delattre, & Cooper, 1952). To achieve high performance levels, it has been hypothesized that voice information (talker-specific information) is extracted along with the speech signal, and then stripped away to access (invariant) phonemic content: a process known as 'speaker normalization'. This hypothesis is, however, challenged because general auditory learning mechanisms are capable of explaining category formation in the absence of invariant acoustic information. Birds can learn speech consonant categories with no obvious acoustic invariant cue (Kluender, Diehl, & Killeen, 1987) and human listeners can readily learn non-speech categories that are similarly structured (Wade & Holt, 2005). In addition, several studies showed that talker variability influences speech perception. For instance, the literature describes increased memory for words spoken by familiar voices compared to non-familiar voices (Nygaard & Pisoni, 1998; Nygaard, Sommers, & Pisoni, 1994; Palmeri, Goldinger, & Pisoni, 1993), and similarly enhanced discrimination of, and memory for, (non-familiar) speakers of our own language compared to speakers of another language (Language Familiarity Effect; Perrachione & Wong, 2007), even in the absence of intelligibility (Fleming, Giordano, Caldara, & Belin, 2014). Most of these studies do not, however, specifically address the issue of phoneme perception, and thus acoustical regularities coming from multiple levels are at play.

Like speech perception, voice perception is often considered special (Belin, 2006; Belin, Fecteau, & Bedard, 2004; Scott, 2008). Humans easily recognize different voices, and this ability is of considerable social importance. Voice-selective areas have been demonstrated in the human brain (Belin, Zatorre, & Ahad, 2002; Belin, Zatorre, Lafaille, Ahad, & Pike, 2000; Pernet et al., 2015), localized bilaterally along the upper bank (middle and anterior) of the Superior Temporal Sulcus (STS) (Alho et al., 2006; Belin et al., 2002), and also in the inferior and orbitofrontal cortex (Charest, Pernet, Latinus, Crabbe, & Belin, 2012; Fecteau, Armony, Joanette, & Belin, 2005) as well as in the surrounding insular cortex (Johnstone, van Reekum, Oakes, & Davidson, 2006; Rama et al., 2004). This neural selectivity for voice has also been established in other mammals, in particular primates (Johnstone et al., 2006; Rama et al., 2004) and, more recently, dogs (Andics, Gacsi, Farago, Kis, & Miklosi, 2014). Given the presence of con-specific voice neural selectivity in these animals, we can establish that neural voice selectivity is an old evolutionary feature (~100 million years for a common ancestor between humans and dogs, ~25 million years for a common ancestor between macaques and humans), preceding the appearance of speech (~5 to 2 million years ago for proto-language and ~150,000 to 300,000 years ago for speech; Perreault & Mathew, 2012). Following psycholinguistic studies suggesting that phonetic attributes are an intrinsic component of the perception and identification of individuals, i.e., the recognition of voices (Remez, Fellowes, & Rubin, 1997), it is possible that some brain regions dedicated to speech co-opted neurons already involved in con-specific voice processing.

In this study, we investigated how phoneme and talker information processing relate to each other, by comparing the performances of right fronto-temporal (non-aphasic), left fronto-temporal aphasic and left fronto-temporal non-aphasic stroke patients. Each participant categorized sounds from pitch-equalized morphed continua by phoneme or by gender (male/female). We hypothesized that left hemispheric aphasic patients would not show a dissociation between phoneme and gender categorization, while non-aphasic patients could be impaired for voice but not phoneme.

2. Materials and methods

The experiment used (program and stimuli) is freely available from http://dx.doi.org/10.6084/m9.figshare.1287284. It runs under Matlab with the Psychophysics Toolbox (Brainard, 1997; Kleiner, Brainard, & Pelli, 2007). The behavioral data and the scripts used to analyze them are available from http://dx.doi.org/10.6084/m9.figshare.1287262. The imaging radiological analysis is also available with the behavioral data. CT and MRI scans could not be shared as they belong to the UK National Health Service (NHS) and not to the research team. The study was approved by the NHS Lothian South East Scotland Research Ethics Committee 01 (REC reference number: 11/SS/0055) and by NHS Lothian Research and Development (R&D project number: 2011/W/NEU/09).

2.1. Participants

Twenty-five stroke patients (14 males, 11 females) with a median age of 69 years (min 39, max 85) were recruited into this study. At the time of testing, all patients were at the chronic stage (median time between stroke and testing 90 ± 17 days). Participants were recruited as inpatients and outpatients from Lothian NHS hospitals, via stroke physicians and Speech & Language Therapists, between 10 and 60 weeks post-stroke, with the sole inclusion criterion of a stroke affecting perisylvian tissues (Supplementary Table 1). Exclusion criteria were the presence of a previous stroke and/or English not being the participant's first language.

All patients were tested for mood (Visual Analogue Self Esteem Scale, VASES; Brumfitt & Sheeran, 1999; and Depression Intensity Scale Circles, DISCS; Turner-Stokes, Kalmus, Hirani, & Clegg, 2005) and for language abilities (Western Aphasia Battery, WAB; Shewan & Kertesz, 1980). No patient in the right hemisphere lesion group had language deficits (N = 9, 5 males and 4 females, WAB median score 98.8), whereas 10 out of 16 patients in the left hemisphere group showed signs of aphasia (N = 10, 5 males and 5 females, WAB median score 49.5 for aphasics vs N = 6, 4 males and 2 females, WAB median score 99.4 for non-aphasics; percentile bootstrap difference 49.8 [15 80], p = 0). Kruskal-Wallis ANOVA showed that the groups did not differ in median age [χ2(2,22) = 4.58, p = .1], in median time delay between stroke and testing [χ2(2,22) = 1.68, p = .43], or in depression scores [χ2(2,22) = 4, p = .13 for VASES and χ2(2,22) = 2.19, p = .33
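The percentile bootstrap used for the group comparisons (e.g., the WAB difference between aphasic and non-aphasic patients, 49.8 [15 80]) resamples each group with replacement, recomputes the difference of medians on each resample, and reads the confidence interval off the empirical quantiles. A minimal sketch of that procedure, in Python rather than the authors' Matlab, with hypothetical scores standing in for the study data:

```python
import random
import statistics

def bootstrap_median_diff(a, b, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap for the difference of two group medians:
    resample each group with replacement, take the median difference,
    and return the observed difference plus the (1 - alpha) CI."""
    rng = random.Random(seed)
    diffs = sorted(
        statistics.median(rng.choices(a, k=len(a)))
        - statistics.median(rng.choices(b, k=len(b)))
        for _ in range(n_boot)
    )
    lo = diffs[int(n_boot * alpha / 2)]          # lower percentile bound
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]  # upper percentile bound
    observed = statistics.median(a) - statistics.median(b)
    return observed, (lo, hi)

# Hypothetical WAB-like scores (illustrative only, not the study data)
non_aphasic = [99.4, 98.8, 99.8, 97.2, 99.0, 100.0]
aphasic = [49.5, 42.0, 55.3, 60.1, 38.7, 51.2, 47.9, 58.4, 45.0, 52.6]
obs, ci = bootstrap_median_diff(non_aphasic, aphasic)
```

Because the CI comes directly from the resampled distribution, no normality assumption is needed, which suits small, skewed clinical samples such as these.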