Electrophysiological Correlates of Voice Learning and Recognition

The Journal of Neuroscience, August 13, 2014 • 34(33):10821–10831 • Behavioral/Cognitive

Romi Zäske,1 Gregor Volberg,2 Gyula Kovács,3 and Stefan Robert Schweinberger1

1Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, 2Department for Psychology, Institute of Psychology, and 3Institute of Psychology, Friedrich Schiller University of Jena, 07743 Jena, Germany

Received Feb. 11, 2014; revised May 27, 2014; accepted June 17, 2014. Author contributions: R.Z. and S.R.S. designed research; R.Z. performed research; R.Z. and G.V. analyzed data; R.Z., G.V., G.K., and S.R.S. wrote the paper. This work was funded by the Programm zur Förderung der Drittmittelfähigkeit (2012) of the University of Jena and by the Deutsche Forschungsgemeinschaft (Grants ZA 745/1-1 and KO 3918/1-2; 2-1). We thank Leonie Fresz, Achim Hötzel, Christoph Klebl, Katrin Lehmann, Carolin Leistner, Constanze Mühl, Finn Pauls, Marie-Christin Perlich, Johannes Pfund, Mathias Riedel, Saskia Rudat, and Meike Wilken for stimulus acquisition and editing, and Bettina Kamchen for help with data acquisition. The authors declare no competing financial interests. Correspondence should be addressed to Romi Zäske, Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany. E-mail: [email protected]. DOI:10.1523/JNEUROSCI.0581-14.2014

Abstract

Listeners can recognize familiar human voices from variable utterances, suggesting the acquisition of speech-invariant voice representations during familiarization. However, the neurocognitive mechanisms mediating the learning and recognition of voices from natural speech are currently unknown. Using electrophysiology, we investigated how representations are formed during intentional learning of initially unfamiliar voices that were later recognized among novel voices. To probe the acquisition of speech-invariant voice representations, we compared a "same sentence" condition, in which speakers repeated the study utterances at test, and a "different sentence" condition. Although recognition performance was higher for same compared with different sentences, substantial voice learning also occurred for different sentences, with recognition performance increasing across consecutive study-test cycles. During study, event-related potentials elicited by subsequently remembered voices showed a larger sustained parietal positivity (~250–1400 ms) compared with subsequently forgotten voices. This difference due to memory was unaffected by the test sentence condition and may thus reflect the acquisition of speech-invariant voice representations. At test, voices correctly classified as "old" elicited a larger late positive component (300–700 ms) at Pz than voices correctly classified as "new." This event-related potential OLD/NEW effect was limited to the same sentence condition and may thus reflect speech-dependent retrieval of voices from episodic memory. Importantly, a speech-independent effect for learned compared with novel voices was found in beta-band oscillations (16–17 Hz) between 290 and 370 ms at central and right temporal sites. Our results are a first step toward elucidating the electrophysiological correlates of voice learning and recognition.

Key words: ERPs; learning; memory; oscillations; speech; voice

Introduction

The ability to recognize voices is crucial for social interactions, particularly when visual information is absent. Whereas the recognition of unfamiliar voices is error-prone (Clifford, 1980), familiar voice recognition is remarkably accurate across variable utterances (Skuk and Schweinberger, 2013). However, the neural mechanisms underlying the acquisition of representations of familiar voices remain poorly understood.
Understanding the transition from unfamiliar to familiar voices during learning is important, because the processing of familiar voices can be selectively impaired and relies on partially distinct cortical areas (Van Lancker and Kreiman, 1987; Kriegstein and Giraud, 2004). Temporal voice areas (Belin et al., 2000) were reported to process acoustical information regardless of voice familiarity for simple vowels (Latinus, Crabbe, and Belin, 2011). In contrast, right inferior frontal areas seemed sensitive to learned voices that were perceptually identical, suggesting their implication in voice identity processing (but see Andics et al., 2010). This notion of hierarchical processing is consistent with traditional person perception models, which provide a conceptual framework for the present research (Belin et al., 2011). Accordingly, after acoustic analysis, voices are compared with long-term voice representations. Crucially, for recognition across utterances, these representations are thought to respond to familiar voices independent of speech content. Voice learning therefore entails acquiring representations that are independent of low-level information, similar to face learning (Yovel and Belin, 2013).

For faces, distinct event-related potentials (ERPs) mark these subprocesses: the occipitotemporal N170 indexes face encoding preceding recognition (Eimer, 2011), and the occipitotemporal N250 reflects the acquisition and activation of representations for individual faces (Kaufmann, Schweinberger, and Burton, 2009; Zimmermann and Eimer, 2013). A subsequent centroparietal positivity (from ~300 ms) may reflect the retrieval of episodic and/or semantic person information (Schweinberger and Burton, 2003; Wiese, 2012); a similar positivity emerges for correctly recognized old versus new nonface items (OLD/NEW effects; Friedman and Johnson, 2000). Finally, ERP "differences due to subsequent memory" (Dms) indicate successful encoding of faces into memory (Sommer, Schweinberger, and Matt, 1991).

Auditory equivalents to these components are less well established. Whereas early auditory potentials (N1–P2) are sensitive to sound repetitions, including voices (Schweinberger, 2001; Schweinberger et al., 2011b), the "frontotemporal positivity to voice" (Charest et al., 2009) discriminates voices from other sounds from ~160 ms. Potential markers of voice recognition include the frontocentral mismatch negativity (~200 ms), which may index the detection of familiar among unfamiliar voices (Beauchemin et al., 2006), and the parietal P3, which is sensitive to voice repetitions (from ~300 ms) despite phonetic changes (Schweinberger et al., 2011b). However, the electrophysiological correlates of voice learning are currently unknown.

We investigated the acquisition of familiar voice representations in a recognition memory paradigm. Despite some interdependence of speech and speaker perception (Perrachione and Wong, 2007), familiar voices can be recognized across utterances (Skuk and Schweinberger, 2013). To disentangle speech-dependent from speech-invariant recognition, we tested voices with either the same or a different sentence than presented at study. Finally, because brain oscillations have been related to memory (Düzel, Penny, and Burgess, 2010), we analyzed performance, ERPs, and time-frequency data in electrophysiological responses to studied and novel voices.
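To make the time-frequency approach concrete, the following is a minimal sketch in Python using MNE-Python. It is not the authors' actual analysis pipeline; the epochs file name, baseline window, and wavelet parameters are assumptions. It merely illustrates how beta-band power in the reported 16–17 Hz / 290–370 ms range could be extracted from preprocessed EEG epochs.

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical file of preprocessed test-phase epochs (assumed to span
# -0.3 to 1.0 s around voice onset); not the authors' actual epoching.
epochs = mne.read_epochs("voice_test_epochs-epo.fif")

# Morlet-wavelet decomposition over a range spanning the beta band.
freqs = np.arange(8.0, 31.0, 1.0)   # 8-30 Hz in 1 Hz steps
n_cycles = freqs / 2.0              # frequency-dependent wavelet length
power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                   return_itc=False, average=True, decim=2)

# Express power as percent change from the prestimulus baseline.
power.apply_baseline(baseline=(-0.3, 0.0), mode="percent")

# Mean power in the band and window reported in the abstract
# (16-17 Hz, 290-370 ms), yielding one value per channel.
beta = power.copy().crop(tmin=0.29, tmax=0.37, fmin=16.0, fmax=17.0)
mean_beta_per_channel = beta.data.mean(axis=(1, 2))
```

Contrasting such per-channel values between learned and novel voices would correspond to the speech-independent beta-band effect described above.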
Figure 1. Top, Trial procedure for one study-test cycle. The example shows an "old" test voice presented with a different sentence than at study. Bottom, The 12 study-test cycles for same and different sentence blocks, respectively.

Materials and Methods

Stimuli. Stimuli were recordings from 48 adult native speakers of German (24 female; age 18–25 years, mean age 22.0 years). All 48 speakers uttered 12 German sentences (6 of which started with the article "der" and 6 with "die"), resulting in 576 different stimuli. All sentences had the same syntactic structure and consisted of seven or eight syllables; for example, "Der Fahrer lenkt den Wagen" ("the driver steers the car") and "Die Kundin kennt den Laden" ("the customer knows the shop"). Utterances were chosen based on neutral valence ratings of their written content, as determined in a pilot questionnaire: 12 raters (9 female; mean age 28 years, range 21–43) who did not participate in the main experiment judged 61 written sentences for their emotional valence as negative (−1), neutral (0), or positive (+1), and the 12 sentences with the most neutral ratings (mean ratings between −0.2 and 0.2) were chosen as stimuli. Moreover, speakers were instructed to intonate the sentences as emotionally neutrally as possible. To standardize intonation and sentence duration and to keep regional accents to a minimum, speakers were encouraged to mimic as closely as possible a prerecorded model speaker (the first author) presented via loudspeakers. Each sentence was recorded 4–5 times in a quiet, semi-anechoic room by means of a Sennheiser MD 421-II microphone with pop protection and a Zoom H4n audio interface (16-bit resolution, 48 kHz sampling rate, stereo). The recordings with the fewest artifacts, the least background noise, and the clearest pronunciation were chosen as stimuli. Using PRAAT software (Boersma and Weenink, 2001), voice recordings were cut to contain one sentence starting exactly at the plosive onset of "Der"/"Die." Recordings were then resampled to 44.1 kHz, converted to mono, and RMS-normalized to 70 dB. Voices were set into eight random …
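The paper performed the downmixing, resampling, and RMS normalization just described in PRAAT; purely for illustration, a minimal Python equivalent is sketched below. The function name is hypothetical, and the dB reference pressure of 2 × 10⁻⁵ (PRAAT's convention for intensity scaling) is an assumption, since the paper does not state the reference explicitly.

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

def preprocess_recording(in_path, out_path, target_sr=44100, target_db=70.0):
    """Downmix to mono, resample to 44.1 kHz, and scale the RMS level
    to target_db. Reference pressure 2e-5 (PRAAT's dB convention) is an
    assumption; the paper does not specify it."""
    audio, sr = sf.read(in_path)                 # e.g., stereo at 48 kHz
    if audio.ndim > 1:
        audio = audio.mean(axis=1)               # stereo -> mono
    audio = resample_poly(audio, target_sr, sr)  # polyphase resampling
    rms = np.sqrt(np.mean(audio ** 2))
    target_rms = 2e-5 * 10 ** (target_db / 20.0) # ~0.063 for 70 dB
    audio *= target_rms / rms
    sf.write(out_path, audio, target_sr)

# Example with hypothetical file paths:
# preprocess_recording("raw/speaker01_sent03.wav",
#                      "stim/speaker01_sent03.wav")
```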
