Localising Foreign Accents in Speech Perception, Storage and Production

Total Pages: 16

File Type: PDF, Size: 1020 KB

Localising foreign accents in speech perception, storage and production

Yuki Asano
Department of Linguistics, Faculty of Humanities
University of Konstanz, Germany

This dissertation is submitted for the degree of Doctor of Philosophy, 2016.

Konstanzer Online-Publikations-System (KOPS)
URL: http://nbn-resolving.de/urn:nbn:de:bsz:352-0-386434

ACKNOWLEDGEMENTS

For their insightful comments, suggestions, questions and ideas for this thesis: Bettina Braun, Ryoko Hayashi, Aditi Lahiri, René Kager.

For making my research project financially possible: Bettina Braun, the German Academic Exchange Service (DAAD), the German National Academic Foundation, the University of Konstanz.

For their general interest, comments and inspiring ideas: Bettina Braun, Miriam Butt, Nicole Dehé, Bjarke Frellesvig, Rolf Grawert, Janet Grijzenhout, Michele Gubian, Carlos Gussenhoven, Ryoko Hayashi, Barış Kabak, René Kager, Aditi Lahiri, Frans Plank, Dominik Sasha, Giuseppina Turco.

For correcting my English: Filippo Cervelli.

For their technical support: Michele Gubian, Joachim Kleinmann, Dominik Sasha, and the research assistants.

For taking part in my experiments: the participants.

For trusting in my research potential: Bettina Braun, Masahiko & Kyoko Asano.

For their mental support: Masahiko & Kyoko Asano, Bettina Braun, Emilia & Filippo Cervelli, Hannelore & Rolf Grawert.

Thank you very much!

SUMMARY

This dissertation searches for possible sources of foreign accent in prosody, in particular in F0 and segmental length. The experiments reported in Chapters 2 to 4 tested whether a foreign accent originates in the perception, the storage or the production of L2 prosody. The investigation focuses on Japanese (as the target language, L2) and German (as the source language, L1), because the two languages have contrasting prosodic systems: with respect to F0, pitch accent is lexical in Japanese, whereas in German it is used with post-lexical or paralinguistic meaning; with respect to segmental length, Japanese has both vocalic and consonantal lexical length contrasts, whereas German shows only a limited vocalic contrast. The study therefore examines the acquisition of lexical pitch accent and of lexical consonantal length contrasts by German learners of Japanese.

The starting point of the investigation is the observation, in Experiment 1 (Chapter 2), of non-native-like productions of very frequently used Japanese words by German learners. In a semi-spontaneous production experiment, German learners of Japanese and Japanese native speakers produced Japanese and German words (Sumimasen, Konnichiwa and Entschuldigung, all meaning "excuse me" when used to call someone) in given fictional situations and repeated the same words three times. The analysis of the realisation of the Japanese lexical pitch accent and of segmental length showed that the German learners 1) constantly varied the phonologically fixed Japanese lexical pitch accent and 2) did not produce the Japanese segmental length structure in a native-like way.
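The F0 and duration measurements in Experiment 1 are standard acoustic analyses. As a minimal illustrative sketch only (the summary does not specify the measurement tooling actually used in the thesis), pitch contours of such recordings could be extracted with Praat via the parselmouth bindings; the file name below is a made-up placeholder.

```python
# Sketch: extract an F0 contour from a recording with Praat's parselmouth
# bindings. "sumimasen.wav" is a hypothetical file, not from the thesis.
import parselmouth  # pip install praat-parselmouth

snd = parselmouth.Sound("sumimasen.wav")
pitch = snd.to_pitch(time_step=0.01)       # F0 estimate every 10 ms
f0 = pitch.selected_array["frequency"]     # Hz; 0 marks unvoiced frames
times = pitch.xs()                         # frame times in seconds

voiced = [(t, f) for t, f in zip(times, f0) if f > 0]
print(f"{len(voiced)} voiced frames, "
      f"F0 range {min(f for _, f in voiced):.0f}-{max(f for _, f in voiced):.0f} Hz")
```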
These deviations of the L2 productions from the L1 productions led to the assumption that the L2 learners either heard L2 prosodic information differently from L1 speakers (= difficulties in the initial stage of speech perception) or stored it differently in their mental lexicon (= difficulties related to mental representations); alternatively, they may have had difficulties in articulation. These three stages of speech processing were examined step by step in Experiments 2, 3 and 4. Speech perception and speech production that either required access to mental representations or did not necessarily require it were tested by manipulating the cognitive load placed on working memory. In addition, task-irrelevant prosodic dimensions were added to the stimuli to test whether the increased demand on attentional control would destabilise L2 processing relative to L1 processing. The same stimuli were used across the experiments, and the same participants were tested.

Experiments 2 and 3 (Chapters 3 and 4) examined two of the three stages named above, namely speech perception and mental representations. More precisely, the question was whether L2 learners have difficulties perceiving the acoustic correlates of a prosodic contrast, or storing and retrieving them phonologically. Experiment 2 examined segmental length contrasts (vocalic and consonantal length contrasts), Experiment 3 pitch accent contrasts (flat vs. falling F0). Both experiments used AX (same–different) discrimination tasks in which participants responded "same" or "different" to each stimulus pair. A special feature of these experiments was that the interval between the two stimuli A and X (the interstimulus interval, ISI) was varied (300 ms vs. 2500 ms). The assumption was that the condition with the shorter ISI tested perception of the acoustic correlates of the contrast, whereas the condition with the longer ISI tested perception that engaged mental representations to a greater extent: with the longer ISI, the phonetic information of the first stimulus had to be stored in order to be compared with that of the second stimulus. This assumption is based on working memory theory, according to which phonetic information is lost after about 2 seconds. In addition, the complexity of the stimuli was increased by adding task-irrelevant flat and falling pitch movements to the segmental length contrasts to be discriminated (Experiment 2) and task-irrelevant segmental length to the pitch contrasts to be discriminated (Experiment 3). 24 Japanese native speakers, 48 German L2 learners of Japanese and 24 German non-learners (naïve listeners) were tested. The analyses were based on d′ values (a measure of sensitivity to the contrasts) and reaction times.

The results of Experiment 2 showed that the d′ values and reaction times of the Japanese native speakers remained constant across all experimental conditions.
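For readers unfamiliar with d′: the summary does not spell out the exact detection-theoretic model used in the thesis, but a minimal sketch of the textbook computation (z-transformed hit rate minus z-transformed false-alarm rate, with a log-linear correction) looks as follows; the trial counts are made up for illustration.

```python
# Sketch of a standard d-prime computation for an AX (same-different) task.
# This is the generic textbook formula, not necessarily the exact model
# used in the dissertation; the counts below are invented.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """z(hit rate) - z(false-alarm rate), with a log-linear correction so
    that rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# "Different" trials answered "different" count as hits; "same" trials
# answered "different" count as false alarms.
print(round(d_prime(hits=22, misses=2, false_alarms=5, correct_rejections=19), 2))
```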
As for the L2 learners and the non-learners, under the conditions with the lowest task demands (shorter ISI, no task-irrelevant falling pitch movement) they showed sensitivity to the non-native consonantal length contrast that was as high as that of the Japanese native speakers. Even the non-learners could discriminate the contrast, because it could be detected by comparison at the phonetic level. Such reliance on phonetic comparison, however, did not last long: as soon as the ISI became longer and the memory load increased, so that the phonetic information had to be processed more phonologically, the discrimination ability of the L2 learners and the non-learners decreased. The d′ values of the L2 learners dropped and differed from those of the native speakers, and those of the non-learners dropped markedly, so that the values of the L2 learners and the non-learners also differed from each other. When the task-irrelevant pitch movement came into play, the d′ values of the L2 learners and the non-learners differed from those of the Japanese native speakers already in the shorter-ISI condition, and this also held in the longer-ISI condition. The same pattern was found in the analysis of the reaction times: the reaction times of the non-learners did not differ from those of the Japanese native speakers in the flat-pitch, shorter-ISI condition, and only in the longer-ISI condition did the non-learners' reaction times become longer; in the falling-pitch condition, the reaction times of the three groups differed from one another already in the shorter-ISI condition. In addition, the reaction time analysis showed that the L2 learners generally took longer to respond than the non-learners. The non-learners' reaction times were as short as those of the Japanese native speakers as long as the task-irrelevant pitch movement was absent; with the task-irrelevant falling pitch movement, their reaction times lengthened and differed from those of the Japanese native speakers. The comparison between the results of the flat and falling pitch conditions showed a consistent effect of the task-irrelevant pitch movement on the discrimination of the non-native segmental length contrast: the consonantal length contrast presented with the falling pitch movement …
Recommended publications
  • Review: Winifred Strange (ed.), Speech Perception and Linguistic Experience: Issues in Cross-Language Research
    Book review, Phonetica 1999;56:105–107, of Winifred Strange (ed.), Speech Perception and Linguistic Experience: Issues in Cross-Language Research, York Press, Timonium, 1995, 492 pp., ISBN 0-912752-36-X. The volume collects the contributions to a Workshop on Cross-Language Perception held at the University of South Florida, Tampa, FL, USA, in May 1992, and may be said to be the first compilation strictly focused on theoretical and methodological issues of cross-language perception research. Part I, "Introduction", consists of a single chapter by the editor: an excellent historical review of cross-language studies in speech perception, from the basic phenomena (the constancy problem and the units of analysis in speech perception), through early work centred on categorical perception as the dominant paradigm of the 1960s and 1970s, to the findings of cross-language research of the 1980s and early 1990s.
  • The Action Simulation for Auditory Prediction (ASAP) Hypothesis
    Patel, A. D., & Iversen, J. R. (2014). The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis. Frontiers in Systems Neuroscience, doi: 10.3389/fnsys.2014.00057. Every human culture has some form of music with a beat, a perceived periodic pulse that structures musical rhythm and serves as a framework for synchronized movement to music. Against the view, dating back to Darwin, that the relevant neural mechanisms are general and widespread among animal species, the authors argue that beat perception is a complex brain function involving temporally precise communication between auditory and motor planning regions of the cortex (even in the absence of overt movement), with simulation of periodic movement in motor planning regions providing a neural signal that helps the auditory system predict the timing of upcoming beats.
  • Relations Between Speech Production and Speech Perception: Some Behavioral and Neurological Observations
    Levelt, W. J. M. Book chapter (Chapter 14). Levelt recalls the famous Bever and Weksel volume that was shelved and never appeared, in which Mehler and Savin's chapter "Language Users" defined the psycholinguist's core business as explaining our abilities both to produce and to understand language. That balance was quickly lost: with the exceptions of speech-error and speech-pausing research, the study of language use was effectively reduced to the study of language understanding, as in Johnson-Laird's 1974 Annual Review of Psychology formulation of the fundamental problem as "what happens if we understand sentences?"
  • Colored-Speech Synaesthesia Is Triggered by Multisensory, Not Unisensory, Perception
    Bargary, G., Barnett, K. J., Mitchell, K. J., & Newell, F. N. Psychological Science, Research Report (Trinity College Dublin). Although as many as 4% of people are estimated to experience some form of synaesthesia, little is understood about the level of information processing required to induce a synaesthetic experience. Using the McGurk effect, the authors tested 9 linguistic-color synaesthetes and found that the colors induced by spoken words are related to what is perceived (the illusory combination of audio and visual inputs) and not to the auditory component alone, indicating that synaesthesia is driven by late, perceptual processing rather than early, unisensory processing.
  • Perception and Awareness in Phonological Processing: the Case of the Phoneme
    Morais, J., & Kolinsky, R. (1994). Cognition, 50, 287–297. doi:10.1016/0010-0277(94)90032-9. Using the psychological reality of the phoneme to illustrate the need for a "levels-of-processing" approach to mental representations, the authors argue from behavioral and functional imaging data that there are unconscious representations of phonemes in addition to conscious ones: the former intervene in speech perception and (presumably) production, whereas the latter develop in the context of learning alphabetic literacy for reading and writing. Phonemes may also be the only phonological units to show a neural dissociation at the macro-anatomic level.
  • The Effects of Cognitive Load on the Perception of Foreign-Accented Words
    Bonath, L. M. (2016). The Effects of Cognitive Load on the Perception of Foreign-Accented Words. Master's thesis, Cleveland State University, ETD Archive 907 (thesis chairperson: Conor T. McLennan). https://engagedscholarship.csuohio.edu/etdarchive/907
  • Theories of Speech Perception
    Lecture slides contrasting the Motor Theory of speech perception (Liberman), on which listeners use motor information to compensate for the lack of invariants in the speech signal and speech perception is an innate, species-specific human skill, with auditory theories, on which speech perception derives from general properties of the auditory system and is not species-specific. The slides review Wilson et al. (2004), who found that listening to syllables activates premotor and primary motor areas involved in speech production in addition to the superior temporal gyrus; Eimas et al. (1971), who showed categorical perception of a VOT continuum in 4-month-old infants using a sucking-monitoring paradigm; and Kuhl & Miller's demonstration that chinchillas trained on endpoint "ba" and "pa" stimuli switch their responses to intermediate stimuli at about the same location as the English /b/–/p/ boundary.
  • Effects of Language on Visual Perception
    Lupyan, G., Abdel Rahman, R., Boroditsky, L., & Clark, A. (2020). Preprint submitted to Trends in Cognitive Sciences. The authors review behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language are observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection; a consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, such effects arise naturally from the interactive and predictive nature of perception.
  • How Infant Speech Perception Contributes to Language Acquisition
    Gervain, J., & Werker, J. F. (2008). Language and Linguistics Compass, 2(6), 1149–1170. doi:10.1111/j.1749-818x.2008.00089.x. An integrative review of infant speech perception emphasizing how young learners' perception of speech helps them acquire abstract structural properties of language: what is known about the perception of language at birth, how perception develops during the first two years of life, the general perceptual mechanisms recently shown to matter for speech perception and language acquisition, and the implications of these findings for language acquisition.
  • The Role of Socio-Indexical Information in Regional Accent Perception by Five to Seven Year Old Children
    Beck, E. L. (2014). Doctoral dissertation (Linguistics), University of Michigan (committee chair: Carmel O'Shannessy).
  • Speech Perception
    UC Berkeley Phonology Lab Annual Report (2010); Chapter 5 of Acoustic and Auditory Phonetics, 3rd edition (then in press). An introduction to speech perception as attention to the sounds of speech rather than to meaning alone: "we speak in order to be heard, in order to be understood" (Jakobson, Fant & Halle, 1952), so listeners normally focus on getting the words being said and often fail to notice speech errors or deliberate mispronunciations in ordinary conversation, yet notice those same errors when instructed to listen for mispronunciations (Cole, 1973). A sidebar suggests an informal mispronunciation-detection exercise (e.g., saying "biolochi" for "biology") to test whether conversation partners notice.
  • Accent Processing in Dementia
    Hailstone, J. C., Ridgway, G. R., Bartlett, J. W., Goll, J. C., Crutch, S. J., & Warren, J. D. (2012). Neuropsychologia, 50, 2233–2244. Accented speech conveys important nonverbal information about the speaker while presenting the brain with a non-canonical auditory signal to decode. The authors investigated the processing of non-native international and regional accents of English in patients with Alzheimer's disease (AD; n = 20) and progressive nonfluent aphasia (PNFA; n = 6) relative to healthy older controls (n = 35), using a novel battery assessing accent comprehension and recognition together with voxel-based morphometry of MR brain images in the larger AD group. Compared with healthy controls, both patient groups showed deficits of non-native accent recognition, and the PNFA group showed reduced comprehension of words spoken in international accents compared with a Southern English accent.