Phonology Versus Phonetics in Speech Sound Disorders
Chapter 10

WOLFRAM ZIEGLER
Institute of Phonetics and Speech Processing, Ludwig Maximilian University of Munich, Germany
Correspondence to Wolfram Ziegler: [email protected]

Abstract: Historically, aphasiologists have created a deep chasm between phonological and phonetic impairments of sound production. Today, there is still a fundamental separation of aphasic–phonological impairment from phonetic planning impairment (apraxia of speech), both in theoretical models and in clinical taxonomy, although several new developments in phonetics and phonology emphasize that phonetic substance interacts with phonological structure. These developments have not yet found their way into neurolinguistic theories of sound production impairment. This chapter focuses on the question of whether modern theories of aphasic phonological impairment provide appropriate frameworks for studying the potential links or divides between phonetic and phonological sound production impairment. Three accounts are discussed: (1) the phonological mind theory proposed by Berent (2013), (2) connectionist theories in the tradition of Dell (1986), and (3) optimality theory and harmonic grammar approaches developed by Prince and Smolensky (2004) as well as Smolensky and Legendre (2006). A common property of these frameworks is that they lack any theoretical handle by which a potential role of phonetic substance in the origin of phonological errors can be grasped. As a consequence, these theories are inappropriate to illuminate the pathomechanisms underlying phonological impairment in aphasia. In a final section, several approaches are introduced that appear more promising in this regard.

Speaking is one of the most complex and, at the same time, one of the easiest of all human motor activities.
On the one hand, it is tremendously complex, because hundreds of muscle commands need to be issued by the brain every second when people speak (Lenneberg, 1967), and these activities need to be synchronized with the cognitive and linguistic processes associated with spoken language production (Levelt, 1989). Speaking is easy, on the other hand, because every healthy child can learn it effortlessly and—unlike writing—without any specific education, and because people do not pay even the slightest attention to the movements of their tongues and lips when they talk. Speaking may accompany almost all other cognitive or motor activities of daily life, without any effort and without a noticeable number of errors.

SPEECH MOTOR CONTROL IN NORMAL AND DISORDERED SPEECH

Patterns of Sound Production Impairment After Left Hemisphere Stroke

Patients with a left hemisphere stroke may lose this capacity. When they speak, their words sound distorted, sometimes beyond recognition. One particular subpopulation of these patients, usually those with a lesion in the anterior portion of the language area, suffers from a disorder termed apraxia of speech (AOS). These patients appear to have lost the ease of speaking and seem to be helplessly confronted with the plethora of muscle contractions that need to be controlled for the production of a simple word or phrase, although they still seem to “know how the words should sound” (Ziegler, Aichert, & Staiger, 2012). Their speech is halting and effortful, often with problems initiating an utterance, with groping movements, multiple false attempts, and self-corrections. They make many speech errors¹ that often lead to phonetically distorted sound productions. Auditory repetition of the German word Wald (English = forest) may, for instance, sound like

/valt/ → [ffːː..ɰalːtˢ] (1)

Often these patients also commit phonemic errors of a well-articulated quality, such as

/bluːmə/ → [bRuːmə] (English = flower)
(2)

Because they know from life-long experience how easy it has always been to talk, they are desperate when facing this awful struggling and groping for every single syllable after their stroke. Patients with AOS are generally assumed to suffer from a phonetic planning impairment (e.g., Ziegler, 2002).

A different group of stroke patients, usually with lesions in more posterior perisylvian (i.e., temporo-parietal) regions of the left hemisphere, may experience striking sound production problems as well, yet of a different kind: Their speech is much more fluent most of the time, and their output sounds largely well-articulated, despite the frequent occurrence of phonemic errors of the sort mentioned earlier (see Example 2). Quite complex errors may occur as well. As an example, a patient with a left inferior parietal stroke produced, in an auditory repetition task, the following errors (excerpted from a longer list):

/fRɔʃ/ → [ʃɔʃ] (English = frog) (3)
/RIŋ/ → [nIl] (English = ring)
/dax/ → [ʃtɔtʃ] (English = roof)
/pRInt͡ sɛsIn/ → [mɛlˈtIkin] (English = princess)

This condition is called (aphasic) phonological impairment. It is one of the most frequent symptoms found in patients with aphasia and may occur in virtually every aphasia syndrome. Patients with this impairment usually lack the conspicuous groping movements of the articulators seen in the apraxic speakers—that is, errors of the sort mentioned earlier (see Example 1) would be very atypical. Moreover, they often seem to be much less concerned about their failures or, in some cases, may even be unaware of them.

¹ All speech errors mentioned in this chapter are authentic examples from German-speaking patients with AOS or with postlexical phonological impairment, respectively.
Patients who present predominantly with this symptom—with otherwise good comprehension, preserved word retrieval, and preserved syntax—are clinically classified as having conduction aphasia; however, this constellation is rare. More often, phonological impairment is accompanied by impaired comprehension, and in extreme cases it may present as a largely incomprehensible “phonemic jargon” (Butterworth, 1992). In the cognitive neurolinguistic tradition, phonological impairment is subdivided into a lexical and a postlexical subtype and is ascribed to a processing level where phonological form information is retrieved (Goldrick & Rapp, 2007; Schwartz, 2014).

The Motor–Linguistic Dualism of Sound Production Impairment

Of note, aphasiologists have construed a deep chasm between phonological impairment and AOS by classifying the former as an aphasic (i.e., nonmotor) symptom and the latter as a motor (i.e., nonaphasic) symptom. Being part of aphasia, phonological impairment is allocated to the language domain and is therefore, by virtue of standard theories of aphasia (e.g., Caramazza & Zurif, 1976), considered an amodal dysfunction of purely abstract processes and representations—that is, a disorder affecting the core of the language module. AOS, by contrast, is said to interfere with one of the tools through which language is implemented in the physical world—that is, the vocal tract motor system. As is taught in linguistics courses, the implements of linguistic structure—script, gesture, or speech—are insubstantial to linguistic competence; hence, the mechanisms deemed to underlie aphasic phonological impairment are entirely divorced from those underlying AOS. In this understanding, phonological impairment goes more with syntactic or semantic impairment than with AOS, because—like the former (and unlike the latter)—it is part of the language module.
The concept of “pure AOS,” which has always played an important role in clinical research (e.g., Graff-Radford et al., 2014), is revealing in this regard, because it views the absence of agrammatism or writing impairment as a crucial criterion for the diagnosis of AOS and as an argument against a linguistic speech problem—not considering that phonological impairment can be dissociated from other aphasic symptoms as well (for a discussion, cf. Ziegler, 2002).

The virtually insurmountable divide between a phonological and a phonetic level of impairment has created models in which the processing of linguistic units, including phonological representations, is encapsulated from the physical implement of spoken language. In most cognitive neurolinguistic models, speech is an unspecified appendix to lexical and response buffer modules whose properties are uninfluenced by the properties of the speech motor system (e.g., Patterson & Shewell, 1987). Likewise, Dell’s (1986) spreading activation model of word production ends where the motor processes of speaking begin. Phonetic implementation in these models is not only strictly feedforward, but the two parts of the implementation mechanism are even entirely alien to each other.

In clinical practice, this taxonomic situation puts high pressure on clinicians in their diagnostic decision about whether a patient’s speech sound impairment is apraxic or phonologic, because in making this decision they need to balance between two fundamentally distinct worlds—movement and language. The situation can become rather challenging, because AOS and phonological impairment (especially of the conduction aphasia type) may appear very similar on the surface, and because the two impairments are influenced by similar critical variables—for example, word length, lexical frequency, or phonological complexity.
Therefore, the requirement to deliver a judgment of such primal taxonomic importance and theoretical momentousness is often at variance