Running head: NEUROCOGNITIVE CROSSMODAL DEFICIT IN DYSLEXIA

Neurocognitive Bases of Crossmodal Integration Deficit in Dyslexia

Margaret M. Gullick (a, b) and James R. Booth (a, c)

(a) Richard & Roxelyn Pepper Department of Communication Sciences & Disorders, Northwestern University; (b) Center for Human Services Research, University at Albany, State University of New York; (c) Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA

This article has been accepted for publication in The Wiley Handbook of Developmental Disorders, edited by Guinevere Eden. Full citation is below:

Gullick, Margaret M., & Booth, James R. (in press). Neurocognitive bases of crossmodal integration deficit in dyslexia. In G. Eden (Ed.), The Wiley Handbook of Developmental Disorders.

Correspondence for this article should be addressed to James R. Booth (email: [email protected])

Introduction

Crossmodal integration is a critical component of successful reading, yet it has been studied less than reading's unimodal subskills. Proficiency with the sounds of a language (i.e., the phonemes) and proficiency with the visual representations of these sounds (graphemes) are both important and necessary precursors for reading (see Chapters 8 and 9), but the formation of a stable integrated representation that combines and links these aspects, and subsequent fluent and automatic access to this crossmodal representation, is unique to reading and is required for its success. Indeed, individuals with specific difficulties in reading, as in dyslexia, demonstrate impairments not only in phonology and orthography but also in integration. Impairments in crossmodal integration alone could result in disordered reading via disrupted formation of, or access to, phoneme–grapheme associations. Alternately, the phonological deficits noted in many individuals with dyslexia may lead to reading difficulties via issues with integration: children who cannot consistently identify and manipulate the sounds of their language will also have trouble matching these sounds to their visual representations, resulting in the manifested deficiencies.

We here discuss the importance of crossmodal integration in reading, both generally and as a potential specific causal deficit in the case of dyslexia. We examine the behavioral, functional, and structural neural evidence for a crossmodal, as compared to unimodal, processing issue in individuals with dyslexia relative to typically developing controls. We then present an initial review of work using crossmodal versus unimodal reading interventions and training programs aimed at ameliorating reading difficulties. Finally, we raise some remaining questions that point to potential areas for future research on this topic.

Crossmodal integration in reading

Multisensory processing is thought to take place when perceived stimuli carry information from multiple modalities that appears to co-occur, such that a causal (Seilheimer, Rosenberg, & Angelaki, 2014) or congruent (Alais, Newell, & Mamassian, 2010) relationship between the modalities can be assumed. Such is the case in audiovisual speech perception, in which the association between lip movements and speech sounds is natural and thus congruency or causality can be automatically inferred.
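One standard formalization of such integration in the multisensory literature (a general illustration, not a model proposed in this chapter) is reliability-weighted cue combination: when a common cause can be assumed, each modality's estimate is weighted by its reliability (inverse variance), and the combined estimate is more precise than either cue alone. The minimal sketch below works under that assumption; the function name and example values are ours.

```python
import numpy as np  # not strictly needed here, but standard for such models

def combine_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of an auditory
    and a visual estimate of the same quantity. Each cue is weighted by its
    inverse variance, so the combined estimate is pulled toward the more
    reliable modality, and its variance is lower than either cue alone."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    mu_av = w_a * mu_a + w_v * mu_v              # combined estimate
    var_av = 1 / (1 / var_a + 1 / var_v)         # combined (reduced) variance
    return mu_av, var_av

# Example: a noisy auditory cue and a sharper visual cue.
mu, var = combine_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
print(mu, var)  # 0.8, 0.8 -- dominated by the more reliable visual cue
```

Crucially, this kind of combination rule applies only when both modalities are externally presented and a common cause is inferred, which, as discussed next, is not the situation that skilled reading presents.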
As another audiovisual phenomenon, reading may seem parallel to speech perception, but there are significant differences between these two abilities that may explain why reading is especially impaired by crossmodal processing deficits. Reading is a unique case of crossmodal processing in which associations between information from different modalities must first be explicitly learned and then later applied when only one modality is present.

First, reading relies on arbitrary phoneme–grapheme associations that must be memorized. The acquisition of these reading-related crossmodal representations may differ from that for other stimuli, given the artificial nature of orthography. Auditory and visual speech information have an ecologically valid relationship, demonstrating both temporal synchrony and a meaningful (causal) relationship. Letter–sound mappings, in contrast, involve reasonably consistent co-occurrence, but graphemes have no natural relationship with their sounds. Temporal co-occurrence may thus not be sufficient for the creation of a crossmodal representation if a meaningful association between the information streams is not perceived. As such, linking the two modalities cannot depend simply on simultaneity but requires the recognition, through explicit instruction, of a consistent relationship to build this crossmodal congruence. Reading may therefore require more top-down than bottom-up attention for this mapping creation: crossmodal links may not be automatically perceived at first, so attention is needed to shape and select these relationships (Talsma, Senkowski, Soto-Faraco, & Woldorff, 2010). The acquisition of such a crossmodal pairing may thus require the initial presentation of both pieces of information within a reasonable time window (temporal coincidence), recognition of their association (congruence), and the creation of a stable integrated representation.

Second, reading typically involves the presentation of only visual information, which is necessary but not sufficient for successful performance. Appropriate auditory (phonological) information must be internally activated from previously created crossmodal representations and then applied to the visual items presented. In speech, by contrast, one stream (auditory) is both necessary and sufficient for perception, with the other (visual) providing redundant or, in some circumstances, complementary information; unlike in reading, perceivers need not generate the non-presented modality for successful perception. Computational models of multisensory integration have so far focused on potential principles underlying the processing of convergent external crossmodal presentations rather than on these internal activations (see Cuppini, Magosso, & Ursino, 2011; Martin, Meredith, & Ahmad, 2009); thus, they may be applicable to early letter–sound association acquisition but not to the later internal generation of phonology from print.

As such, there may be two distinct stages of multisensory processing for reading. First, crossmodal associations between letters and their sounds must be learned or acquired through simultaneous audiovisual presentation, such as an instructor pointing to a letter and enunciating its name or sound. Second, this information must then be applied when only unimodal information is presented, as in typical reading practice.
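To make the two stages concrete, here is a deliberately minimal toy sketch (our illustration, not a model from the literature); the function names and the simple co-occurrence counting rule are assumptions. Stage one accumulates letter–sound pairings from simultaneous audiovisual presentations; stage two "reads" from visual input alone by internally retrieving the most strongly associated sound.

```python
from collections import Counter, defaultdict

# Stage 1: acquisition. Letters and sounds are presented together (as when
# an instructor points to a letter while enunciating its sound), and the
# learner accumulates co-occurrence counts for each pairing.
pairings = defaultdict(Counter)

def present(grapheme, phoneme):
    """Simultaneous audiovisual presentation of a letter and a sound."""
    pairings[grapheme][phoneme] += 1

# Mostly consistent instruction, with one noisy (mispaired) trial.
for _ in range(9):
    present("c", "/k/")
    present("a", "/ae/")
    present("t", "/t/")
present("c", "/s/")  # an occasional inconsistent pairing

# Stage 2: application. Only visual input is available; phonology must be
# internally activated from the stored crossmodal associations.
def read_aloud(word):
    return [pairings[g].most_common(1)[0][0] for g in word]

print(read_aloud("cat"))  # ['/k/', '/ae/', '/t/']
```

Even this toy captures the asymmetry noted above: retrieval in the second stage succeeds only to the extent that the first stage built stable, consistent associations.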
Most work so far has focused either on the importance of acquiring these integrated crossmodal representations in children or on the fluent implementation of learned crossmodal associations in older participants, with findings from both lines of research supporting the idea of a deficit in dyslexia; the two stages have not yet been directly compared to determine whether one is particularly causative in the manifested reading difficulties or whether both are implicated. We here review the literature demonstrating significant and specific deficits in crossmodal processing at either stage in individuals with dyslexia, across a variety of tasks, including the brain areas underlying these differences, and posit several implications and questions arising from this body of work.

Computational models of crossmodal integration in reading

In computational models, crossmodal integration is implemented at the stage of internal translation between the presented graphemic information and the implicit phonological information that must be recovered from this code. The connectionist "triangle" model of reading (Harm & Seidenberg, 1999) explicitly includes such a stage, and direct or indirect "damage" to this module has significant negative consequences for reading outcomes (please see Chapter 3 by Plaut on neurocomputational principles of reading). In its most basic form, the triangle model includes orthographic, phonological, and semantic modules, each connected by nodes that learn, a posteriori, the relationships between shapes, sounds, and meanings. Orthography and phonology are thus related by a series of "hidden units," which code letter–sound and letter-group (e.g., rime)–sound mappings. These units allow the network to learn and acquire new letter–sound correspondences or rime families, and then to generalize to novel stimuli. Exception words are posited to still involve relationships at least at the single-letter level.

Harm, McCandliss, and Seidenberg (2003; see also Harm & Seidenberg, 1999) demonstrated that deficits like those found in developmental dyslexia may be simulated via "damage" either to the phonological module or to the hidden units themselves. First, there may be "phonological damage," similar to poor phonological awareness or phonological working memory, leading to under-specified auditory phonological representations. Such an impairment causes the model to be less sensitive to smaller units of language, and thus to represent fewer individual sounds or to have less stable representations. Instead, only larger-grain items like whole words may be stored. This reduced phonological sensitivity results in fewer and more poorly specified phoneme–grapheme mappings: it is
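The sketch below loosely illustrates the logic of such lesioning experiments; it is not Harm and colleagues' implementation (their model is far larger and trained on realistic corpora), and the network size, training regime, and damage level here are arbitrary assumptions. A tiny orthography-to-phonology network is trained through a layer of hidden units, and then most of those units are silenced, degrading letter–sound mapping without touching either unimodal layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy orthography -> phonology task: 8 "letters", each coded as a one-hot
# input, map to 8 one-hot "sounds" via a layer of hidden units (a drastic
# simplification of the triangle model's orthography-phonology pathway).
n_items, n_hidden = 8, 16
X = np.eye(n_items)   # orthographic inputs
Y = np.eye(n_items)   # phonological targets

W1 = rng.normal(0.0, 0.5, (n_items, n_hidden))   # orthography -> hidden
W2 = rng.normal(0.0, 0.5, (n_hidden, n_items))   # hidden -> phonology

def forward(W1, W2):
    H = np.tanh(X @ W1)      # hidden-unit activations
    return H, H @ W2         # phonological output

# Train with plain gradient descent on squared error.
lr = 0.5
for _ in range(2000):
    H, out = forward(W1, W2)
    err = (out - Y) / n_items
    gW2 = H.T @ err
    gW1 = X.T @ ((err @ W2.T) * (1.0 - H**2))
    W2 -= lr * gW2
    W1 -= lr * gW1

def accuracy(W1, W2):
    _, out = forward(W1, W2)
    return float(np.mean(out.argmax(axis=1) == Y.argmax(axis=1)))

print("intact network:", accuracy(W1, W2))  # learns the letter-sound mappings

# "Damage" to the hidden units: silence three quarters of them, leaving the
# orthographic and phonological layers themselves untouched.
W1_damaged = W1.copy()
W1_damaged[:, : (3 * n_hidden) // 4] = 0.0
print("damaged network:", accuracy(W1_damaged, W2))  # accuracy typically drops
```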