Running head: NEUROCOGNITIVE CROSSMODAL DEFICIT IN DYSLEXIA

Neurocognitive Bases of Crossmodal Integration Deficit in Dyslexia

Margaret M. Gullicka,b and James R. Bootha,c

aRichard & Roxelyn Pepper Department of Communication Sciences & Disorders, Northwestern

University; bCenter for Human Services Research, University at Albany, State University of New

York; cDepartment of Psychology and Human Development, Vanderbilt University, Nashville,

TN, USA

This article has been accepted for publication in The Wiley Handbook of Developmental

Disorders, edited by Guinevere Eden. Full citation is below:

Gullick, Margaret M., & Booth, James R. (In Press) Neurocognitive Bases of Crossmodal

Integration Deficit in Dyslexia. In G. Eden (Ed.). The Wiley Handbook of Developmental

Disorders.

Correspondence for this article should be addressed to James R. Booth (email: [email protected])

Introduction

Crossmodal integration is a critical component of successful reading, and yet it has been less studied than reading’s unimodal subskills. Proficiency with the sounds of a language (i.e., the phonemes) and with the visual representations of these sounds (graphemes) are both important and necessary precursors for reading (see Chapters 8 and 9), but the formation of a stable integrated representation that combines and links these aspects, and subsequent fluent and automatic access to this crossmodal representation, is unique to reading and is required for its success. Indeed, individuals with specific difficulties in reading, as in dyslexia, demonstrate impairments not only in phonology and orthography but also in integration. Impairments in only crossmodal integration could result in disordered reading via disrupted formation of or access to phoneme–grapheme associations. Alternately, the phonological deficits noted in many individuals with dyslexia may lead to reading difficulties via issues with integration: children who cannot consistently identify and manipulate the sounds of their language will also have trouble matching these sounds to their visual representations, resulting in the manifested deficiencies. We here discuss the importance of crossmodal integration in reading, both generally and as a potential specific causal deficit in the case of dyslexia. We examine the behavioral, functional, and structural neural evidence for a crossmodal, as compared to unimodal, processing issue in individuals with dyslexia in comparison to typically developing controls. We then present an initial review of work using crossmodal- versus unimodal-based reading interventions and training programs aimed at the amelioration of reading difficulties. Finally, we present some remaining questions reflecting potential areas for future research into this topic.

Crossmodal integration in reading

Multisensory processing is thought to take place for stimuli for which information from multiple modalities appears to co-occur and a causal (Seilheimer, Rosenberg,

& Angelaki, 2014) or congruent (Alais, Newell, & Mamassian, 2010) relationship between modalities can be assumed. Such is the case for audiovisual speech, in which the association between lip movements and speech sounds is natural and thus congruency or causality can be automatically inferred. As another audiovisual phenomenon, reading may seem to be parallel to speech perception, but there are significant differences between these two abilities that may explain why reading is especially impaired by crossmodal processing deficits.

Reading is a unique case of crossmodal processing wherein associations between information from different modalities must first be explicitly learned and then later applied when only one modality is present.

First, reading relies on arbitrary phoneme–grapheme associations that must be memorized. The acquisition of these reading-related crossmodal representations may be different from that for other stimuli, given the artificial nature of orthography. Auditory and visual speech information have an ecologically valid relationship, demonstrating both temporal synchrony and a meaningful (causal) relationship. Letters co-occur reasonably consistently with their sounds but have no natural relationship with them. Again, temporal co-occurrence may not be sufficient for crossmodal representation creation if a meaningful association between the information streams is not perceived. As such, linking the two modalities cannot depend simply on simultaneity but requires the recognition, from explicit instruction, of a consistent relationship to build this crossmodal congruence. Reading may thus require more top-down than bottom-up attention for this mapping creation: crossmodal links may not be initially automatically perceived, and so attention is needed to shape and select these relationships (Talsma, Senkowski, Soto-Faraco, & Woldorff, 2010). The acquisition of such a crossmodal pairing may thus require initial presentation of both pieces of information within a reasonable time window (temporal coincidence), recognition of their association (congruence), and the creation of a stable integrated representation.

Second, reading typically involves the presentation of only visual information, which is necessary but not entirely sufficient for successful performance. Appropriate auditory

(phonological) information must be internally activated from previously created crossmodal representations and then applied to the visual items presented. In contrast, in speech, one stream

(auditory) is both necessary and sufficient for perception, with the other (visual) providing redundant—or, in some circumstances, complementary—information; participants’ generation of the non-presented modality is not a requirement for successful perception, as it is in reading.

Computational models of multisensory integration have so far focused on potential principles underlying the processing of convergent external crossmodal presentations and not these internal activations (see Cuppini, Magosso, & Ursino, 2011; Martin, Meredith, & Ahmad, 2009); thus, they may be applicable to early letter–sound association acquisition but not to the later internal transformation of these associations.

As such, there may be two distinct stages of multisensory processing for reading. First, crossmodal associations between letters and their sounds must be learned or acquired through simultaneous audiovisual presentation, such as an instructor pointing to a letter and enunciating its name or sound. Second, this information must then be applied when only unimodal information is presented, as in typical reading practice. Most work has so far focused on either the importance of acquiring these integrated crossmodal representations in children or on the fluent implementation of learned crossmodal associations in older participants, with findings from both lines of research supporting the idea of a deficit in dyslexia; the two stages have not yet been specifically compared in order to determine if one is particularly causative in the manifested reading difficulties or if both are implicated. We here review the literature demonstrating significant and specific deficits in crossmodal processing at either stage in individuals with dyslexia, including the brain areas underlying these differences, in a variety of tasks, and posit several implications and questions arising from this body of work.

Computational models of crossmodal integration in reading

In computational models, crossmodal integration is implemented at the stage of internal translation between the presented graphemic information and the implicit phonological information that needs to be recovered from this code. The connectionist “triangle” model of reading (Harm & Seidenberg, 1999) explicitly includes such a stage, and direct or indirect

“damage” to this module has significant negative consequences for reading outcomes (please see

Chapter 3 by Plaut on neurocomputational principles of reading). In its most basic form, the triangle model includes orthographic, phonological, and semantic modules, each connected by some a posteriori nodes that learn the relationships between shapes, sounds, and meanings.

Orthography and phonology are thus related by a series of “hidden units,” which code letter–sound and letter group (e.g., rime)–sound mappings. These units allow the network to learn and acquire new letter–sound correspondences or rime families, and then to generalize to novel stimuli. Exception words are posited to still involve relationships at least at the single-letter level.

Harm, McCandliss, and Seidenberg (2003; see also Harm & Seidenberg, 1999) demonstrated that deficits as found in developmental dyslexia may be simulated via “damage” to either the phonological module or to the hidden units themselves. First, there may be

“phonological damage,” similar to poor phonological awareness or phonological working memory, leading to under-specified auditory phonological representations. Such an impairment causes the model to have less sensitivity to smaller units of language, and thus to represent fewer individual sounds or to have less stable representations. Instead, only larger-grain items like whole words may be stored. This reduced phonological sensitivity results in fewer and more poorly specified phoneme–grapheme mappings: it is significantly more difficult for the model to learn crossmodal letter–sound mappings when representations of the sounds themselves are more fragile. Crossmodal representations may then be created only for whole words, potentially without the internal reinforcing ties to words with similar onset or rime segments which allow for rule generalization. Under this computational circumstance, both phonological processing and reading are impaired, with performance particularly reduced for nonword or novel word reading. Importantly, modeled “interventions” that only addressed the causative phonological

“damage” were not effective in remediating the reading impairments seen (at least after any orthographic exposure), as they did not specifically repair the integrated crossmodal representations; in contrast, phonics-based “interventions” that included letter–sound mappings showed model recovery almost up to typical levels even after a significant delay in implementation (Harm, McCandliss, & Seidenberg, 2003). Thus, even if phonological processing demonstrates improved small-grain sensitivities, the hidden unit crossmodal mappings may remain only at large grain sizes without explicit letter–sound instruction.

An alternate computational model of reading impairment involves leaving the phonological module intact but reducing the number of hidden units between orthography and phonology. This type of damage is one way to simulate a reading delay, as may be seen in surface dyslexia. Without a sufficient array of hidden units, crossmodal mappings are more difficult to acquire and fewer are stored. Individual phoneme–grapheme associations may still be learned, as these relationships are generally acquired first, but the larger grain size representations may then suffer. Such damage appears to result in especially pronounced deficits in exception word reading: these items do not have regular phoneme–grapheme correspondences and must be learned at a whole-word level. These mappings must be uniquely coded in the hidden units, but these units are severely limited. Novel word reading may also be somewhat impaired, as even rime family representations may be less overlapping and thus less generalizable. Interventions that are aimed at increasing the quality and quantity of these hidden unit mappings are most effective at improving reading outcomes.
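To make the two simulated “damage” types concrete, the toy sketch below (in Python, with invented stimuli, layer sizes, and training settings; it is not the published Harm and Seidenberg implementation, which also includes semantics and attractor dynamics) trains a single hidden layer to map arbitrary orthographic codes onto phonological codes. Corrupting the phonological targets stands in for under-specified phonological representations, while shrinking the hidden layer stands in for a reduced pool of crossmodal mapping units.

```python
# A toy sketch (not the published implementation) of the orthography ->
# hidden units -> phonology pathway of the triangle model, with the two
# "damage" manipulations discussed above:
#   (1) corrupted phonological targets  (stands in for phonological damage)
#   (2) a reduced number of hidden units (stands in for hidden-unit damage)
# All stimuli, layer sizes, and training settings are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_ORTH, N_PHON = 20, 20      # sizes of the orthographic and phonological codes
N_WORDS = 40                 # toy "lexicon" of arbitrary letter-sound pairings

orth = rng.integers(0, 2, size=(N_WORDS, N_ORTH)).astype(float)
phon = rng.integers(0, 2, size=(N_WORDS, N_PHON)).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_and_test(n_hidden, phon_noise=0.0, epochs=3000, lr=1.0):
    """Train orthography->hidden->phonology weights with plain backpropagation.

    phon_noise randomly flips phonological target features, mimicking
    under-specified phonological representations; n_hidden limits the pool
    of crossmodal mapping units.
    """
    targets = phon.copy()
    if phon_noise > 0:
        flips = rng.random(targets.shape) < phon_noise
        targets = np.abs(targets - flips.astype(float))
    W1 = rng.normal(0, 0.5, size=(N_ORTH, n_hidden))
    W2 = rng.normal(0, 0.5, size=(n_hidden, N_PHON))
    for _ in range(epochs):
        h = sigmoid(orth @ W1)                         # hidden (crossmodal) units
        out = sigmoid(h @ W2)                          # retrieved phonology
        delta_out = (out - targets) * out * (1 - out)  # output-layer error signal
        delta_hid = (delta_out @ W2.T) * h * (1 - h)   # backpropagated to hidden layer
        W2 -= lr * (h.T @ delta_out) / N_WORDS
        W1 -= lr * (orth.T @ delta_hid) / N_WORDS
    # Score against the *true* phonology: proportion of features reproduced.
    out = sigmoid(sigmoid(orth @ W1) @ W2)
    return np.mean((out > 0.5) == (phon > 0.5))

print("intact model        :", train_and_test(n_hidden=30))
print("phonological damage :", train_and_test(n_hidden=30, phon_noise=0.15))
print("fewer hidden units  :", train_and_test(n_hidden=4))
```

In this toy setting, both damaged variants tend to reproduce the phonological code less faithfully than the intact network, loosely paralleling the simulation results summarized above.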

While the phonological, orthographic, and multisensory hidden unit modules of the connectionist model are of obvious importance in dyslexia, semantics may also play an important role for system modulation or compensation, including in reading-related crossmodal processing.

Semantic knowledge may provide some support for creating or accessing internal integrated representations, especially for irregular or exception words, as demonstrated by the frequency-by-regularity interaction. Reading of high-frequency exception words, whose frequency and familiarity may allow for rapid access, is typically only slightly slower than that of regular words, whereas low-frequency exception word reading is markedly impacted. This effect is exaggerated in individuals with semantic dementia, demonstrating that without semantic support, reading of low-frequency exception words that must be memorized and recognized as whole units is harder (Woollams, Lambon Ralph, Plaut, & Patterson, 2007). In contrast, dyslexia may involve intact semantics but impaired phonological and/or crossmodal processing. It is thus possible that relatively strong semantic skill could be used as compensation for weakened crossmodal representations to bolster reading, especially for words that place greater demands on phonological and orthographic mapping like exception words. Individuals with dyslexia do show increased semantic priming effects (Plaut & Booth, 2000), demonstrating an increased influence of semantics on reading. Semantic knowledge may thus be useful in intervention, as it may be less affected by crossmodal processing difficulties.

Integrated letter–sound mappings are likely found at the stage between orthographic and phonological processing, and may be built up given reasonable unimodal input and explicit instruction. Typical crossmodal representations likely include both individual letter–sound and larger rime family or whole word representations, both of which are needed for novel word (or nonword) and exception word reading. Access to these crossmodal representations should become automatic for good or practiced readers, allowing fluent translation between modalities.

Finally, once print exposure has begun, reading deficits from multiple causes are best addressed by systematically strengthening these integrated crossmodal relationships, even from a relatively early start point; such training is more effective than phonology-only practice.

Behavioral evidence for importance of crossmodal integration in reading

Multiple skills have been found to be significantly predictive of future reading ability, including unimodal perceptual sensitivity (dynamic auditory and visual processing, see Boets,

Wouters, van Wieringen, De Smedt, & Ghesquiere, 2008), but crossmodal integration ability is an early (and may be the primary) cognitive predictor of reading across the pre-elementary and elementary years. Young children’s crossmodal fluency has often been tested through their ability to identify or name letters in a different modality than presented (e.g., orally reading printed letters or recognizing which printed letter matches an auditory presentation). This letter-name knowledge reflects a child’s ability to link letters’ shapes with their sounds and fluently internally translate between these components. (Letter-sound knowledge is a similar skill that may precede letter-name knowledge in Britain, where pre-reading instruction focuses on such sound associations instead of letters’ names, as in the US (Ellefson, Treiman, & Kessler, 2009), but both versions are defined by reliance on a stable reading-like association between visual and auditory information.) For example, kindergarten letter-name knowledge may be significantly predictive of first-grade letter-sound knowledge and reading skill, but phonological awareness is not (Evans, Bell, Shaw, Moretti, & Page, 2006). Similarly, kindergarten letter-naming speed was shown to be strongly associated with subsequent reading progress (Walsh, Price, & Gillingham,

1988). Foulin (2005) found letter-name knowledge in 5- to 6-year-old children to support phonemic sensitivity and to predict phonological awareness itself. Foulin posited that letter-name knowledge “…allows children to bridge the gap from visual-cue strategy to phonetic-cue strategy in early literacy, laying the foundation for formal literacy” (p. 135); that is, letter-name knowledge served to support the crossmodal link between phonemes and graphemes, which is critical for future reading performance. Such automaticity in letter–name or letter–sound translation may continue to develop over years, with typically developing children’s response times improving through elementary school (Froyen, Bonte, van Atteveldt, & Blomert, 2009), resulting in the mature fluent use of this crossmodal information for adult skilled reading.

Crossmodal letter skill may also be captured in part by measures of rapid automatized naming (RAN). RAN reflects skill in rapidly (orally) naming visual stimuli (including letters), requiring fluent visual-to-verbal translation; indeed, RAN scores and reading fluency tend to be positively correlated, at least for early readers (Swanson, Trainin, Necoechea, & Hammill, 2003; see Chapter 17 for more detail). However, RAN may only weakly predict silent reading ability, which still requires crossmodal translation between graphemes and phonemes (Papadopoulos,

Spanoudis, & Georgiou, 2016): thus, oral fluency is not synonymous with decoding ability.

Additionally, while RAN may indicate skill in retrieving cross-modality information, it does not reflect the ability to form such linkages. Learning the relationship between letters and sounds is a crucial step in reading acquisition: this process relies on crossmodal integration, but not yet on speeded retrieval, corroborating the idea that these skills are not equivalent.

Behavioral evidence for specific deficit in crossmodal integration in reading dyslexia

While typically developing children are building and reinforcing these letter–sound associations, children at risk for dyslexia demonstrate hampered integration abilities, and individuals diagnosed with dyslexia continue to show specific deficits in this skill. Importantly, this difficulty may be particularly expressed in timed or speeded tasks: two studies have noted typical accuracies for letter–sound matchings between children with dyslexia and control peers, but comparatively slowed response times (Froyen, Willems, & Blomert, 2011) or increased error rates and fewer words decoded in timed tasks (Aravena, Snellings, Tijms, & van der Molen,

2013) for dyslexics. As such, use of these letter–sound associations may not be as fluent as in typical development.

Snowling (1980) demonstrated a deficit in nonword recognition in individuals with dyslexia. More specifically, participants were presented with visual or auditory nonwords and were asked to determine whether a second presentation in the same or in the converse modality matched (was the same nonword) or did not match. Typically developing and dyslexic group accuracies did not differ on unimodal (same modality) recognitions, but children with dyslexia were significantly impaired on visual-then-auditory nonword recognition. While control participants showed increasing scores with age, likely as they came to fluently implement grapheme–phoneme correspondences to decode and match the nonwords, children with dyslexia did not improve with age. Snowling hypothesized that dyslexics’ poor performance may be due to specific difficulties in internal grapheme–phoneme conversion and decoding. Interestingly, Harrar et al. (2014) also noted a particular impairment on visual-to-auditory responses (in comparison to auditory-to-visual) in individuals with dyslexia, referring to it as a “sluggish attention shifting in one direction”. These visual-then-auditory deficits are consistent with a difficulty in reading, which requires the same conversion direction.

Individuals with dyslexia also demonstrated a weakened influence of orthography on unimodal phonological processing relative to their typically developing peers (Landerl, Frith, &

Wimmer, 1996; see also Wimmer and Richlan Chapter 18), potentially because they have less integrated orthographic and phonological representations (Meyler & Breznitz, 2003). Further, both children and adults with dyslexia have impaired crossmodal (visual-verbal) paired-associate learning for words and nonwords, in comparison to unimodal pairs (see Hulme, Goetz, Gooch,

Adams, & Snowling, 2007; Messbauer & de Jong, 2003). These results point to a lessened automaticity in forming and subsequently accessing integrated representations. Discrimination of external crossmodal information may also be impaired, as children with dyslexia required lengthened stimulus-onset asynchronies for nonlinguistic audiovisual, but not unimodal visual or auditory, trials to reject the idea that item presentation was synchronous (Laasonen, Tomma-

Halme, Lahti-Nuuttila, Service, & Virsu, 2000). In addition, only difficulties in letter–sound association (i.e., integration) were predictive of the development of first-grade reading difficulties in a group of at-risk kindergarteners; phonological working memory, lexical processing, and awareness were not predictive (Blomert & Willems, 2010). Audiovisual associative ability, whether for letters and sounds, words, or abstract non-linguistic stimuli, thus appears to both reflect crossmodal integration skill and predict reading performance.

Neuroimaging of crossmodal integration in reading

Given this behavioral literature, it is clear that crossmodal integration is a critical component of reading and that individuals with dyslexia demonstrate significant, and potentially causal, deficits in this domain. The neural underpinnings of this crossmodal deficit specifically and crossmodal processing more generally have been examined using functional, and more recently structural, neuroimaging techniques. Activity in key regions and informative evoked potential components have been identified and contrasted between adult and child typical and dyslexic participants in tasks comparing unimodal and crossmodal integrative processing. It is important to note that any method aimed at studying multisensory integration in the brain, or defining areas particularly sensitive to multisensory information, will have particular caveats: for example, a criterion of superadditivity in functional MRI (fMRI) may accurately reflect areas involved in integration, but may be too conservative to find all such areas (see Stevenson et al.,

2014 for review). The work reviewed here varies in how “integration” or multisensory sensitivity is defined between studies, but the regions and components discussed are consistently important in reading-related integration across techniques.
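As a concrete illustration of this caveat, the sketch below contrasts the superadditivity criterion (the audiovisual response must exceed the sum of the unimodal responses) with the more liberal maximum criterion (the audiovisual response must exceed the larger unimodal response) on simulated per-voxel response estimates. The values and thresholds are invented and greatly simplified relative to the statistical procedures reviewed by Stevenson et al. (2014), but they show why the conservative test can miss regions that other definitions of multisensory enhancement would flag.

```python
# A minimal sketch of two common fMRI criteria for flagging "multisensory"
# voxels, computed on invented per-voxel response estimates (betas):
#   superadditivity : AV > A + V         (conservative)
#   max criterion   : AV > max(A, V)     (a more liberal enhancement test)
import numpy as np

rng = np.random.default_rng(1)
n_vox = 1000
beta_a = rng.normal(1.0, 0.5, n_vox)                              # auditory-only
beta_v = rng.normal(1.0, 0.5, n_vox)                              # visual-only
beta_av = beta_a + 0.6 * beta_v + rng.normal(0.5, 0.5, n_vox)     # audiovisual

superadditive = beta_av > (beta_a + beta_v)
enhanced = beta_av > np.maximum(beta_a, beta_v)

print("superadditive voxels      :", int(superadditive.sum()))
print("max-criterion voxels      :", int(enhanced.sum()))
print("enhanced but not superadd.:", int((enhanced & ~superadditive).sum()))
# The conservative criterion flags far fewer voxels here, echoing the caveat
# above that superadditivity may miss genuinely integrative regions.
```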

Posterior superior temporal sulcus and planum temporale

The bilateral posterior superior temporal sulcus (pSTS) and/or the planum temporale (PT) are thought to be particularly important in the use of integrated heteromodal representations. The pSTS and PT have been proposed as a particular multimodal “convergence zone” for integrated information, whether in crossmodal association acquisition, access, or storage. Increased pSTS activity has been found for linguistic and non-linguistic crossmodal information processing, including audiovisual versus unimodal speech, and images of objects and their associated sounds presented together versus separately (see Beauchamp, Lee, Argall, & Martin, 2004; Calvert,

2001 for review; Werner & Noppeney, 2010). Thus, it is not surprising that this region also demonstrates particular sensitivity to the combination of orthography and phonology, or auditory and visual reading information. A distinct crossmodal-sensitive region is consistent with computational results from a self-organizing model of multisensory integration: bimodal inputs generated strong activations in a unique region between the unimodal foci (here, auditory and visual cortices; see Martin, Meredith, & Ahmad, 2009); indeed, such activity may be greater than for the unimodal stimuli themselves (i.e., multisensory enhancement).

van Atteveldt and colleagues have systematically demonstrated audiovisual activity in these areas with simple letter-sound stimuli in typically developing participants. Adults asked to passively listen to letter and sound presentations demonstrated activity in the bilateral pSTS and

PT for unimodal trials but showed significantly increased activity for simultaneous bimodal stimulus presentations (van Atteveldt, Formisano, Goebel, & Blomert, 2004). Specifically, congruent letter-sound trials show enhanced activity, and incongruent suppressed activity, in both synchronous (van Atteveldt, Formisano, Goebel, & Blomert, 2007) and asynchronous (van

Atteveldt, Formisano, Blomert, & Goebel, 2007) presentations (see Figure 11.1).

Figure 11.1: The bilateral posterior STS/STG (in green) is sensitive to unimodal (red, green lines) letter and sound presentations, but is more responsive to bimodal incongruent (yellow) and especially bimodal congruent (blue) presentations. The bilateral PT/HS (Heschl's sulcus, in orange) is sensitive to unimodal auditory (red line) stimuli, but is more responsive to bimodal congruent (blue) stimuli. PP = planum polare. Source: van Atteveldt et al. 2004, Figure 2, p. 274. Reproduced with permission from Cell Press/Elsevier.

Identification of the auditory stimulus in congruent audiovisual presentations was speeded as compared to unimodal presentation identification, indicating an automatic influence of the task-irrelevant external visual information on auditory processing via these integrated crossmodal representations (Blau, van Atteveldt, Formisano, Goebel, & Blomert, 2008).

Individuals with dyslexia do not show this crossmodal enhancement in the pSTS or PT.

Unimodal letter or sound presentations evoked temporal and occipitotemporal activities similar to those of control participants, but adults with dyslexia did not show temporoparietal increases for congruent crossmodal presentations, nor was there a decrease for incongruent items. This absence of any crossmodal activity modulation may indicate that dyslexic adults do not automatically access integrated letter-sound representations, unlike typical adults, potentially because their internal crossmodal representations are weaker (Blau, van Atteveldt, Ekkebus,

Goebel, & Blomert, 2009). Individuals with dyslexia also demonstrate decreased left inferior parietal lobule (IPL) activity specifically for crossmodal trials in a lexical decision task as compared to controls; further, this activity was significantly correlated with standardized spelling ability, wherein higher activity was associated with better spelling (Kast, Bezzola, Jancke, &

Meyer, 2011). Individuals who may be less able to draw on integrated representations thus demonstrated decreased temporoparietal activity, potentially indicating decreased sensitivity to multimodal stimuli in a variety of tasks.

Children with dyslexia also fail to show an influence of crossmodal information on activity and performance as compared to their typically developing peers. Dyslexic children did not show increased pSTS or PT activity for congruent bimodal letter-sound presentations versus incongruent or unimodal presentations, and further showed response reductions to crossmodal stimuli in unimodal visual (fusiform gyrus, FG) and auditory (anterior STG, Heschl’s gyrus) regions (Blau et al., 2010; see Figure 11.2).

Figure 11.2: While typically developing children demonstrate greater activity in the left PT for bimodal (or multisensory, MS) congruent (purple) than incongruent (green) letters and sounds, children with dyslexia do not show such congruency effects. Source: Blau et al. 2010, Figure 3A, p. 874. Reproduced with permission of Oxford University Press.

Crossmodal association usage may thus be “an emergent property of learning to read that develops inadequately in dyslexic readers, presumably as a result of a deviant interactive specialization of neural systems for processing auditory and visual linguistic inputs” (Blau et al.,

2010, p. 878). Dyslexic children may be less sensitive to external crossmodal presentations and thus fail to develop robust internal integrated letter–sound associations, leading to reading difficulties that continue into adulthood.

Crossmodal activity decreases associated with reading deficits have also been demonstrated in active tasks using stimuli more complicated than individual letters. McNorgan,

Awati, Desroches, and Booth (2014) examined the relationship between reading skill and congruent versus incongruent presentations of crossmodal versus unimodal stimuli. Participants were asked to perform a word rhyme judgment task in three modality conditions (unimodal visual, unimodal auditory, crossmodal audiovisual). Congruent trials were ones in which the paired words matched on both phonology and orthography (e.g., rhymed and were spelled similarly); incongruent trials matched only on phonology (e.g., rhymed but were spelled differently). Children with higher reading and spelling skills showed an increased congruency effect in the PT specifically for crossmodal word judgment trials, but not for any other condition.

In a follow-up investigation, only typically developing children, and not children with dyslexia, showed a positive relationship between phonemic awareness skill (i.e., elision) and crossmodal congruent versus incongruent pSTS activity (McNorgan, Randazzo-Wagner, & Booth, 2013); neither group showed such a relationship for unimodal auditory trials. Phonemic awareness thus influenced typically developing but not dyslexic children’s sensitivity to audiovisual integration.

The authors propose that phonemic awareness’ influence on audiovisual activity may indicate the impact of auditory information on orthographic acquisition, as in learning the alphabetic principle.

Other areas implicated in crossmodal integration

While the pSTS may be the storage site for integrated heteromodal representations, allowing for fluent access to this information, the inferior frontal gyrus (IFG) may be important in the creation of such unified representations (Willems, Ozyurek, & Hagoort, 2009). Unfamiliar audiovisual pairings typically demonstrate increased IFG activity, versus increased pSTS activity for familiar associations (Hein et al., 2007), potentially allowing for the recognition of a novel combination and determination of whether these representations should be linked (see also

Calvert, 2001). Several works have demonstrated that individuals with dyslexia show decreased activity in the IFG (see Richlan, Kronbichler, & Wimmer, 2009 for meta-analysis), especially for conflicting or non-matching presentations, though so far this activity has only been investigated within the visual modality (Bolger, Minas, Burman, & Booth, 2008; Cao, Bitan, Chou, Burman,

& Booth, 2006). Dynamic causal modeling found that typically developing children show greater modulatory effects of IPL activity on the IFG for trials in which orthography and phonology conflicted than for matching trials, while dyslexic participants did not show such a condition difference (Cao, Bitan, & Booth, 2008), perhaps implying that children with dyslexia have deficient integrated representations and are thus less influenced by crossmodal conflict. The IPL and the IFG thus appear to be sensitive to crossmodal processing in typical development, but not in adults or children with dyslexia. This undifferentiated crossmodal versus unimodal responsivity likely impacts the acquisition of and ability to implement grapheme–phoneme knowledge.

Other regions, such as the FG, may also be influenced by information from a complementary modality, more so in typical than atypical development. The FG has been proposed to be fundamentally multisensory in nature (Price & Devlin, 2003), though within work on reading it is also thought of as particularly responsive to letter combinations (Dehaene, Le

Clec'H, Poline, Le Bihan, & Cohen, 2002; McCandliss, Cohen, & Dehaene, 2003). In either case, good readers’ FG activities do appear to be modulated by non-visual information, whether presented alone or in combination with visual stimuli. Unimodal auditory-only word processing has been demonstrated to impact FG activity in typical but not dyslexic readers (Bolger et al.,

2008; Desroches et al., 2010; Richlan, 2012). FG activity may also be enhanced for congruent visual stimuli in an auditory task, indicating an automatic influence between modalities (Blau et al., 2008). Further, these effects may be graded by skill, as better readers show stronger crossmodal suppression in the FG (McNorgan & Booth, 2015). Learning to read may thus drive the FG’s specialization for orthography but also promote this crossmodal sensitivity. Indeed, the mid-FG’s print sensitivity appears along the same timeline as letter–speech-sound association learning: kindergarten phoneme–grapheme training leads to longitudinal increases in FG activity for words (Brem et al., 2010), indicating not only the importance of the acquisition of each modality’s information but also the crossmodal linkage for this activity. Together, this work may be taken as evidence of a more automatic use of crossmodal information, or more developed integrated representations themselves, in good as compared to poorer readers, even when not required by the task or the stimuli themselves.

Structurally, the arcuate fasciculus (AF) is a strong candidate for a tract likely to be important for crossmodal as compared to unimodal processing. The AF arcs from the temporal to the inferior frontal lobe, thus connecting two major integrative regions.

Figure 11.3: Individuals with higher crossmodal (audiovisual) word rhyming activity in the posterior superior temporal sulcus demonstrate increased fractional anisotropy in frontal and parietal clusters along the AF. No significant relationships between unimodal visual or auditory posterior STS activity and AF fractional anisotropy were found. Source: Gullick & Booth 2014, Figure 5, p. 1340. Reproduced with permission of MIT Press Journals.

Multiple previous studies have shown AF coherence or connectivity to be lessened in individuals with dyslexia (Klingberg et al., 2000; Rimrodt, Peterson, Denckla, Kaufmann, & Cutting, 2010; Vandermosten, Boets,

Poelmans, et al., 2012; Vandermosten, Boets, Wouters, & Ghesquiere, 2012; see Chapter 6 by

Eckert for more details) or, more broadly, participants with poorer reading-related skills

(Beaulieu et al., 2005; Gold, Powell, Xuan, Jiang, & Hardy, 2007; Yeatman, Dougherty, Ben-

Shachar, & Wandell, 2012). Importantly, coherence in the AF was shown to have a significant positive relationship with children’s crossmodal (audiovisual) word rhyming activity in the pSTS, but no significant relationship with unimodal (auditory or visual) pSTS word rhyming activity, demonstrating specificity in the type of processing supported (Gullick & Booth, 2014, see Figure 11.3). These group-level coherence differences describe an underlying crossmodal-specific deficit in dyslexic readers. Further, tractographic segmentation of the AF into component subsections demonstrates that connectivity specifically along the direct section, which completes the temporal to frontal arc and thus is likely to particularly subserve this crossmodal functionality (Catani, Jones, & Ffytche, 2005; Vandermosten, Boets, Poelmans, et al., 2012), is uniquely predictive of both reading outcome scores and change in reading standard scores over a three-year interval (Gullick & Booth, 2015). The AF’s importance in reading-related crossmodal processing can thus be seen both initially and longitudinally.
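The logic of this structure-function analysis can be summarized in a short sketch: each participant contributes a mean fractional anisotropy value along the AF and condition-specific pSTS activation estimates, and the brain-brain correlation is computed separately for the crossmodal and unimodal conditions. The data below are simulated, and the analysis is deliberately simplified relative to the voxelwise, cluster-based approach used in the original study.

```python
# A minimal sketch of the individual-differences logic described above:
# correlate each participant's mean fractional anisotropy (FA) along the AF
# with condition-specific pSTS activation estimates, separately for the
# crossmodal and unimodal conditions. All values are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sub = 30
fa_af = rng.normal(0.45, 0.04, n_sub)                            # mean FA along the AF
pSTS_av = 2.0 + 20 * (fa_af - 0.45) + rng.normal(0, 0.5, n_sub)  # crossmodal activity
pSTS_aud = rng.normal(1.0, 0.5, n_sub)                           # auditory-only activity
pSTS_vis = rng.normal(1.0, 0.5, n_sub)                           # visual-only activity

for label, activity in [("audiovisual", pSTS_av),
                        ("auditory   ", pSTS_aud),
                        ("visual     ", pSTS_vis)]:
    r, p = stats.pearsonr(fa_af, activity)
    print(f"FA vs {label} pSTS activity: r = {r:+.2f}, p = {p:.3f}")
# A reliable correlation only in the audiovisual condition would parallel the
# crossmodal-specific FA-activity relationship described above.
```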

Event-related potentials of crossmodal integration in reading

MRI work has thus established both group and individual differences particularly for crossmodal processing, especially in the posterior STS and PT, though potentially also in the IFG,

FG, and AF. Event-related potentials, though, can demonstrate the timing of these divergences through similar modality comparisons. Generally, there appear to be both early and later differences in responses to crossmodal information between reading groups, reflecting potential impairments at multiple stages of processing. For example, adults with dyslexia may not demonstrate as much benefit, or any benefit, from the presence of synchronous auditory stimuli in the processing of visual information in both earlier (170-240 msec) and later (post-450 msec) time periods, as typically developing adults do (Sela, 2014). This group difference demonstrates that the influence of one modality on the processing of the other is not equivalent between groups, potentially resulting in the integration difficulties seen in reading.

The mismatch negativity (MMN) is a prominent negative-going waveform, peaking at around 170 ms for simple auditory stimuli in adults. It is thought to index automatic evaluation of whether a stimulus is expected, or matches, its context: “deviant” stimuli, or unexpected nontarget sounds, evoke a greater potential approximately 170 msec after their presentation

(Luck & Kappenman, 2011). Froyen, Van Atteveldt, Bonte, and Blomert (2008) established an

MMN for letter–speech-sound pairs in typically developing adults, wherein the MMN was enhanced, or larger, for crossmodal letter-sound than sound-alone deviants. Similar to the fMRI findings, this crossmodal enhancement indicates an early, automatic matching of sounds with their letters, and thus an early, automatic influence of one on the processing of the other.
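The sketch below illustrates how such an MMN effect is typically quantified: epochs are averaged separately for standard and deviant trials, a deviant-minus-standard difference wave is computed, and its mean amplitude is compared across conditions in a window around the expected peak. The waveforms, window, and amplitudes here are simulated and arbitrary rather than the parameters used in the studies cited above.

```python
# A minimal sketch of how an MMN effect can be quantified: average the epochs
# for standard and deviant trials, take the deviant-minus-standard difference
# wave, and compare its mean amplitude in a window around the expected peak.
# The "epochs" are simulated Gaussian deflections plus noise, not real EEG.
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                  # sampling rate in Hz
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch time axis in seconds

def simulate_epochs(n_trials, mmn_amp):
    """Simulate epochs with a negative deflection peaking near 170 ms."""
    bump = -mmn_amp * np.exp(-((t - 0.17) ** 2) / (2 * 0.03 ** 2))
    return bump + rng.normal(0, 1.0, size=(n_trials, t.size))

standards    = simulate_epochs(400, mmn_amp=0.0)
deviants_aud = simulate_epochs(80,  mmn_amp=2.0)   # sound-alone deviants
deviants_av  = simulate_epochs(80,  mmn_amp=3.0)   # letter + sound deviants

def mmn_amplitude(deviants, standards, win=(0.14, 0.20)):
    diff = deviants.mean(axis=0) - standards.mean(axis=0)   # difference wave
    mask = (t >= win[0]) & (t <= win[1])
    return diff[mask].mean()                                # mean amplitude in window

print("auditory-only MMN:", round(mmn_amplitude(deviants_aud, standards), 2))
print("audiovisual MMN  :", round(mmn_amplitude(deviants_av, standards), 2))
# A more negative audiovisual than auditory-only MMN would mirror the
# crossmodal enhancement reported for typical readers.
```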

Individuals with dyslexia, though, do not show such automatic gain. First, both typically developing and dyslexic children demonstrated adult-like MMNs for sound-alone deviants, indicating normal auditory or phonemic discrimination. However, only typically developing children demonstrated an enhanced MMN for audiovisual as compared to auditory-alone deviants, though at a longer stimulus-onset asynchrony than that for adults: children with dyslexia did not show any modality difference at either tested stimulus-onset asynchrony (Froyen et al., 2011). Similarly, adults with dyslexia continued to show no effect of non-matching visual material on the MMN for auditory syllables, unlike control participants (Mittag, Thesleff,

Laasonen, & Kujala, 2013). These group differences are critical in establishing that individuals with dyslexia do not demonstrate any early influence of the presence or absence of a grapheme cue on phoneme processing, indicating that they do not have automatic access to integrated representations even after four or more years of reading instruction (Froyen et al., 2011).

Later evoked potentials may also reflect impaired crossmodal processing in individuals with dyslexia. The N300 may be related to access of phonological information for orthographic stimuli, as in the internal modality translations needed for successful reading. Children with dyslexia demonstrated decreased amplitude and a longer latency for the N300 than did typically developing children when evaluating phonology-orthography matches, but no group differences were found for orthography-orthography match evaluation (Hasko, Bruder, Bartling, & Schulte-

Korne, 2012). First graders with dyslexia demonstrated a delayed and reduced N2b for non-linguistic sound-symbol matches relative to typically developing children (Widmann, Schroeger,

Tervaniemi, Pakarinen, & Kujala, 2012), potentially indicating that individuals with dyslexia had later and less reliable processing of crossmodal congruency, perhaps because they did not create integrated representations for these items.

Crossmodal integration in audiovisual speech processing

Individuals with dyslexia have been demonstrated to show impaired speech perception and processing, in both auditory-only and audiovisual paradigms (see Chapter 8 in this volume), though this may characterize only a subset of dyslexic children at an early stage (Harm &

Seidenberg, 1999, p. 515; Manis et al., 1997). Poorer readers may have weaker categorical perception of audiovisual syllables than control participants (de Gelder & Vroomen, 1998;

Veuillet, Magnan, Ecalle, Thai-Van, & Collet, 2007) and may show reduced audiovisual integration, evidenced by a lessened McGurk effect (a perceptual illusion wherein conflicting visual and auditory speech information produce a perceived sound different from either; see

McGurk & MacDonald, 1976) and fewer reported “fusion” or combination sounds for incongruent audiovisual stimuli (Hayes, Tiippana, Nicol, Sams, & Kraus, 2003). Neurally,

Rüsseler, Ye, Gerth, Szycik, and Münte (2017) found that individuals with dyslexia demonstrated decreased activation relative to fluent readers in several brain regions (which included the left superior temporal sulcus, inferior frontal gyrus, and fusiform gyrus) for auditory (disyllabic word)-visual (mouth movements) speech, and abnormal activations for incongruous as compared to congruous presentations. Dyslexic readers may thus be less able to recruit audio-visual integration areas during multisensory processing, resulting in general audiovisual behavioral difficulties. However, recent work from Megnin-Viggars and Goswami (2013) showed that individuals with dyslexia were able to incorporate visual temporal information into vocoded speech recognition, with performances similar to those of control participants. Individuals with dyslexia may thus especially draw on motor-articulatory strategies for audiovisual speech processing, as they demonstrate relatively increased activity in these areas as compared to fluent readers, potentially to compensate for unimodal perception or crossmodal integration deficits

(Pekkola et al., 2006). However, audiovisual integration in speech may be quite different from that needed for reading, as auditory information is both necessary and sufficient for speech comprehension, and the two modalities’ integration streams share a natural instead of an arbitrary association.

Under the “temporal sampling framework” proposed by Goswami (see Goswami, 2011, and Chapter

8), individuals with dyslexia demonstrate impaired processing of temporal modulations in the speech and syllable band, including decreased amplitude modulation sensitivity. These deficits could result in the manifested difficulties in phonetic and syllabic speech processing, as well as audiovisual integration. Individuals with dyslexia may be impaired on the phonological discriminations needed for phoneme–grapheme mappings, thus impacting their ability to create useful integrated representations. Successful audiovisual association may thus require temporal coincidence parameters beyond those needed by typically developing children.

Crossmodal interventions in dyslexia

Given how critical crossmodal integration is for successful reading, and considering the preponderance of evidence that individuals with dyslexia demonstrate a pronounced and consistent deficit in such integration, it is not surprising that intervention and training programs that focus on integrative strategies seem to be particularly effective. Connectionist models demonstrated that explicit letter-sound training was necessary for effective intervention once any print exposure had occurred for both phonological and hidden unit damage (Harm et al., 2003).

In experimental work, meta-analyses conducted by Bus and van IJzendoorn (1999) and Ehri,

Nunes, Stahl, and Willows (2001) confirmed that interventions including letter practice, whether alone or in combination with other reading-related skills, were more effective than programs that only focused on phonological awareness. As such, building phonology skills in isolation does not provide sufficient reading gain to be considered complete treatment. Bus and van IJzendoorn suggest that children with reading difficulties may “need additional support for linking phonological processes to reading” (p. 407). Letter–sound correspondences must thus be practiced to build, reinforce, and fluently implement integrated crossmodal representations.

Several groups have tested integration-based interventions for children with dyslexia, including phoneme–grapheme mapping training (Aylward et al., 2003), alphabetic principle training (B. A. Shaywitz et al., 2004), use of the PhonoGraphix program (Simos et al., 2002), and use of a computerized audiovisual syllable recognition training program (Magnan & Ecalle,

2006). These studies have demonstrated significant gains for their participants and normalization of reading-related activity (for more details on brain imaging studies on reading intervention please see Chapter 22). Audiovisual speech training for categorical perception of voice onset time, in which children were asked to match auditory syllables to the corresponding grapheme, also resulted in significant improvements for reading scores (Veuillet et al., 2007). Kujala et al.

(2001) have also shown that even nonlinguistic audiovisual training may lead to improvements in word reading and an increased MMN for deviant tones, pointing to a general audiovisual deficit that is addressed by these trainings. Thus, while not explicitly contrasted with auditory- or phonology-only programs, these works demonstrate the importance of explicit integration training for strongly connecting print with phonology, leading to reading recovery.

Conclusions and future directions for research

Crossmodal processing is critical for successful reading and may have a causal role in developmental dyslexia. This skill is subserved by specific neural systems, most prominently the posterior STS and the AF, which may be neurally indexed by functional activity and structural coherence. Individuals with dyslexia demonstrate decreased responsivity specifically for crossmodal conditions, including reading itself. Interventions that include a strong crossmodal component such as phonics or alphabetic principle training have been repeatedly demonstrated to produce the greatest gains in reading. The evidence for a crossmodal integration deficit as a causal factor in the reading difficulties seen in developmental dyslexia is thus substantial, with behavioral and neuroimaging findings converging on a consistent reduction in crossmodal response accuracy, activity, and sensitivity. Even so, the current body of literature has left open several questions that may be critically informative in determining the level of causality and type of crossmodal information processing deficit in dyslexia.

First, it is important to determine whether the crossmodal processing deficits seen in dyslexia stem from impaired external association acquisition or the later internal application of this information. Whether dyslexia is best characterized as deficiencies at one or both stages is still an open question: participants could be especially slow to reach criterion on a training task where both modalities’ information is presented but perform well once these associations have been learned, or could show deficits in only fluent or timed implementation of this information.

Current evidence could support either stage. Froyen et al.’s (2011) data demonstrating comparable accuracies but failure for response times to decrease with age for letter–sound associations in children with dyslexia, and similar work indicating that dyslexics may show decreased post-acquisition crossmodal influence (Landerl et al., 1996; McNorgan, Randazzo-

Wagner, et al., 2013; Meyler & Breznitz, 2003) and phoneme–grapheme congruency sensitivity

(Blau et al., 2010), could point to an implementation deficit. In contrast, behavioral work has shown that even this simple association formation may be difficult, as in crossmodal paired-associate learning (Hulme et al., 2007; Messbauer & de Jong, 2003), and thus is predictive of reading impairment. These two stages, though, have not yet been specifically compared. If the fluent usage of stored mappings is difficult, retrieval practice may be beneficial, but if the association itself is not as well learned, increased input training could be of more utility.

Differences between a potential inherent deficit in the acquisition or use of multisensory associations and a lack of prior experience or practice with the internal modality translations required in reading should be explored. Growth-curve analyses could demonstrate whether there are differences in learning rates between individuals with dyslexia and control peers after typical exposure and instruction. Shallower slopes would indicate continued difficulty with multisensory information usage, resulting in an increasing gap between typical and atypical performance, suggesting a qualitative difference in multisensory processing and an altered learning mechanism. In contrast, individuals with dyslexia may show similar slopes but may asymptote earlier, thus never reaching typical performance levels, potentially reflecting use of typical mechanisms; steeper slopes could result from processing normalization after adequate experience given intact underlying mechanisms (see Francis, Schatschneider, & Carlson, 2000, for further discussion). Longitudinal fMRI designs may be particularly well suited to growth curve analyses, as different reading-related or integrative areas or structures may demonstrate different slopes or patterns over time. Since multiple time points are required for modeling, such reports are currently quite infrequent in developmental reading work (see Ben-Shachar, Dougherty,

Deutsch, & Wandell, 2011) and even rarer in studies of dyslexia (see Francis, Shaywitz,

Stuebing, Shaywitz, & Fletcher, 1996; S. E. Shaywitz et al., 1999, for examples with behavioral data), but this approach may be useful for parsing out which areas, and thus potentially which mechanisms, show deficit versus lag or delay responses.
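The growth-curve logic sketched above can be illustrated with a simple curve fit: estimate a learning rate and an asymptote from each group’s longitudinal scores and compare the parameters. The data, growth function, and group scenarios below are simulated and purely illustrative; an actual analysis would use individual-level mixed-effects growth-curve models over the multiple time points described.

```python
# A minimal sketch of the growth-curve logic described above: fit a saturating
# growth function to longitudinal reading scores for each group and compare
# the estimated learning rate and asymptote. All scores are simulated; a real
# analysis would fit individual-level mixed-effects growth-curve models.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
years = np.tile(np.arange(0, 6), 20)     # 20 children per group, 6 annual time points

def growth(t, asymptote, rate):
    """Negative-exponential growth toward an asymptote."""
    return asymptote * (1 - np.exp(-rate * t))

# Two illustrative patterns for a dyslexic group relative to controls:
scores = {
    "control               ": growth(years, 100, 0.60) + rng.normal(0, 5, years.size),
    "dyslexic: slower rate  ": growth(years, 100, 0.25) + rng.normal(0, 5, years.size),
    "dyslexic: lower ceiling": growth(years, 75, 0.60) + rng.normal(0, 5, years.size),
}

for label, y in scores.items():
    (asym, rate), _ = curve_fit(growth, years, y, p0=[80.0, 0.5])
    print(f"{label} asymptote = {asym:5.1f}, rate = {rate:.2f}")
# A shallower rate with a comparable asymptote versus a comparable rate with a
# lower asymptote correspond to the two contrasting patterns outlined above.
```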

The direction of crossmodal impairment also needs direct investigation. Reading requires an internal visual-to-auditory translation but is learned by associating familiar auditory forms with new visual information. Snowling’s (1980) and Harrar et al.’s (2014) findings that visual-to-auditory processing is particularly hard for individuals with dyslexia could mean that the fluent use, but not necessarily formation, of crossmodal associations is problematic specifically for this direction. Effects of the order of crossmodal information presentation have not yet been systematically examined, especially in neuroimaging work. Several studies have used simultaneous bimodal presentations (Blau et al., 2009; van Atteveldt et al., 2004), but those with sequential paradigms generally test only one direction. A directionally specific deficit would have significant implications for intervention work. First, such a difference would imply that auditory processing itself, and the internal translation from auditory to visual, is intact, but that starting with visual information is the major issue. If this were the case, auditory-focused training could be supplemented with deficit-focused programs. Second, relative strengths in auditory-to-visual translation could be used as a starting point for intervention. If auditory-to-visual is easier, perhaps because it begins with the more familiar information, additional training with the graphemes themselves could serve to increase their familiarity and thus potentially improve automatic association retrieval.

The relative influences of top-down versus bottom-up attention on crossmodal learning in individuals with dyslexia have not been specifically compared. Reading may especially require externally directed top-down attention for letter–sound relationships to be recognized and remembered (Talsma et al., 2010), perhaps indexed as increased IFG activity for unfamiliar crossmodal phoneme–grapheme pairings (Calvert, 2001; Hein et al., 2007). Harrar et al. (2014) proposed that the difficulties in visual-to-auditory crossmodal matching were due to sluggish attention shifting particularly in this direction. The impact of domain-general attention training, or of externally guided attention to the relevant features needed for discrimination, could be examined.

The role of semantics in crossmodal processing has been largely ignored in the current neuroimaging literature but could be an interesting alternative route for intervention. As proposed by the connectionist models, semantics may act as a compensatory mechanism for crossmodal processing difficulties, where intact semantic knowledge may be used to bolster reading, especially for low-frequency and exception words (Plaut & Booth, 2000). Thus, focused semantics training for low-frequency or exception words could improve the automaticity of access to their integrated representations; such training may compound the improvements gained through crossmodal phonics-based interventions or could be similarly effective for certain participants.

Finally, the differential impact of deficits in unimodal domains, such as phonological, orthographic, and semantic processing, on multisensory integration could be examined.

Connectionist models have established how phonological or hidden unit module impairments could affect reading, but specific comparisons between unimodal damages should be assessed to determine how each may impact reading-related integration. Such separable cases could relate to the idea that dyslexia has come to represent a broad spectrum of reading impairments instead of one uniform type. If this were to be the case, some individuals with dyslexia may show an integration-based deficit, while for others the root impairment is phonological or orthographic. In each case, unimodal difficulty may still affect reading and especially crossmodal processing, but the pattern and underlying mechanism of the deficit may differ. Future research may aim to look more at individual differences in these domains to determine the heterogeneity of dyslexia, and thus the variety of underlying differences, including how each deficit impacts crossmodal integration.

References

References

Alais, D., Newell, F. N., & Mamassian, P. (2010). Multisensory processing in review: From physiology to behaviour. Seeing and Perceiving, 23(1), 3-38. doi:10.1163/187847510X488603
Aravena, S., Snellings, P., Tijms, J., & van der Molen, M. W. (2013). A lab-controlled simulation of a letter-speech sound binding deficit in dyslexia. Journal of Experimental Child Psychology, 115(4), 691-707. doi:10.1016/j.jecp.2013.03.009
Aylward, E. H., Richards, T. L., Berninger, V. W., Nagy, W. E., Field, K. M., Grimme, A. C., . . . Cramer, S. C. (2003). Instructional treatment associated with changes in brain activation in children with dyslexia. Neurology, 61(2), 212-219. doi:10.1212/01.WNL.0000068363.05974.64
Beauchamp, M. S., Lee, K. E., Argall, B. D., & Martin, A. (2004). Integration of auditory and visual information about objects in superior temporal sulcus. Neuron, 41(5), 809-823. doi:10.1016/S0896-6273(04)00070-4
Beaulieu, C., Plewes, C., Paulson, L. A., Roy, D., Snook, L., Concha, L., & Phillips, L. (2005). Imaging brain connectivity in children with diverse reading ability. Neuroimage, 25(4), 1266-1271. doi:10.1016/j.neuroimage.2004.12.053
Ben-Shachar, M., Dougherty, R. F., Deutsch, G. K., & Wandell, B. A. (2011). The development of cortical sensitivity to visual word forms. Journal of Cognitive Neuroscience, 23(9), 2387-2399. doi:10.1162/jocn.2011.21615
Blau, V., Reithler, J., van Atteveldt, N., Seitz, J., Gerretsen, P., Goebel, R., & Blomert, L. (2010). Deviant processing of letters and speech sounds as proximate cause of reading failure: A functional magnetic resonance imaging study of dyslexic children. Brain, 133, 868-879. doi:10.1093/brain/awp308
Blau, V., van Atteveldt, N., Ekkebus, M., Goebel, R., & Blomert, L. (2009). Reduced neural integration of letters and speech sounds links phonological and reading deficits in adult dyslexia. Current Biology, 19(6), 503-508. doi:10.1016/j.cub.2009.01.065
Blau, V., van Atteveldt, N., Formisano, E., Goebel, R., & Blomert, L. (2008). Task-irrelevant visual letters interact with the processing of speech sounds in heteromodal and unimodal cortex. European Journal of Neuroscience, 28(3), 500-509. doi:10.1111/j.1460-9568.2008.06350.x
Blomert, L., & Willems, G. (2010). Is there a causal link from a phonological awareness deficit to reading failure in children at familial risk for dyslexia? Dyslexia, 16(4), 300-317. doi:10.1002/dys.405
Boets, B., Wouters, J., van Wieringen, A., De Smedt, B., & Ghesquiere, P. (2008). Modelling relations between sensory processing, speech perception, orthographic and phonological ability, and literacy achievement. Brain and Language, 106(1), 29-40. doi:10.1016/j.bandl.2007.12.004
Bolger, D. J., Minas, J., Burman, D. D., & Booth, J. R. (2008). Differential effects of orthographic and phonological consistency in cortex for children with and without reading impairment. Neuropsychologia, 46(14), 3210-3224. doi:10.1016/j.neuropsychologia.2008.07.024
Brem, S., Bach, S., Kucian, K., Guttorm, T. K., Martin, E., Lyytinen, H., . . . Richardson, U. (2010). Brain sensitivity to print emerges when children learn letter-speech sound correspondences. Proceedings of the National Academy of Sciences of the United States of America, 107(17), 7939-7944. doi:10.1073/pnas.0904402107
Bus, A. G., & van IJzendoorn, M. H. (1999). Phonological awareness and early reading: A meta-analysis of experimental training studies. Journal of Educational Psychology, 91(3), 403-414. doi:10.1037/0022-0663.91.3.403
Calvert, G. A. (2001). Crossmodal processing in the human brain: Insights from functional neuroimaging studies. Cerebral Cortex, 11(12), 1110-1123. doi:10.1093/cercor/11.12.1110
Cao, F., Bitan, T., & Booth, J. R. (2008). Effective brain connectivity in children with reading difficulties during phonological processing. Brain and Language, 107(2), 91-101. doi:10.1016/j.bandl.2007.12.009
Cao, F., Bitan, T., Chou, T. L., Burman, D. D., & Booth, J. R. (2006). Deficient orthographic and phonological representations in children with dyslexia revealed by brain activation patterns. Journal of Child Psychology and Psychiatry, 47(10), 1041-1050. doi:10.1111/j.1469-7610.2006.01684.x
Catani, M., Jones, D. K., & Ffytche, D. H. (2005). Perisylvian language networks of the human brain. Annals of Neurology, 57(1), 8-16. doi:10.1002/ana.20319
Cuppini, C., Magosso, E., & Ursino, M. (2011). Organization, maturation, and plasticity of multisensory integration: Insights from computational modeling studies. Frontiers in Psychology, 2. doi:10.3389/fpsyg.2011.00077
de Gelder, B., & Vroomen, J. (1998). Impaired speech perception in poor readers: Evidence from and speech reading. Brain and Language, 64(3), 269-281. doi:10.1006/brln.1998.1973
Dehaene, S., Le Clec'H, G., Poline, J. B., Le Bihan, D., & Cohen, L. (2002). The visual word form area: A prelexical representation of visual words in the fusiform gyrus. Neuroreport, 13(3), 321-325. doi:10.1097/00001756-200203040-00015
Desroches, A. S., Cone, N. E., Bolger, D. J., Bitan, T., Burman, D. D., & Booth, J. R. (2010). Children with reading difficulties show differences in brain regions associated with orthographic processing during spoken language processing. Brain Research, 1356, 73-84. doi:10.1016/j.brainres.2010.07.097
Ehri, L. C., Nunes, S. R., Stahl, S. A., & Willows, D. M. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's meta-analysis. Review of Educational Research, 71(3), 393-447. doi:10.3102/00346543071003393
Ellefson, M. R., Treiman, R., & Kessler, B. (2009). Learning to label letters by sounds or names: A comparison of England and the United States. Journal of Experimental Child Psychology, 102(3), 323-341. doi:10.1016/j.jecp.2008.05.008
Evans, M. A., Bell, M., Shaw, D., Moretti, S., & Page, J. (2006). Letter names, letter sounds and phonological awareness: An examination of kindergarten children across letters and of letters across children. Reading and Writing, 19(9), 959-989. doi:10.1007/s11145-006-9026-x
Foulin, J. N. (2005). Why is letter-name knowledge such a good predictor of learning to read? Reading and Writing, 18(2), 129-155. doi:10.1007/s11145-004-5892-2
Francis, D. J., Schatschneider, C., & Carlson, C. D. (2000). Introduction to individual growth curve analysis. In D. Drotar (Ed.), Handbook of research in pediatric and clinical child psychology (pp. 51-73). New York, NY: Springer.
Francis, D. J., Shaywitz, S. E., Stuebing, K. K., Shaywitz, B. A., & Fletcher, J. M. (1996). Developmental lag versus deficit models of reading disability: A longitudinal, individual growth curves analysis. Journal of Educational Psychology, 88(1), 3-17. doi:10.1037/0022-0663.88.1.3
Froyen, D. J. W., Bonte, M. L., van Atteveldt, N., & Blomert, L. (2009). The long road to automation: Neurocognitive development of letter-speech sound processing. Journal of Cognitive Neuroscience, 21(3), 567-580. doi:10.1162/jocn.2009.21061
Froyen, D., van Atteveldt, N., Bonte, M., & Blomert, L. (2008). Cross-modal enhancement of the MMN to speech-sounds indicates early and automatic integration of letters and speech-sounds. Neuroscience Letters, 430(1), 23-28. doi:10.1016/j.neulet.2007.10.014
Froyen, D., Willems, G., & Blomert, L. (2011). Evidence for a specific cross-modal association deficit in dyslexia: An electrophysiological study of letter speech sound processing. Developmental Science, 14(4), 635-648. doi:10.1111/j.1467-7687.2010.01007.x
Gold, B. T., Powell, D. K., Xuan, L., Jiang, Y., & Hardy, P. A. (2007). Speed of lexical decision correlates with diffusion anisotropy in left parietal and frontal white matter: Evidence from diffusion tensor imaging. Neuropsychologia, 45(11), 2439-2446. doi:10.1016/j.neuropsychologia.2007.04.011
Goswami, U. (2011). A temporal sampling framework for developmental dyslexia. Trends in Cognitive Sciences, 15(1), 3-10. doi:10.1016/j.tics.2010.10.001
Gullick, M. M., & Booth, J. R. (2014). Individual differences in crossmodal brain activity predict arcuate fasciculus connectivity in developing readers. Journal of Cognitive Neuroscience, 26(7), 1331-1346. doi:10.1162/jocn_a_00581
Gullick, M. M., & Booth, J. R. (2015). The direct segment of the arcuate fasciculus is predictive of longitudinal reading change. Developmental Cognitive Neuroscience. Advance online publication. doi:10.1016/j.dcn.2015.05.002
Harm, M. W., McCandliss, B. D., & Seidenberg, M. S. (2003). Modeling the successes and failures of interventions for disabled readers. Scientific Studies of Reading, 7(2), 155-182. doi:10.1207/S1532799xssr0702_3
Harm, M. W., & Seidenberg, M. S. (1999). Phonology, reading acquisition, and dyslexia: Insights from connectionist models. Psychological Review, 106(3), 491-528. doi:10.1037/0033-295X.106.3.491
Harrar, V., Tammam, J., Perez-Bellido, A., Pitt, A., Stein, J., & Spence, C. (2014). Multisensory integration and attention in developmental dyslexia. Current Biology, 24(5), 531-535. doi:10.1016/j.cub.2014.01.029
Hasko, S., Bruder, J., Bartling, J., & Schulte-Korne, G. (2012). N300 indexes deficient integration of orthographic and phonological representations in children with dyslexia. Neuropsychologia, 50(5), 640-654. doi:10.1016/j.neuropsychologia.2012.01.001
Hayes, E. A., Tiippana, K., Nicol, T. G., Sams, M., & Kraus, N. (2003). Integration of heard and seen speech: A factor in learning disabilities in children. Neuroscience Letters, 351(1), 46-50. doi:10.1016/S0304-3940(03)00971-6
Hein, G., Doehrmann, O., Muller, N. G., Kaiser, J., Muckli, L., & Naumer, M. J. (2007). Object familiarity and semantic congruency modulate responses in cortical audiovisual integration areas. Journal of Neuroscience, 27(30), 7881-7887. doi:10.1523/jneurosci.1740-07.2007
Hulme, C., Goetz, K., Gooch, D., Adams, J., & Snowling, M. J. (2007). Paired-associate learning, phoneme awareness, and learning to read. Journal of Experimental Child Psychology, 96(2), 150-166. doi:10.1016/j.jecp.2006.09.002
Kast, M., Bezzola, L., Jancke, L., & Meyer, M. (2011). Multi- and unisensory decoding of words and nonwords result in differential brain responses in dyslexic and nondyslexic adults. Brain and Language, 119(3), 136-148. doi:10.1016/j.bandl.2011.04.002
Klingberg, T., Hedehus, M., Temple, E., Salz, T., Gabrieli, J. D. E., Moseley, M. E., & Poldrack, R. A. (2000). Microstructure of temporo-parietal white matter as a basis for reading ability: Evidence from diffusion tensor magnetic resonance imaging. Neuron, 25(2), 493-500. doi:10.1016/S0896-6273(00)80911-3
Kujala, T., Karma, K., Ceponiene, R., Belitz, S., Turkkila, P., Tervaniemi, M., & Naatanen, R. (2001). Plastic neural changes and reading improvement caused by audiovisual training in reading-impaired children. Proceedings of the National Academy of Sciences of the United States of America, 98(18), 10509-10514. doi:10.1073/pnas.181589198
Laasonen, M., Tomma-Halme, J., Lahti-Nuuttila, P., Service, E., & Virsu, V. (2000). Rate of information segregation in developmentally dyslexic children. Brain and Language, 75(1), 66-81. doi:10.1006/brln.2000.2326
Landerl, K., Frith, U., & Wimmer, H. (1996). Intrusion of orthographic knowledge on phoneme awareness: Strong in normal readers, weak in dyslexic readers. Applied Psycholinguistics, 17(1), 1-14. doi:10.1017/S0142716400009437
Luck, S. J., & Kappenman, E. S. (Eds.). (2011). The Oxford handbook of event-related potential components (1st ed.). New York, NY: Oxford University Press.
Magnan, A., & Ecalle, J. (2006). Audio-visual training in children with reading disabilities. Computers & Education, 46(4), 407-425. doi:10.1016/j.compedu.2004.08.008
Manis, F. R., McBride-Chang, C., Seidenberg, M. S., Keating, P., Doi, L. M., Munson, B., & Petersen, A. (1997). Are speech perception deficits associated with developmental dyslexia? Journal of Experimental Child Psychology, 66(2), 211-235. doi:10.1006/jecp.1997.2383
Martin, J. G., Meredith, M. A., & Ahmad, K. (2009). Modeling multisensory enhancement with self-organizing maps. Frontiers in Computational Neuroscience, 3, 8. doi:10.3389/neuro.10.008.2009
McCandliss, B. D., Cohen, L., & Dehaene, S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7(7), 293-299. doi:10.1016/S1364-6613(03)00134-7
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746-748. doi:10.1038/264746a0
McNorgan, C., Awati, N., Desroches, A. S., & Booth, J. R. (2014). Multimodal lexical processing in auditory cortex is literacy skill dependent. Cerebral Cortex, 24(9), 2464-2475. doi:10.1093/cercor/bht100
McNorgan, C., & Booth, J. R. (2015). Skill dependent audiovisual integration in the fusiform induces repetition suppression. Brain and Language, 141, 110-123. doi:10.1016/j.bandl.2014.12.002
McNorgan, C., Randazzo-Wagner, M., & Booth, J. R. (2013). Cross-modal integration in the brain is related to phonological awareness only in typical readers, not in those with reading difficulty. Frontiers in Human Neuroscience, 7, 388. doi:10.3389/fnhum.2013.00388
Megnin-Viggars, O., & Goswami, U. (2013). Audiovisual perception of noise vocoded speech in dyslexic and non-dyslexic adults: The role of low-frequency visual modulations. Brain and Language, 124(2), 165-173. doi:10.1016/j.bandl.2012.12.002
Messbauer, V. C. S., & de Jong, P. F. (2003). Word, nonword, and visual paired associate learning in Dutch dyslexic children. Journal of Experimental Child Psychology, 84(2), 77-96. doi:10.1016/S0022-0965(02)00179-0
Meyler, A., & Breznitz, Z. (2003). Processing of phonological, orthographic and cross-modal word representations among adult dyslexic and normal readers. Reading and Writing, 16(8), 785-803. doi:10.1023/A:1027336027211
Mittag, M., Thesleff, P., Laasonen, M., & Kujala, T. (2013). The neurophysiological basis of the integration of written and heard syllables in dyslexic adults. Clinical Neurophysiology, 124(2), 315-326. doi:10.1016/j.clinph.2012.08.003
Papadopoulos, T. C., Spanoudis, G. C., & Georgiou, G. K. (2016). How is RAN related to reading fluency? A comprehensive examination of the prominent theoretical accounts. Frontiers in Psychology, 7, 1217. doi:10.3389/fpsyg.2016.01217
Pekkola, J., Laasonen, M., Ojanen, V., Autti, T., Jaaskelainen, I. P., Kujala, T., & Sams, M. (2006). Perception of matching and conflicting audiovisual speech in dyslexic and fluent readers: An fMRI study at 3 T. Neuroimage, 29(3), 797-807. doi:10.1016/j.neuroimage.2005.09.069
Plaut, D. C., & Booth, J. R. (2000). Individual and developmental differences in semantic priming: Empirical and computational support for a single-mechanism account of lexical processing. Psychological Review, 107(4), 786-823. doi:10.1037/0033-295X.107.4.786
Price, C. J., & Devlin, J. T. (2003). The myth of the visual word form area. Neuroimage, 19(3), 473-481. doi:10.1016/S1053-8119(03)00084-3
Richlan, F. (2012). Developmental dyslexia: Dysfunction of a left hemisphere reading network. Frontiers in Human Neuroscience, 6, 120. doi:10.3389/fnhum.2012.00120
Richlan, F., Kronbichler, M., & Wimmer, H. (2009). Functional abnormalities in the dyslexic brain: A quantitative meta-analysis of neuroimaging studies. Human Brain Mapping, 30(10), 3299-3308. doi:10.1002/hbm.20752
Rimrodt, S. L., Peterson, D. J., Denckla, M. B., Kaufmann, W. E., & Cutting, L. E. (2010). White matter microstructural differences linked to left perisylvian language network in children with dyslexia. Cortex, 46(6), 739-749. doi:10.1016/j.cortex.2009.07.008
Rüsseler, J., Ye, Z., Gerth, I., Szycik, G. R., & Münte, T. F. (2017). Audio-visual speech perception in adult readers with dyslexia: An fMRI study. Brain Imaging and Behavior. doi:10.1007/s11682-017-9694-y
Seilheimer, R. L., Rosenberg, A., & Angelaki, D. E. (2014). Models and processes of multisensory cue combination. Current Opinion in Neurobiology, 25, 38-46. doi:10.1016/j.conb.2013.11.008
Sela, I. (2014). Visual and auditory synchronization deficits among dyslexic readers as compared to non-impaired readers: A cross-correlation algorithm analysis. Frontiers in Human Neuroscience, 8, 1-12. doi:10.3389/fnhum.2014.00364
Shaywitz, B. A., Shaywitz, S. E., Blachman, B. A., Pugh, K. R., Fulbright, R. K., Skudlarski, P., . . . Gore, J. C. (2004). Development of left occipitotemporal systems for skilled reading in children after a phonologically-based intervention. Biological Psychiatry, 55(9), 926-933. doi:10.1016/j.biopsych.2003.12.019
Shaywitz, S. E., Fletcher, J. M., Holahan, J. M., Shneider, A. E., Marchione, K. E., Stuebing, K. K., . . . Shaywitz, B. A. (1999). Persistence of dyslexia: The Connecticut Longitudinal Study at adolescence. Pediatrics, 104(6), 1351-1359. doi:10.1542/peds.104.6.1351
Simos, P. G., Fletcher, J. M., Bergman, E., Breier, J. I., Foorman, B. R., Castillo, E. M., . . . Papanicolaou, A. C. (2002). Dyslexia-specific brain activation profile becomes normal following successful remedial training. Neurology, 58(8), 1203-1213. doi:10.1212/WNL.58.8.1203
Snowling, M. J. (1980). The development of grapheme–phoneme correspondence in normal and dyslexic readers. Journal of Experimental Child Psychology, 29(2), 294-305. doi:10.1016/0022-0965(80)90021-1
Stevenson, R. A., Ghose, D., Fister, J. K., Sarko, D. K., Altieri, N. A., Nidiffer, A. R., . . . Wallace, M. T. (2014). Identifying and quantifying multisensory integration: A tutorial review. Brain Topography, 27(6), 707-730. doi:10.1007/s10548-014-0365-7
Swanson, H. L., Trainin, G., Necoechea, D. M., & Hammill, D. D. (2003). Rapid naming, phonological awareness, and reading: A meta-analysis of the correlational evidence. Review of Educational Research, 73(4), 407-440. doi:10.3102/00346543073004407
Talsma, D., Senkowski, D., Soto-Faraco, S., & Woldorff, M. G. (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14(9), 400-410. doi:10.1016/j.tics.2010.06.008
van Atteveldt, N. M., Formisano, E., Goebel, R., & Blomert, L. (2004). Integration of letters and speech sounds in the human brain. Neuron, 43(2), 271-282. doi:10.1016/j.neuron.2004.06.025
van Atteveldt, N. M., Formisano, E., Blomert, L., & Goebel, R. (2007). The effect of temporal asynchrony on the multisensory integration of letters and speech sounds. Cerebral Cortex, 17(4), 962-974. doi:10.1093/cercor/bhl007
van Atteveldt, N. M., Formisano, E., Goebel, R., & Blomert, L. (2007). Top-down task effects overrule automatic multisensory responses to letter–sound pairs in auditory association cortex. Neuroimage, 36(4), 1345-1360. doi:10.1016/j.neuroimage.2007.03.065
Vandermosten, M., Boets, B., Poelmans, H., Sunaert, S., Wouters, J., & Ghesquiere, P. (2012). A tractography study in dyslexia: Neuroanatomic correlates of orthographic, phonological and speech processing. Brain, 135(3), 935-948. doi:10.1093/brain/awr363
Vandermosten, M., Boets, B., Wouters, J., & Ghesquiere, P. (2012). A qualitative and quantitative review of diffusion tensor imaging studies in reading and dyslexia. Neuroscience and Biobehavioral Reviews, 36(6), 1532-1552. doi:10.1016/j.neubiorev.2012.04.002
Veuillet, E., Magnan, A., Ecalle, J., Thai-Van, H., & Collet, L. (2007). Auditory processing disorder in children with reading disabilities: Effect of audiovisual training. Brain, 130(11), 2915-2928. doi:10.1093/brain/awm235
Walsh, D. J., Price, G. G., & Gillingham, M. G. (1988). The critical but transitory importance of letter naming. Reading Research Quarterly, 23(1), 108-122. doi:10.2307/747907
Werner, S., & Noppeney, U. (2010). Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization. Journal of Neuroscience, 30(7), 2662-2675. doi:10.1523/jneurosci.5091-09.2010
Widmann, A., Schroeger, E., Tervaniemi, M., Pakarinen, S., & Kujala, T. (2012). Mapping symbols to sounds: Electrophysiological correlates of the impaired reading process in dyslexia. Frontiers in Psychology, 3, 60. doi:10.3389/fpsyg.2012.00060
Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47(4), 1992-2004. doi:10.1016/j.neuroimage.2009.05.066
Woollams, A. M., Ralph, M. A. L., Plaut, D. C., & Patterson, K. (2007). SD-squared: On the association between semantic dementia and surface dyslexia. Psychological Review, 114(2), 316-339. doi:10.1037/0033-295X.114.2.316
Yeatman, J. D., Dougherty, R. F., Ben-Shachar, M., & Wandell, B. A. (2012). Development of white matter and reading skills. Proceedings of the National Academy of Sciences of the United States of America, 109(44), E3045-E3053. doi:10.1073/pnas.1206792109