
The Neurobiology of Reading Differs for Deaf and Hearing Adults

Karen Emmorey

Please cite as:

Emmorey, K. (2020). The neurobiology of reading differs for deaf and hearing adults. In M. Marschark & H. Knoors (Eds.), Oxford Handbook of Deaf Studies in Learning and Cognition (pp. 347–359). Oxford University Press.

Running head: The neurobiology of reading

Karen Emmorey, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, Suite 200, San Diego, CA 92120, USA. [email protected]

Acknowledgements: This work was supported by a grant from the National Institutes of Health (DC014246) and grants from the National Science Foundation (BCS-1651372; BCS-1756403).


Abstract

Recent neuroimaging and electrophysiological evidence reveal how the reading system successfully adapts when phonological codes are relatively coarse-grained due to reduced auditory input during development. New evidence suggests that the optimal end-state for the reading system may differ for deaf versus hearing adults and indicates that certain neural patterns that are maladaptive for hearing readers may be beneficial for deaf readers. This chapter focuses on deaf adults who are signers and have achieved reading success. Although the left-hemisphere dominant reading circuit is largely similar, skilled deaf readers exhibit a more bilateral neural response to written words and sentences compared to their hearing peers, as measured by event-related potentials and functional magnetic resonance imaging. Skilled deaf readers may also rely more on neural regions involved in semantic processing compared to hearing readers. Overall, emerging evidence indicates that the neural markers for reading skill may differ for deaf and hearing adults.

Keywords: deaf, reading, visual word recognition, sentence reading, fMRI, ERP


Reading is a complex visual and linguistic process that differs for deaf and hearing adults because of their distinct sensory and language experiences. Changes in visual processing that stem from a lack of auditory input during development and imprecise phonological representations that result from reduced access to auditory speech can both alter the nature of the reading process. Although a great deal is known about the neural circuitry that supports reading in typical hearing adults (e.g., Dehaene, 2009; Pugh et al., 2001), much less is known about the neural regions that support skilled reading in deaf individuals. This chapter reviews our current understanding of the neurocognitive underpinnings of reading skill in deaf adults who grew up with a sign language (i.e., were exposed to a sign language in early childhood). The review focuses primarily on deaf adult signers because a) they are less likely to have experienced language deprivation during childhood (e.g., Humphries et al., 2012; Glickman & Hall, 2018), which may have distinct effects on reading acquisition, and b) they are less likely to access phonological codes when reading compared to deaf adults who acquired only a spoken language (e.g., Hirshorn et al., 2015; Koo, Kelly, LaSasso, & Eden, 2008). Thus, deaf signers may be more likely to exhibit neural plasticity within the reading system and may achieve reading success through alternative pathways compared to hearing speakers.

All neurobiological theories of reading must specify the neurocognitive processes involved in comprehending the elemental units of written language: visually encountered words. Thus, this review first examines how the brain recognizes individual words and the factors that influence the brain's response to printed words in deaf compared to hearing readers. The second (shorter) section of this review describes the few neuroimaging and neurophysiological studies that have investigated the processes involved in sentence-level reading in deaf compared to hearing adults. The final section concludes with a summary and suggestions for future research directions.

<1> Visual Word Recognition

Word reading in the hearing population is supported by a distributed neural circuitry associated with specific component processes (i.e., phonology, orthography, and semantics). Briefly, skilled word reading in hearing individuals involves a left-hemisphere dominant system comprising temporal-parietal cortex, which maps visually printed words onto phonological representations (and also binds phonological information to semantic representations); ventral inferior temporal cortex, which maps visual features onto orthographic representations and includes the Visual Word Form Area; and inferior frontal cortex, which is involved in both semantic and phonological processing of written words (for review see Dehaene, 2009). Figure 1 provides a schematic illustration of these brain regions.

Figure 1. Schematic of the reading circuit in the left hemisphere.


In addition, much is known about the neural dynamics of word reading in the hearing population as revealed by event-related potentials (ERPs). Specifically, many studies using the visual masked priming paradigm have identified a cascade of ERP components that a) identify low-level visual features of letters (the N/P150), b) process sublexical orthographic and phonological representations (the N250), and c) process whole-word representations (the N400) (see Grainger & Holcomb, 2009, for review). In visual masked priming, orthographic, phonological, or semantic information from a briefly presented masked prime is integrated with information extracted from the target word, and the prime and target are processed as a single perceptual event – readers are unaware of seeing the prime word. Thus, priming is not subject to strategic effects, and manipulating the type of priming (e.g., phonological, orthographic, or semantic) reveals the linguistic sensitivity of various ERP components. The N170 is another ERP component that has been studied extensively in hearing readers and that is elicited by single written words (not masked). The N170 is left-lateralized in hearing adults (i.e., larger over the left hemisphere) and appears to index fine-tuning of orthographic representations, as the N170 amplitude is larger to words than to other visual stimuli, such as symbol strings (e.g., Rossion, Joyce, Cottrell, & Tarr, 2003; Maurer, Brandeis, & McCandliss, 2005). In this section, recent neuroimaging and electrophysiological studies of word reading are reviewed that examine how the brain processes written words in deaf adults and whether these processes differ from hearing readers.
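To make the masked priming procedure concrete, the sketch below lays out the event sequence of a single trial. It is purely illustrative: the durations and the run_trial helper are assumptions chosen for exposition, not parameters from any study reviewed here.

```python
# Illustrative sketch of one visual masked priming trial. The forward mask,
# the brief lowercase prime, and the uppercase target appear in immediate
# succession, which is why readers are unaware of seeing the prime.
trial_events = [
    ("forward_mask", "#####", 500),  # pattern mask; duration in ms (assumed)
    ("prime",        "coat",   50),  # briefly presented prime (assumed 50 ms)
    ("target",       "COAT",  500),  # target word, shown through the response window
]

def run_trial(events):
    """Print the event timeline; a real experiment would draw each frame."""
    t = 0
    for name, stimulus, duration in events:
        print(f"{t:4d} ms  {name:<12s} {stimulus}")
        t += duration

run_trial(trial_events)
```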

<2> The Visual Word Form Area

The visual word form area (VWFA) is located along the underside (ventral location) of the inferior temporal cortex (see Figure 1). This region preferentially responds to written words and encodes abstract orthographic representations (e.g., the neural response is insensitive to case, font, or script; see Dehaene & Cohen, 2011, for review). As hearing children learn to read, this region becomes tuned to print, with better readers exhibiting stronger activation. Adults who are illiterate do not show increased neural activity in the VWFA when viewing written words compared to similar non-word visual stimuli, but VWFA activation increases with increased literacy.

Thus far, most studies have found no difference in VWFA activation between deaf and hearing readers (Aparicio, Gounot, Demont, & Metz-Lutz, 2007; Emmorey, Weisberg, McCullough, & Petrich, 2013; Waters et al., 2007; Wang, Caramazza, Peelen, Han, & Bi, 2015). The location and extent of activation within the VWFA when recognizing visual words is similar for both groups, but how this region connects to other brain areas differs. Wang et al. (2015) found that the functional connectivity between the VWFA and auditory speech areas in left superior temporal cortex was reduced for congenitally deaf readers, but the connectivity between the VWFA and the frontal and parietal regions of the reading circuit was similar for deaf and hearing readers. The authors concluded that auditory speech experience does not significantly alter the location or response strength of the VWFA.

However, there are mixed results with respect to whether reading skill impacts neural activity within the VWFA for deaf adults. Corina, Lawyer, Hauser, and Hirshorn (2013) found greater activation for more proficient deaf readers, while Emmorey, McCullough, and Weisberg (2016) found no difference between skilled and less-skilled deaf readers in this region. Studies that compared deaf readers with more skilled hearing readers have also reported no group differences in activation in this region (Aparicio et al., 2007; Wang et al., 2015; Waters et al., 2007). These latter findings suggest that a primary marker of poor reading in hearing individuals, namely reduced activation in the VWFA (e.g., Hoeft et al., 2007), may not constitute an indicator of reading skill for deaf individuals. Rather, Emmorey et al. (2016) found that better reading ability in deaf adults was associated with increased neural activation in a region that was in front of (anterior to) the VWFA when deaf readers made a semantic decision about words. Emmorey et al. speculated that this region may be involved in mapping orthographic word-level representations onto semantic representations (e.g., Purcell, Shea, & Rapp, 2014) and that better deaf readers may have stronger or more finely tuned links between orthographic and semantic lexical representations. Better deaf readers may activate this interface more consistently or to a greater extent than less-skilled deaf readers when reading for meaning.

Glezer et al. (2018) recently examined whether the same type of neural tuning to orthography in the VWFA that has been observed in typical hearing readers (Glezer et al., 2009) is also found in skilled deaf readers. In these studies, participants performed an ‘oddball’ detection task, pressing a button when the string ‘xyz’ occurred within a word or pseudoword (this occurred infrequently). Glezer et al. (2018) used a rapid adaptation paradigm with functional magnetic resonance imaging (fMRI-RA) to investigate whether orthographic neural tuning occurs in the absence of robust auditory phonological input during development. In this paradigm, the neural response to a sequentially presented pair of stimuli (e.g., words or pseudowords) is taken to reflect the selectivity of a neuronal population. A weak neural response to the second word indicates adaptation and thus activation of the same neuronal population (i.e., the same representation has been activated), while a strong neural response to the second word indicates activation of distinct neuronal populations (i.e., different representations have been activated) and thus weak adaptation.
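The adaptation logic can be summarized in a few lines of code. The sketch below is a toy illustration of how an adaptation index might be computed from per-condition BOLD estimates; the numbers and the adaptation_index function are invented for exposition and merely mirror the whole-word tuning pattern described next, not actual data from Glezer et al.

```python
import numpy as np

# Hypothetical BOLD estimates (arbitrary units) for the second word of each
# prime-target pair in a VWFA region of interest; condition names follow the
# chapter's examples (coat-coat, boat-coat, fish-coat), the numbers are invented.
bold = {
    "repeated":   np.array([0.21, 0.18, 0.25]),  # coat - coat
    "one_letter": np.array([0.46, 0.51, 0.43]),  # boat - coat
    "different":  np.array([0.48, 0.50, 0.45]),  # fish - coat
}

def adaptation_index(condition, baseline):
    """Proportional response reduction relative to the 'different' baseline:
    larger values = stronger adaptation = the same neuronal population fired."""
    return 1.0 - condition.mean() / baseline.mean()

for name, response in bold.items():
    print(f"{name:>10s}: adaptation = {adaptation_index(response, bold['different']):.2f}")
```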


Using the fMRI-RA paradigm, Glezer et al. (2018) found that, as with hearing readers, neurons in the VWFA of skilled deaf readers were tightly tuned to known written whole words. Neural adaptation in the VWFA for repeated words (coat – coat) was greater than for words that differed by only one letter (boat – coat) and was greater than for different words (fish – coat), i.e., there was less neural activity for a repeated target word than for target words preceded by primes that were either orthographically similar or completely different. The latter two conditions did not differ from each other in degree of adaptation, indicating that the VWFA is selectively tuned to whole word representations. Further, the neural tuning for pseudowords (e.g., poat – soat) was broader, as found for hearing readers, indicating that the neural adaptation in the VWFA is experience-dependent, that is, occurring only for known whole words. The deaf readers in this study had much poorer phonological awareness abilities than their typical hearing peers, but nonetheless, they exhibited neural selectivity to whole words in the VWFA. This finding indicates that the nature of orthographic tuning in the VWFA is not altered by imprecise phonological representations.

In contrast to what has been found for typical hearing readers, however, Glezer et al. (2018) found that skilled deaf readers exhibited evidence of orthographic selectivity in the right VWFA. For hearing readers, it is often not even possible to identify the VWFA in the right hemisphere (the VWFA is typically identified through a separate localizer scan which contrasts words with other visual stimuli). For example, only two out of 20 participants demonstrated a right VWFA in Baker et al. (2007) and only 14 out of 34 in Glezer et al. (2015). The right VWFA was identified in 10 out of the 12 skilled deaf readers in the Glezer et al. (2018) study.

Thus, word reading for deaf, but not hearing, adults appears to frequently engage the right VWFA. Although recruitment of the right hemisphere has been implicated in poor reading for hearing people (e.g., Shaywitz et al., 2003), this does not appear to be the case for deaf readers.

<2> The Dorsal Network of the Reading Circuit

As schematized in Figure 1, the temporal-parietal and inferior frontal cortices constitute the dorsal network of the reading circuit. Several studies have now shown that during speech-based phonological tasks, inferior parietal cortex is more engaged for deaf than hearing readers (Aparicio et al., 2007; Emmorey et al., 2013; Li et al., 2014), although this region is not more active for deaf readers when participants perform a semantic task (Emmorey et al., 2013). Increased activation in parietal cortex may reflect increased attention due to the difficulty of speech-based phonological tasks for deaf participants, who tend to perform worse than hearing participants on these tasks (e.g., rhyme judgments, syllable counting). In fact, when MacSweeney, Brammer, Waters, and Goswami (2009) compared subsets of deaf and hearing participants who performed similarly on a rhyme judgment task (using picture, rather than word stimuli), no activation differences within parietal cortex were observed between groups.

Glezer et al. (2016) found that for typical hearing readers, a region within temporal-parietal cortex (TPC) is finely tuned to phonological representations when reading single words, but this region shows only weak selectivity to phonology in skilled deaf readers (Glezer et al., 2018). Glezer et al. (2018) functionally localized a region within TPC that was active when deaf readers made phonological decisions about words (rhyme judgments) and then used the rapid adaptation paradigm to determine whether homophones (pane – pain) showed the same degree of adaptation as repeated words (pain – pain), as found in hearing readers. When asked to make a phonological decision (detect an infrequent two-syllable word, e.g., lion), deaf readers, unlike hearing readers, did not show equal adaptation for homophones and repeated words. Rather, deaf readers exhibited weaker adaptation to the homophone target (pane – pain) than to the repeated target (pain – pain). Nonetheless, deaf readers exhibited more adaptation in this region for homophone word pairs than for control word pairs that did not rhyme but shared letters (nail – pain), indicating some selectivity to phonology in the TPC. Glezer et al. (2018) concluded that this pattern of findings indicates that a) deaf readers activate phonology when making phonological decisions about written words, but they rely on more coarse-grained representations compared to hearing readers, and b) fine-grained phonological representations are not a necessary pre-condition for reading success.

The inferior frontal cortex (IFC) has been implicated in both semantic and phonological processing of written words, but whether this region is similarly engaged for deaf and hearing readers is somewhat unclear. Aparicio et al. (2007) found that both the left and right IFC were activated more in deaf than hearing readers when performing a phonological task (rhyme judgments). When performing a lexical decision task, a different pattern was observed: greater left IFC activation was found for hearing than deaf readers, while deaf readers exhibited greater activation than hearing readers in the right IFC. However, these differences could have been due to differences in reading skill. Emmorey et al. (2013) controlled for reading ability and found that deaf and hearing readers activated left IFC to a similar degree when performing a semantic task (concreteness judgment) or a phonological task (syllable counting). Interestingly, when these two tasks were directly contrasted, hearing readers showed no difference in neural activity within left IFC, while deaf readers exhibited neural segregation for semantic and phonological processing: greater activation in the anterior region for semantic processing and greater activation in the posterior region for phonological processing. To account for this pattern, Emmorey et al. (2013) suggested that deaf readers may be less likely to implicitly activate phonological representations when performing a semantic task and may also be less likely to robustly activate semantic representations when performing a difficult phonological task. Hearing readers, on the other hand, may automatically activate phonology when reading words for both tasks, leading to less segregation and more neural overlap in left IFC for semantic and phonological processing.

The evidence to date suggests that reading skill may modulate activity within left IFC. Corina et al. (2013) found that more proficient deaf readers exhibited activation in left IFC during an implicit word reading task (detect a “tall” letter). Emmorey et al. (2016) found that skilled deaf readers exhibited more activation in left IFC than less-skilled deaf readers when making semantic judgments. Interestingly, Emmorey et al. (2016) also reported a correlation between reading ability and neural activity within right IFC, with better readers exhibiting stronger activity during the semantic task. Again, recruitment of the right hemisphere during word reading does not appear to be maladaptive for deaf readers.

When performing a phonological task (syllable counting), Emmorey et al. (2016) found that deaf readers who were more accurate on the task exhibited greater activation in left IFC. Similarly, MacSweeney et al. (2009) found that left IFC was more activated for deaf than hearing adults when performing rhyme judgments with picture stimuli. MacSweeney and colleagues suggested that deaf readers may rely more on articulatory than auditory representations when performing speech-based phonological judgments. The posterior region of left IFC is involved in the articulatory coding of speech and may therefore be recruited to a greater extent by deaf than hearing individuals for such tasks.

The fMRI-RA study by Glezer et al. (2018) revealed that skilled deaf readers exhibited the same response profile in left IFC as typical hearing readers from Glezer et al. (2016). Both groups showed selectivity to whole written words with no adaptation for homophones, which indicates that this region is more responsive to orthographic than to phonological structure. Thus, although the overall neural response within left IFC (as measured by the averaged fMRI BOLD response) is modulated by skill level and type of task, the underlying neural specificity (as measured with fMRI-RA) is highly tuned to whole written words and appears to be unaffected by phonological ability. Glezer et al. (2018) suggested that the previously observed differences between deaf and hearing readers in left IFC might be due to differences in other cognitive skills (e.g., attention, task strategies) rather than to differences in the underlying word representations or the degree of orthographic or phonological coding.

In sum, the neuroimaging results to date indicate that deaf readers develop a neural reading circuit that is similar to that of hearing readers, but the system may be more bilateral, with skilled deaf readers recruiting the right IFC and right VWFA. Skilled deaf readers exhibit neural tuning to orthography in the left IFC and VWFA and coarse tuning to phonology in the TPC. The neural connectivity from the left VWFA to IFC does not differ between deaf and hearing readers (Wang et al., 2015), and better deaf readers may have developed a stronger interface between orthographic words and semantic representations (Emmorey et al., 2016).

<2> The Time Course of Visual Word Recognition

Words are recognized very rapidly by typical readers – roughly two to five words per second. ERPs provide a window into the rapidly unfolding stages of visual word recognition because of the high temporal resolution of this methodology. In this section, recent ERP studies are reviewed that illuminate both similarities and differences in the temporal neural dynamics of word recognition for deaf and hearing readers.


<3> The N170 Response and the Phonological Mapping Hypothesis

The N170 response to words and word-like stimuli is a negative-going wave that peaks about 170 milliseconds (ms) after the onset of the visual stimulus and has a larger amplitude over the left hemisphere. Rossion et al. (2003) proposed that the cortical areas producing N170 effects become tuned to preferentially process a specific domain of knowledge (e.g., words or faces) over the course of extensive experience. One possible explanation for the left-lateralization of the N170 in hearing readers is the phonological mapping hypothesis, which proposes that the emergence of left hemisphere processing of visual words is the result of linking written words to left hemisphere auditory language regions in order to map orthographic onto phonological representations when learning to read (McCandliss & Noble, 2003). Recently, Sacchi and Laszlo (2016) provided support for this hypothesis by showing that in hearing children (ages 10 – 11 years) the degree of left lateralization of the N170 to words was predicted by phonological awareness (but not by vocabulary size).

Emmorey et al. (2017) tested the phonological mapping hypothesis by examining the laterality of the N170 response in deaf and hearing adults who were matched on reading ability, but who differed in phonological awareness, with the deaf readers performing significantly worse on the Hirshorn et al. (2015) tests of phonological ability. Hearing readers exhibited the expected left-lateralized N170 (greater negativity for words than symbol strings), while deaf readers exhibited a much more bilateral N170 response at temporal electrode sites. This result supports the hypothesis that the left-lateralized N170 to words at sites near auditory language regions in hearing readers arises from the developmental process of consistently mapping orthographic representations to precise auditory-based phonological representations of speech.
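A common way to quantify the laterality patterns just described is a simple laterality index over homologous left and right electrode sites. The sketch below is illustrative only: the electrode pair, the amplitudes, and the specific formula are assumptions chosen for exposition, not values or methods from Emmorey et al. (2017).

```python
# Hypothetical mean N170 amplitudes (microvolts) at homologous temporal
# electrodes (e.g., P7 on the left, P8 on the right). N170 amplitudes are
# negative, so the index is computed on absolute values.
def laterality_index(left_uv, right_uv):
    """(L - R) / (L + R): +1 = fully left-lateralized, 0 = bilateral,
    -1 = fully right-lateralized."""
    l, r = abs(left_uv), abs(right_uv)
    return (l - r) / (l + r)

print(laterality_index(-4.8, -2.1))  # ~0.39: left-lateralized, hearing-like
print(laterality_index(-3.5, -3.3))  # ~0.03: bilateral, deaf-like pattern
```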


Importantly, linear mixed effects regression analyses revealed that the relation between reading ability and N170 amplitude differed for deaf and hearing readers. Better reading ability was associated with a larger right hemisphere N170 over occipital sites for deaf readers, but for hearing readers better reading ability was associated with a smaller right hemisphere N170. Since the groups were matched on reading ability, this finding suggests that the optimal neural dynamics of visual word recognition differs for skilled deaf and hearing readers. For hearing readers, increased engagement of the right hemisphere was associated with poorer reading ability, consistent with other studies. Recruitment of the right hemisphere has been argued to be maladaptive for hearing readers because the right hemisphere may process words more like visual objects, thus resulting in less differentiated orthographic representations (Laszlo & Sacchi, 2015). For deaf readers, in contrast, recruitment of the right hemisphere appears to be beneficial. Emmorey et al. (2017) also found that better spelling ability was associated with a larger right hemisphere N170 for deaf readers. Thus, recruitment of right hemisphere regions is not indicative of poorly specified orthographic representations in deaf readers, possibly because orthographic representations are not fine-tuned by left-lateralized phonological mappings.
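For readers unfamiliar with this type of analysis, the sketch below shows one plausible way to set up such a mixed effects model in Python. The column names, data file, and model specification are my assumptions; the chapter does not report the exact model, so this is a schematic of the general approach rather than a reproduction of it.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per subject x hemisphere, with columns
# n170 (mean amplitude), reading (standardized score), group (deaf/hearing),
# hemi (left/right), and subject (ID). The file name is hypothetical.
df = pd.read_csv("n170_data.csv")

# Random intercept per subject; the reading x group x hemi interaction tests
# whether the skill-laterality relation differs between deaf and hearing readers.
model = smf.mixedlm("n170 ~ reading * group * hemi", df, groups="subject")
print(model.fit().summary())
```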

<3> The N250 Response and Sublexical Processes

The N250 component is a negative-going wave that peaks around 250 ms after target word onset in a visual masked priming paradigm. The N250 is hypothesized to reflect automatic sublexical processes that engage orthographic and phonological codes (Grainger & Holcomb, 2009). Gutierrez-Sigut, Vergara-Martínez, and Perea (2017) used ERPs and pseudohomophone masked priming in an orthographically transparent language (Spanish) to investigate whether congenitally deaf readers automatically activate phonological representations at an early sublexical stage of word recognition. Pseudohomophones are nonwords that sound like real words if pronounced (e.g., werk, brane). Gutierrez-Sigut et al. (2017) found evidence for pseudohomophone priming in both their behavioral and ERP results. Deaf readers made faster lexical decisions when the target word was preceded by a pseudohomophone prime (koral – CORAL) than an orthographically-related nonword prime (toral – CORAL), and the N250 amplitude was reduced (indicating priming) for the pseudohomophone compared to the orthographic control condition. The size of the phonological priming effect was similar for deaf and hearing readers who had similar phonological skills (as assessed by a syllable counting task).

However, the size of the N250 priming effect correlated with sentence reading ability and phonological awareness only for the hearing readers. Gutierrez-Sigut and colleagues concluded that the use of sub-lexical phonological codes may be an important contributor to reading ability for hearing but not for deaf readers.

However, masked phonological priming – and thus early automatic activation of phonological representations – in deaf adults may be restricted to languages with transparent orthographies, such as Spanish. Masked phonological priming effects have not been observed for deaf readers of languages with more opaque orthographies, such as French (Bélanger, Baum, & Mayberry, 2012) and English (Cripps, McBride, & Forster, 2005). Although these studies revealed orthographic priming effects, phonological priming was not observed for either skilled or less-skilled deaf readers.

Nonetheless, even for Spanish deaf readers there is evidence that the early stages of visual word recognition may not be influenced by phonological feedback from lexical level representations. Perea, Marcet, and Vergara-Martínez (2016) examined masked priming for word and nonword pairs that either matched or mismatched in case for a group of deaf readers who were native or early Spanish Sign Language users and who self-reported relatively poor speech ability. For hearing readers, previous research had shown that the physical identity between a masked prime and target gives rise to a priming advantage for nonwords. For example, GEDA-GEDA results in greater priming than geda-GEDA, but this pattern is not observed for real words, that is, the amount of priming for REAL-REAL is the same as for real-REAL (Vergara-Martínez, Gomez, & Jiménez, 2015). The explanation for this priming difference is that words, unlike nonwords, benefit from top-down feedback from lexical level representations. The hypothesis is that top-down feedback from the lexical representations creates stable phonological codes during the early stages of visual word processing, which leads to similar neural correlates for REAL-REAL and real-REAL in masked priming experiments (Perea et al., 2016). In contrast, nonwords do not have stored lexical representations and do not receive top-down feedback; thus, matched-case pairs that are visually similar (GEDA-GEDA) exhibit a processing advantage over mismatched-case pairs (geda-GEDA). Perea et al. (2016) found that in contrast to hearing readers, deaf readers exhibited greater masked priming for matched-case than mismatched-case pairs for both word and nonword stimuli. The authors concluded that the early stages of visual word recognition are distinct for deaf and hearing readers, most likely due to differences in the amount of phonological feedback from higher lexical levels.

<3> The N400 Response and Lexical-level Processes

The N400 is a negative-going wave that peaks around 400 ms after stimulus onset and is hypothesized to reflect processes within the whole-word lexicon. The N400 component is sensitive to semantic, orthographic, and phonological manipulations, with different manipulations modulating the scalp distribution of the N400. With respect to semantics, Meade et al. (2017) reported faster response times and smaller amplitude N400s (indicating priming) to target words in semantically related pairs (mouse – rat) than unrelated pairs (accent – bath) for deaf readers. In this study, the task was to decide whether the two English words were semantically related or not. Meade et al. found that the size and timing of the N400 effect appeared very similar for the deaf and hearing readers. In addition, Osmond et al. (2018) found the expected N400 modulation associated with concreteness (i.e., more negativity for concrete than abstract words) when deaf readers made a Go/No-Go lexical decision (i.e., press a button when an occasional nonword is detected). Together, these findings indicate similar lexical-semantic processing at the whole-word level for deaf and hearing readers.

The masked priming study by Gutierrez-Sigut et al. (2017) described above also found a reduced N400 for pseudohomophone prime-target pairs (koral – CORAL) compared to orthographic control pairs (toral – CORAL) in both deaf and hearing Spanish readers. This N400 effect can be interpreted as reflecting activation of whole-word phonological representations by pseudohomophone primes. MacSweeney, Goswami, and Neville (2013) found a reduced negativity to target rhyming words (bear – fair) compared to nonrhyming targets (scar – fair) within the N400 time window for deaf readers who scored above chance on the difficult rhyme judgment task (9 out of 15 participants). This effect was more bilateral for the deaf than the hearing group, which MacSweeney et al. (2013) interpreted as an effect of learning English as a second language by the deaf participants, who were all native users of British Sign Language. However, another possible interpretation is that the stronger lateralization observed for rhyme priming in hearing readers is related to a stronger association with auditory phonological word representations in the left hemisphere, i.e., a word-level consequence of the phonological mapping hypothesis.


Finally, Meade et al. (2018a) examined the N400 component in relation to orthographic neighbor priming in deaf and hearing readers. Orthographic neighbors of a word (e.g., node) are other words of the same length that differ by only one letter (e.g., note, nose, tote). Masked priming studies with hearing readers have found that word targets preceded by masked neighbor word primes (e.g., note – NODE) yield slower lexical decision times than unrelated primes (e.g., kiss – NODE) (e.g., Andrews & Hersch, 2010). Further, targets following masked neighbor word primes elicit a larger N400 negativity than those following unrelated word primes (Meade et al., 2018b). These findings are interpreted as reflecting lexical competition (lateral inhibition) between the orthographic neighbor prime and the target word. Importantly, these effects do not occur with nonword primes (e.g., nade – NODE), which do not have lexical representations and therefore cannot engage in lexical competition. Meade et al. (2018a) compared the processing of target words preceded by masked neighboring prime words or nonwords in deaf and hearing readers. Both groups exhibited slower lexical decision times and a larger N400 negativity for targets following word primes than nonword primes, indicative of lexical competition. However, the distribution of the N400 was more anterior for the hearing readers, which Meade et al. (2018a) interpreted as reflecting stronger activation of, and greater competition among, phonological representations (phonological effects tend to have a more anterior scalp distribution than orthographic effects). This pattern of results suggests that lexical competition characterizes visual word recognition for both deaf and hearing readers, but competition may be restricted to orthographic representations for deaf readers.
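The neighbor definition used in these studies is easy to operationalize. The sketch below is an illustrative implementation of substitution neighbors (often called Coltheart's N) over a toy lexicon; the function name and word set are mine, not from Meade et al.

```python
import string

def substitution_neighbors(word, lexicon):
    """Return words in `lexicon` of the same length that differ from `word`
    by exactly one substituted letter (orthographic neighbors)."""
    neighbors = set()
    for i in range(len(word)):
        for letter in string.ascii_lowercase:
            candidate = word[:i] + letter + word[i + 1:]
            if candidate != word and candidate in lexicon:
                neighbors.add(candidate)
    return sorted(neighbors)

toy_lexicon = {"node", "note", "nose", "code", "mode", "kiss"}
print(substitution_neighbors("node", toy_lexicon))  # ['code', 'mode', 'nose', 'note']
```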

In sum, the neural time course of visual word recognition appears to be quite similar for deaf and hearing adults who are proficient readers, with both groups exhibiting N170, N250, and N400 ERP components when processing single words. However, the amplitude and scalp distribution of these components can differ between deaf and hearing individuals. Deaf readers exhibit a more bilateral distribution for the N170 and the phonological N400 (the “N450”; MacSweeney et al., 2013). Whether deaf readers automatically activate phonological representations, as indexed by a reduced N250 with pseudohomophone primes, may depend upon the orthographic transparency of the language. In addition, the results suggest that a more bilateral N170 (an ERP component that indexes orthographic tuning) is associated with better reading skill for deaf individuals, but poorer reading ability for hearing people.

<1> Sentence Reading

Much less research has been conducted on the neural substrates that support sentence reading in deaf individuals. The first ERP study with sentence stimuli was conducted by Neville, Mills, and Lawson (1992), in which typical hearing readers were compared with congenitally deaf readers who were native signers and who scored below the hearing readers on tests of reading ability and knowledge of English grammar. As is typical of ERP studies, words were presented one at a time in the center of the screen to avoid artifacts in the EEG caused by eye movements and with a short delay between words (in this case 500 ms, with words presented for 200 ms), which provides enough time to prevent overlapping ERPs to adjacent words. Both hearing and deaf readers exhibited the same neural sensitivity to semantic information, displaying a larger N400 response (with a similar latency and amplitude) for final words that violated semantic expectations (e.g., The winning candidate was preparing his acceptance wood) than to expected final words (e.g., Each April we must pay our income tax). The ERP response to open class words (e.g., winning, candidate) was also similar for the two groups, but the neural response to closed class words (e.g., the, was) differed. For hearing readers, closed class words elicited a left-lateralized anterior N280 response that was absent for deaf readers. Neville et al. (1992) and subsequent studies have suggested that this left anterior negativity is associated with syntactic processing related to the grammatical functions of closed class words. Poorer grammatical knowledge of English may account for the lack of an N280 response in the deaf readers. Neville et al. (1992) reported that a small number of deaf readers who scored well on tests of English grammar were found to exhibit a left-lateralized N280 response to closed class words. Weber-Fox and Neville (2001) reported a similar result with hearing Chinese-English bilinguals, who showed a weaker N280 response and had poorer English grammatical knowledge compared to monolingual speakers.
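As a concrete illustration of the word-by-word presentation procedure described above, the short sketch below generates an onset schedule under the stated timing (200 ms word duration, 500 ms delay between words). The function name, and the interpretation of the 500 ms value as a blank interval after each word, are my assumptions.

```python
def rsvp_schedule(sentence, duration_ms=200, delay_ms=500):
    """Yield (onset_ms, word) for words presented one at a time: each word is
    shown for duration_ms, followed by a blank delay of delay_ms."""
    soa = duration_ms + delay_ms  # stimulus onset asynchrony between words
    for i, word in enumerate(sentence.split()):
        yield (i * soa, word)

for onset, word in rsvp_schedule("Each April we must pay our income tax"):
    print(f"{onset:5d} ms: {word}")
```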

More recently, Mehravari et al. (2017) investigated ERP responses to semantic violations (e.g., The huge house still listens to my aunt), which, as noted above, generate a large N400, and to syntactic violations (e.g., The huge house still belong to my aunt), which generate a P600, a positive-going wave that peaks about 600 ms after the grammatical error. The deaf participants in this study were all prelingually and profoundly deaf and varied in their age of exposure to and use of a signed language, although the majority of participants were signers. Overall, the deaf participants had poorer reading comprehension scores than the hearing participants, but more than half of the deaf group (24/42) performed within the same range as the hearing group. As found by Neville et al. (1992), both deaf and hearing readers exhibited a large N400 response to semantic violations. Similarly, Skotara et al. (2011; 2012) found that deaf native and non-native signers of German Sign Language (DGS) exhibited an N400 response to semantic violations in written German sentences that was identical to that observed in hearing German readers. These results indicate that similar neural processes are involved in semantic processing for deaf and hearing readers at the sentence level. However, Mehravari et al. (2017) found that only the hearing readers exhibited a significant P600 to syntactic violations. Importantly, this pattern also held for the subgroup of skilled deaf readers who were matched with the hearing participants on reading ability.

Further, Mehravari et al. (2017) found that deaf and hearing readers exhibited different relationships between reading skill and sensitivity to semantic and grammatical information, and this differential pattern also held for the subgroup of deaf and hearing participants with similar reading ability. Specifically, the size of the N400 effect was strongly correlated with reading ability for deaf but not hearing individuals. A similar finding was reported by Gutierrez-Sigut et al. (2017) for the N400 pseudohomophone priming effect with single words; that is, the size of the N400 effect correlated with reading ability for deaf but not hearing participants. In contrast, the size of the P600 response to syntactic violations was related to reading ability for the hearing but not the deaf participants. Together, these results suggest that equally proficient deaf and hearing readers rely on different types of linguistic information when reading sentences. The best deaf readers rely primarily on semantic information, while the best hearing readers rely on both semantic and syntactic information. Further, for hearing, but not deaf, readers there is a link between reading skill and sentence-level grammatical processing.
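The brain-behavior correlations described above can be sketched in a few lines. The code below illustrates, on simulated data, how a per-subject N400 violation effect might be quantified and correlated with reading scores; the 300-500 ms window is a conventional assumption, and all values are invented rather than taken from Mehravari et al. (2017).

```python
import numpy as np

rng = np.random.default_rng(0)
srate = 250                     # sampling rate in Hz (assumed)
n_sub, n_time = 20, srate       # 20 simulated subjects, 1 s epochs
erp_violation = rng.normal(-2.0, 1.0, (n_sub, n_time))  # semantic violation ERP
erp_control = rng.normal(-1.0, 1.0, (n_sub, n_time))    # well-formed control ERP
reading_score = rng.normal(100, 15, n_sub)              # standardized reading test

win = slice(int(0.3 * srate), int(0.5 * srate))  # assumed 300-500 ms N400 window
n400_effect = (erp_violation[:, win] - erp_control[:, win]).mean(axis=1)
r = np.corrcoef(n400_effect, reading_score)[0, 1]
print(f"r(N400 effect, reading ability) = {r:.2f}")
```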

However, age of sign language acquisition may play a role in whether a robust P600 effect is observed for syntactic violations in deaf readers. The small P600 effect observed by Mehravari et al. (2017) may have occurred because most of the participants (90%) were not exposed to ASL in early childhood and thus were at risk for language deprivation. Supporting this possibility, Skotara and colleagues (2011; 2012) found that deaf native DGS signers exhibited a P600 response for written sentences with German verb agreement violations, which was comparable to the P600 effect observed for hearing readers. Further, non-native deaf signers who were exposed to DGS at the time of school enrollment exhibited much weaker P600 effects (Skotara et al., 2012). Visual inspection of the few native ASL signers (n = 4) from the Mehravari et al. study suggested that they showed more P600-like activity than the group of deaf readers as a whole. Thus, it is possible that the lack of a robust P600 effect in the deaf readers studied by Mehravari et al. could have been due, at least in part, to a lack of accessible language input during early development.

Neville and her colleagues were also the first to use fMRI to examine sentence reading in deaf adults (all native ASL signers). Neville et al. (1998) contrasted the neural response to reading simple declarative English sentences with viewing “consonant-string sentences” that were presented one item at a time (600 ms/item) in the center of a screen. Participants were asked at the end of a run (a set of stimuli) whether they had seen a specific sentence or sequence of consonant-string items. Deaf readers exhibited bilateral activation in the dorsal network of the reading circuit (inferior frontal and parietal cortices), while activation was strongly left-lateralized for the hearing readers. The deaf readers performed worse than the hearing readers on an English grammaticality judgment test, and Neville et al. (1998) hypothesized that greater right hemisphere involvement for the deaf readers may have been due to poorer English skills.

Controlling for reading ability, Hirshorn et al. (2014) examined sentence reading using fMRI in two groups of congenitally/prelingually deaf adults (native signers and ‘oral’ deaf adults who had acquired only a spoken language) and hearing monolingual English adults. Sentences were presented as in Neville et al. (1998) and were contrasted with “false font sentences” created using the Wingdings font. To ensure participants were reading for meaning, sentences were occasionally followed by a picture, and participants indicated whether it matched the sentence or not (these trials were not analyzed). Similar to Neville et al. (1998), participants knew they would be asked at the end of the scanning session whether they had seen a given sentence or false font string (thus, memory systems were likely recruited as well). A conjunction analysis identified regions of activation that were greater for sentences compared to false font strings in all three groups. This analysis revealed bilateral activation in the inferior frontal cortices and in temporal cortices (with greater activation on the left). Importantly, significant group differences were observed in just two regions: bilateral superior temporal gyri (STG; specifically, primary and secondary auditory regions) and a region overlapping with the VWFA in inferior temporal cortex.
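The logic of a conjunction analysis is simply a voxelwise intersection: a voxel counts as commonly active only if it passes threshold in every group's sentences-versus-false-font contrast. The sketch below illustrates this on invented z-score volumes; the array shapes, variable names, and threshold are assumptions, not details from Hirshorn et al. (2014).

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4, 4)  # a toy brain volume; real statistical maps are far larger
z_deaf_signers, z_oral_deaf, z_hearing = (rng.normal(1.5, 1.0, shape) for _ in range(3))

z_thresh = 3.1  # a conventional (assumed) voxelwise threshold
conjunction = (z_deaf_signers > z_thresh) & (z_oral_deaf > z_thresh) & (z_hearing > z_thresh)
print(f"{int(conjunction.sum())} voxels active in all three groups")
```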

Both deaf groups exhibited greater activation in left and right STG compared to hearing readers, and activation in these regions did not differ between the two deaf groups. This result suggests that absent or reduced auditory input during development leads to reorganization of function within auditory cortices when deaf individuals read, whether they are signers or nonsigners (see also Cardin et al., 2016). A regression analysis revealed that greater dB loss was associated with greater activation in left STG auditory cortex, but no behavioral measures or demographic variables were linked to neural activation in right STG. This pattern is consistent with a recent fMRI study with congenitally deaf signers by Twomey et al. (2017). These authors found that activation in left STG was sensitive to linguistic task demands, but activation in right STG was task-independent; no activation in STG (left or right) was observed for hearing signers performing the same tasks. Increased activation within auditory cortices when deaf people read (or comprehend sign language) may result from functional neural changes caused by deafness, rather than from differences in linguistic abilities.

Hirshorn et al. (2014) also conducted a functional connectivity analysis, which revealed stronger connectivity for deaf readers (both groups) from left auditory cortex to anterior inferior frontal cortex (BA 45), an area that is associated with semantic processing. This finding supports the emerging view that proficient deaf readers rely more on semantic information than their hearing peers (regardless of sign language experience). In contrast, connectivity from left STG to a speech-based processing region (left postcentral gyrus) was greater for both oral deaf and hearing speakers compared to deaf signers. In addition, connectivity from posterior IFG (BA 44; a region associated with phonological processing) to the inferior temporal lobe was greater for both oral deaf and hearing readers compared to deaf signers. This finding indicates that use of speech (by hearing or deaf individuals) strengthens the connection between visual word processing (in inferior temporal cortex) and phonological speech-based processing (in inferior frontal cortex) during reading comprehension.
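In its simplest form, the functional connectivity measure underlying these findings is a correlation between the mean BOLD time series of two regions of interest. The sketch below simulates this with invented time series; stronger STG-IFC coupling in a group would appear as a higher correlation for that group. All names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vols = 200                                    # number of fMRI volumes (assumed)
shared = rng.normal(size=n_vols)                # signal common to both regions
ts_stg = shared + rng.normal(scale=0.8, size=n_vols)  # left STG ROI mean time series
ts_ifc = shared + rng.normal(scale=0.8, size=n_vols)  # anterior IFC (BA 45) ROI mean

connectivity = np.corrcoef(ts_stg, ts_ifc)[0, 1]
print(f"STG-IFC functional connectivity r = {connectivity:.2f}")
```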

With respect to group differences in left VWFA activation, Hirshorn et al. (2014) found greater activation for oral deaf and hearing readers (who did not differ from each other) compared to deaf native signers. Given that the three groups were matched in reading ability, increased VWFA activation cannot be due to weaker reading abilities in the oral deaf and hearing readers. This finding was unexpected, and Hirshorn et al. (2014) did not provide a possible explanation. One speculative possibility is that increased VWFA activation in these two groups reflects increased top-down modulation from speech-based phonological regions within inferior frontal cortex. Such top-down modulation may be greater during sentence reading than during word reading, where previous studies have found no difference in VWFA activation between hearing and deaf readers.

Finally, Moreno, Limousin, Dehaene, and Pallier (2018) used fMRI to compare sentence processing in written French and French Sign Language (LSF) for deaf adults who acquired LSF as their first language. The results with written French were compared to results from a previous study using the same materials with hearing French readers (Pallier, Devauchelle, & Dehaene, 2011). The participants’ task was to respond to occasional probe sentences that explicitly asked them to press a button. Moreno et al. (2018) found that reading sentences compared to lists of words engaged a strongly left-lateralized network for deaf readers, including middle and inferior frontal cortices and superior and middle temporal cortices, replicating the results found for hearing readers (Pallier et al., 2011). Consistent with Hirshorn et al. (2014), neural activation was also observed in right superior temporal cortex (including auditory regions) for reading sentences compared to word lists. Lexical ability (as assessed by accuracy on a lexical decision task) was positively linked to neural activation in the dorsal reading network (left inferior frontal cortex and left inferior parietal cortex). Interestingly, these positive correlations were also significant in the homologous right hemisphere regions, which again suggests that recruiting the right hemisphere during reading comprehension is a marker of better, not poorer, deaf readers.

<1> Conclusions and Future Directions

Overall, the evidence reviewed here indicates both similarities and differences in the neurobiology of reading for deaf and hearing adults. Evidence from neuroimaging studies indicates that deaf readers engage the same left-lateralized neural circuit as hearing readers when reading written words and sentences (i.e., inferior frontal cortex, temporal-parietal cortex, and inferior temporal cortex; Figure 1). Evidence from electrophysiological studies indicates that the same ERP components are generally observed when deaf adults process written words and sentences (e.g., N170, N250, and N400). However, ERP responses associated with grammatical processing appear to be weaker for deaf than hearing readers, even when matched for reading ability (i.e., the N280 for closed class words and the P600 for syntactic violations). An important avenue for future research is to determine what factors influence the neural sensitivity to syntactic information in deaf readers. For example, the cross-linguistic dissimilarity (lack of transfer) between the morphosyntax of signed and spoken languages could impact the size of the P600 for deaf readers who are signers (e.g., “verb agreement” in ASL is very different from subject-verb agreement in English). In addition, it is important to examine whether the age of sign language acquisition influences the neural subsystems that support syntactic processing in deaf readers.

Several lines of evidence suggest that skilled deaf readers rely more on neural regions involved in semantic processing when comprehending sentences or single written words compared to their hearing peers. First, reading ability is more strongly correlated with the amplitude of the N400 effect to semantic violations for deaf than hearing readers (Mehravari et al., 2017). Second, when reading sentences deaf individuals exhibit greater functional connectivity than their reading-matched hearing peers between a semantic processing region in left inferior frontal cortex and auditory cortex (Hirshorn et al., 2014). Third, better deaf readers exhibit stronger activation in a region in inferior temporal cortex that interfaces between orthographic and semantic word-level representations (Emmorey et al., 2016; Purcell et al., 2014). An important future direction is to investigate whether greater reliance on semantic information represents a “good enough” approach to reading for deaf readers. According to the good-enough hypothesis (Ferreira, Bailey, & Ferraro, 2002), readers analyze grammatical structure only as much as necessary for the task at hand, and it is possible that deaf readers are more likely to extract meaning from sentences without determining the precise grammatical role served by every word.


Whether deaf readers automatically activate phonological representations when reading words (or sentences) is still unclear. Gutierrez-Sigut et al. (2017) found ERP evidence for implicit activation of phonological representations (i.e., reduced N250 negativity for masked pseudohomophone primes) for deaf readers of Spanish, a language with transparent orthography, but others have failed to find evidence for implicit activation of phonological codes during sentence or word reading for languages with more opaque orthographies. Glezer et al. (2018) found that skilled deaf readers exhibited phonological tuning to words in temporal-parietal cortex, although the phonological selectivity was more coarse-grained than that observed for typical hearing readers. Increased activation in this phonological processing region is often observed when deaf readers perform a speech-based phonological task, but this increase in neural activity could be due to the task difficulty (MacSweeney et al., 2009).

Finally, there is growing evidence that activation in homologous right hemisphere regions of the reading circuit is not evidence of poor reading skill in deaf adults and is sometimes indicative of better reading ability. Emmorey et al. (2017) found that a larger amplitude N170 to written words was associated with better reading ability for deaf (but not hearing) readers, and Glezer et al. (2018) found evidence for whole-word orthographic tuning in the right VWFA for skilled deaf readers, which has not been reported for typical hearing readers. Emmorey et al. (2016) found that activation within right inferior frontal cortex was positively associated with reading skill when deaf readers made a semantic decision to words. At the sentence level, Moreno et al. (2018) found that activation in right (and left) superior temporal cortex (including auditory regions) was positively correlated with lexical ability for deaf readers. Hirshorn et al. (2014) also found greater bilateral activation in auditory regions when deaf readers (signers and non-signers) comprehended sentences compared to their reading-matched hearing peers. One avenue for future research is to investigate the role that these right hemisphere regions play in reading comprehension for deaf individuals, as well as to determine why these regions are engaged for deaf but not hearing readers (e.g., the possible role of phonological mapping in shifting activation to the left hemisphere).

In sum, emerging evidence indicates that the optimal neural end-state for skilled reading is not identical for deaf and hearing adults. The reading circuit in deaf signing adults who have achieved reading success appears to have effectively adapted to reduced auditory phonological input during development by increasing reliance on orthographic-to-semantic mappings and engaging homologous right hemisphere regions. The nature of these adaptations creates a neural system that shares core components with the system engaged by hearing readers but may lead to different neural markers of reading skill.

An open question is how this optimal neural end-state is achieved in deaf readers and whether neural markers of reading skill might change across development. For example, it is possible that the role of spoken language phonology shifts from initially supporting deaf children’s entrance into the reading system to playing a less critical role as the orthographic lexicon grows and other processes begin to support reading (e.g., morphological and semantic knowledge). It is also possible that both deaf and hearing beginning readers initially engage right hemisphere regions and that, as reading skills develop, deaf children continue to maintain these right hemisphere regions as part of the reading circuit while hearing children shift to greater reliance on left hemisphere auditory phonological regions. Understanding the behavioral and neural patterns associated with skilled reading in deaf adults provides clues to how reading skill develops, but future research is needed to determine the nature of the path to skilled reading in deaf individuals.


References

Andrews, S., & Hersch, J. (2010). Lexical precision in skilled readers: Individual differences in masked neighbor priming. Journal of Experimental Psychology: General, 139(2), 299.

Aparicio, M., Gounot, D., Demont, E., & Metz-Lutz, M. N. (2007). Phonological processing in relation to reading: An fMRI study in deaf readers. Neuroimage, 35(3), 1303-1316.

Bélanger, N. N., Baum, S. R., & Mayberry, R. I. (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16(3), 263-285. doi:10.1080/10888438.2011.568555

Baker, C. I., Liu, J., Wald, L. L., Kwong, K. K., Benner, T., & Kanwisher, N. (2007). Visual word processing and experiential origins of functional selectivity in extrastriate cortex. Proceedings of the National Academy of Sciences, 104(21), 9087-9092.

Cardin, V., Smittenaar, R. C., Orfanidou, E., Rönnberg, J., Capek, C. M., Rudner, M., & Woll, B. (2016). Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality. Neuroimage, 124, 96-106.

Corina, D. P., Lawyer, L. A., Hauser, P., & Hirshorn, E. (2013). Lexical processing in deaf readers: An fMRI investigation of reading proficiency. PLoS One, 8(1), e54696.

Cripps, J. H., McBride, K. A., & Forster, K. I. (2005). Lexical processing with deaf and hearing: Phonology and orthographic masked priming. Arizona Working Papers in Second Language Acquisition and Teaching, 12, 31-44.

Dehaene, S. (2009). Reading in the brain: The new science of how we read. Penguin.

Dehaene, S., & Cohen, L. (2011). The unique role of the visual word form area in reading. Trends in Cognitive Sciences, 15, 254-262.


Emmorey, K., Weisberg, J., McCullough, S., & Petrich, J. A. (2013). Mapping the reading circuitry for skilled deaf readers: An fMRI study of semantic and phonological processing. Brain and Language, 126(2), 169-180.

Emmorey, K., McCullough, S., & Weisberg, J. (2016). The neural underpinnings of reading skill in deaf adults. Brain and Language, 160, 11-20.

Ferreira, F., Bailey, K. G. D., & Ferraro, V. (2002). Good-enough representations in language comprehension. Current Directions in Psychological Science, 11, 11–15.

Glezer, L. S., Jiang, X., & Riesenhuber, M. (2009). Evidence for highly selective neuronal tuning to whole words in the “visual word form area”. Neuron, 62(2), 199-204.

Glezer, L. S., Kim, J., Rule, J., Jiang, X., & Riesenhuber, M. (2015). Adding words to the brain's visual dictionary: Novel word learning selectively sharpens orthographic representations in the VWFA. Journal of Neuroscience, 35(12), 4965-4972.

Glezer, L. S., Eden, G., Jiang, X., Luetje, M., Napoliello, E., Kim, J., & Riesenhuber, M. (2016).

Uncovering phonological and orthographic selectivity across the reading network using

fMRI-RA. NeuroImage, 138, 248-256.

Glezer, L. S., Weisberg, J., Farnady, C. O., McCullough, S., Midgley, K.J., Holcomb, P.J., &

Emmorey, K. (2018). Orthographic and phonological selectivity across the reading

system in deaf skilled readers. Neuropsychologia, 117, 500-512.

Glickman, N. S., & Hall, W. C. (Eds.). (2018). Language Deprivation and Deaf Mental Health.

Routledge.

Grainger, J. & Holcomb, P.J. (2009). Watching the word go by: On the time course of

component processes in visual word recognition. Language and compass,

3(1), 128-156.

The neurobiology of reading 32

Gutierrez-Sigut, E., Vergara-Martínez, M., & Perea, M. (2017). Early use of phonological codes

in deaf readers: an ERP study. Neuropsychologia, 106, 261-279.

doi:10.1016/j.neuropsychologia.2017.10.006

Hirshorn, E. A., Dye, M. W. D., Hauser, P., Supalla, T. R., & Bavelier, D. (2015). The

contribution of phonological knowledge, memory, and language background to reading

comprehension in deaf populations. Frontiers in Psychology, 6(1153).

Hoeft, F., Meyler, A., Hernandez, A., Juel, C., Taylor-Hill, H., Martindale, J. L., ... & Deutsch,

G. K. (2007). Functional and morphometric brain dissociation between dyslexia and

reading ability. Proceedings of the National Academy of Sciences, 104(10), 4234-4239.

Hoeft, F., Meyler, A., Hernandez, A., Juel, C., Taylor-Hill, H., Martindale, J. L., ... & Deutsch,

G. K. (2007). Functional and morphometric brain dissociation between dyslexia and

reading ability. Proceedings of the National Academy of Sciences, 104(10), 4234-4239.

Humphries, T., Kushalnagar, P., Mathur, G., Napoli, D. J., Padden, C., Rathmann, C., & Smith,

S. R. (2012). Language acquisition for deaf children: Reducing the harms of zero

tolerance to the use of alternative approaches. Harm Reduction Journal, 9(1), 16.

Koo, D., Kelly, L., LaSasso, C., & Eden, G. (2008). Phonological awareness and short-term memory in hearing and deaf individuals of different communication backgrounds. Annals of the New York Academy of Sciences, 1145, 83-99.

MacSweeney, M., Brammer, M. J., Waters, D., & Goswami, U. (2009). Enhanced activation of the left inferior frontal gyrus in deaf and dyslexic adults during rhyming. Brain, 132(7), 1928-1940.

Maurer, U., Brandeis, D., & McCandliss, B. D. (2005). Fast, visual specialization for reading in English revealed by the topography of the N170 ERP response. Behavioral and Brain Functions, 1(1), 13.

McCandliss, B. D., & Noble, K. G. (2003). The development of reading impairment: A cognitive neuroscience model. Mental Retardation and Developmental Disabilities Research Reviews, 9, 196-205.

Meade, G., Grainger, J., Midgley, K. J., Holcomb, P. J., & Emmorey, K. (2018a). ERP effects of masked neighbour priming in deaf readers. Manuscript under review.

Meade, G., Grainger, J., Midgley, K. J., Emmorey, K., & Holcomb, P. J. (2018b). From sublexical facilitation to lexical competition: ERP effects of masked neighbor priming. Brain Research, 1685, 29-41.

Meade, G., Midgley, K. J., Sevcikova Sehyr, Z., Holcomb, P. J., & Emmorey, K. (2017). Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain and Language, 170, 50-61. http://dx.doi.org/10.1016/j.bandl.2017.03.004

Mehravari, A., Emmorey, K., Prat, C., Klarman, L., & Osterhout, L. (2017). Brain-based individual difference measures of reading skill in deaf and hearing adults. Neuropsychologia, 101, 153-168.

Moreno, A., Limousin, F., Dehaene, S., & Pallier, C. (2018). Brain correlates of constituent structure in sign language comprehension. NeuroImage, 167, 151-161.

Neville, H. J., Mills, D. L., & Lawson, D. S. (1992). Fractionating language: Different neural subsystems with different sensitive periods. Cerebral Cortex, 2(3), 244-258.

Osmond, S., Winsler, K., Meade, G., Holcomb, P. J., Midgley, K. J., & Emmorey, K. (2018, August). Frequency, orthographic neighborhood, and concreteness effects in deaf readers of English: An ERP study. Poster presented at the Society for the Neurobiology of Language, Québec City, Canada.

Pallier, C., Devauchelle, A. D., & Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences, 108(6), 2522-2527.

Perea, M., Marcet, A., & Vergara-Martínez, M. (2016). Phonological-lexical feedback during early abstract encoding: The case of deaf readers. PLoS One, 11(1), e0146265.

Pugh, K. R., Mencl, W. E., Jenner, A. R., Katz, L., Frost, S. J., Lee, J. R., ... & Shaywitz, B. A. (2001). Neurobiological studies of reading and reading disability. Journal of Communication Disorders, 34(6), 479-492.

Purcell, J. J., Shea, J., & Rapp, B. (2014). Beyond the visual word form area: The orthography–semantics interface in spelling and reading. Cognitive Neuropsychology, 31(5-6), 482-510.

Rossion, B., Joyce, C. A., Cottrell, G. W., & Tarr, M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. NeuroImage, 20, 1609-1624.

Sacchi, E., & Laszlo, S. (2016). An event-related potential study of the relationship between N170 lateralization and phonological awareness in developing readers. Neuropsychologia, 91, 415-425.

Shaywitz, S. E., Shaywitz, B. A., Fulbright, R. K., Skudlarski, P., Mencl, W. E., Constable, R. T., ... & Lyon, G. R. (2003). Neural systems for compensation and persistence: Young adult outcome of childhood reading disability. Biological Psychiatry, 54(1), 25-33.

Skotara, N., Kügow, M., Salden, U., Hänel-Faulhaber, B., & Röder, B. (2011). ERP correlates of intramodal and crossmodal L2 acquisition. BMC Neuroscience, 12(1), 48.

Skotara, N., Salden, U., Kügow, M., Hänel-Faulhaber, B., & Röder, B. (2012). The influence of language deprivation in early childhood on L2 processing: An ERP comparison of deaf native signers and deaf signers with a delayed language acquisition. BMC Neuroscience, 13(1), 44.

Twomey, T., Waters, D., Price, C. J., Evans, S., & MacSweeney, M. (2017). How auditory experience differentially influences the function of left and right superior temporal cortices. Journal of Neuroscience, 37(39), 9564-9573.

Vergara-Martínez, M., Gómez, P., Jiménez, M., & Perea, M. (2015). Lexical enhancement during prime-target integration: ERP evidence from matched-case identity priming. Cognitive, Affective, & Behavioral Neuroscience, 15(2), 492-504.

Wang, X., Caramazza, A., Peelen, M. V., Han, Z., & Bi, Y. (2015). Reading without speech sounds: VWFA and its connectivity in the congenitally deaf. Cerebral Cortex, 25(9), 2416-2426.

Waters, D., Campbell, R., Capek, C. M., Woll, B., David, A. S., McGuire, P. K., ... & MacSweeney, M. (2007). Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus. NeuroImage, 35(3), 1287-1302.

Weber-Fox, C., & Neville, H. J. (2001). Sensitive periods differentiate processing of open- and closed-class words: An ERP study of bilinguals. Journal of Speech, Language, and Hearing Research, 44(6), 1338-1353.