FLORIDA STATE UNIVERSITY

COLLEGE OF COMMUNICATION

NON-MAINSTREAM AMERICAN ENGLISH AND FIRST GRADE

CHILDREN’S LANGUAGE AND READING SKILLS GROWTH

By

CATHERINE ROSS CONLIN

A Dissertation submitted to the Department of Communication Disorders in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Summer Semester, 2009

Copyright © 2009 Catherine Anne Conlin. All Rights Reserved.

The members of the committee approve the dissertation of Catherine Ross Conlin defended on May 26, 2009.

Howard Goldstein
Professor Co-Directing Dissertation

Carol Connor
Professor Co-Directing Dissertation

Stephanie Al Otaiba
Outside Committee Member

Shurita Thomas-Tate
Committee Member

Approved:

Juliann Woods, Chair, Department of Communication Disorders

Gary Heald, Dean, College of Communication

The Graduate School has verified and approved the above-named committee members.


To my husband and son, all my love forever.


ACKNOWLEDGEMENTS

The accomplishment of this dissertation and the degree it represents have been a dream of mine for twenty five years. It would not have been possible without the tremendous support and guidance of many people. First and foremost, I owe immeasurable thanks to Carol Connor for taking me on as her doctoral student in the third year of my program. Her professionalism, compassion and genuine care and concern for me as a student were unwavering. Her excellence as a researcher, educator, and mentor guided me through the challenge of completing this dissertation. She has provided me with a model of a professional scholar that I will take with me into my career and beyond. I am grateful to Howard Goldstein for his commitment to me and my program, for co-chairing my committee from afar, and offering consistent guidance and support. I am also grateful to Shurita Thomas-Tate for her expertise and thoughtful suggestions as a member of my committee, as well as numerous talks about handling the demands of graduate life. I am grateful to Stephanie Al Otaiba for her guidance and mentoring through my first teaching experience, as well as her service on my committee. While the demands of a doctoral program are many, this challenge would have been greater without the support of friends I have made along the way. In particular, I want to extend warmest gratitude to the following individuals: “The Girls” Liz Crawford, Elissa Arndt, and Kathleen Pierce, thank you for all the practice sessions, pep talks and encouraging emails, and your constant faith during the simultaneous pursuit of our dreams. I wish for each of you a career filled with happiness and satisfaction. To Laurie Swineford, thank you for your friendship and work ethic. You are a remarkable example of persevering though the challenge may be great. To Phyllis Underwood, thank you for your encouragement, and help with information and details about the ISI project. Jennifer Lucas, thank you for your help with data and details of the ISI project. To Sibel Kaya, thank you for your statistical knowledge, willingness to help, and sincere encouragement. To Stephanie Glasney and Monet Travis, thank you for your guidance with coding. For my husband Mark, your constant love and support made it possible for me to accomplish this goal. I love you forever. There are not enough words to express all that you have done to make this dream a reality for us. My love and gratitude will last a lifetime.


For my son Lucas, may it always be remembered that “Lucas help mommy writing dissertation” is proof in your five-year-old print that you did. Your very existence spurred my motivation to accomplish this degree. I wish for you a life of happiness and learning. I love you forever. To my parents for your love, support and encouragement throughout every goal I have ever pursued, but especially for valuing education and teaching me the importance of perseverance. Thank you. I love you.


TABLE OF CONTENTS

List of Tables ...... vii

List of Figures ...... ix

Abstract ...... x

1. INTRODUCTION ...... 1

2. METHODS ...... 26

3. RESULTS ...... 34

4. DISCUSSION ...... 57

APPENDICES ...... 70

REFERENCES ...... 76

BIOGRAPHICAL SKETCH ...... 84


LIST OF TABLES

Table 1. Ten prominent features of NMAE including: African American English (AAE) and Southern American English (SoAE) ...... 10

Table 2. School-wide demographics ...... 28

Table 3. Descriptive statistics for sample (n=694) by gender and ethnicity ...... 29

Table 4. Total dialect use by first grade children on the Diagnostic Evaluation of Language Variation Screening Test (DELV-S) ...... 34

Table 5. Percentage of use of DELV-S Phonological Items by ethnicity from fall to spring...... 36

Table 6. Percentage of Dialect Variation within ethnic groups ...... 37

Table 7. Percentage of Dialect use by ethnicity and gender ...... 37

Table 8. Percentage of phonological feature use in the fall, by gender ...... 39

Table 9. Percentage of use of DELV-S Morphological Items by ethnicity from fall to spring...... 40

Table 10. Percentage of morphological feature use in the fall, by gender ...... 41

Table 11. HLM Descriptive statistics for modeling change in DVAR from fall to spring ...... 43

Table 12. Three level model of change in DVAR showing estimation of fixed effects (with robust standard errors) ...... 45

Table 13. Intraclass Correlations (ICC) and Descriptive Statistics for language and literacy Outcomes ...... 46

Table 14. Two level model of AK showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring Academic Knowledge (AK) w score ...... 48

Table 15. Two level model of PV showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring Picture Vocabulary (PV) w score ...... 49

Table 16. Spring PV and school SES outcomes with morphological feature use (with robust standard errors). The outcome variable is Spring Picture Vocabulary (PV) w score ...... 50


Table 17. Two level model of Risk for Language Disorder (DELV-S Part II scores) showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring DELV-S Part II Score ...... 51

Table 18. Two level model of LW showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring Letter Word Identification (LW) w score ...... 51

Table 19. Spring LW outcome with morphological feature use (with robust standard errors). The outcome variable is Spring Letter Word Identification (LW) w score ...... 52

Table 20. Two level model of PC showing estimation of fixed effects (with robust standard errors) ...... 53

Table 21. Spring PC outcome with morphological feature use (with robust standard errors) ...... 54

Table 22. Descriptive statistics for two level model of treatment and DVAR ...... 55

Table 23. ISI treatment and DVAR interaction (with robust standard errors). The outcome is Spring Letter Word Identification (LW) w score ...... 56


LIST OF FIGURES

Figure 1. Frequency of NMAE use by ethnicity ...... 35

Figure 2. Frequency of NMAE use by gender and ethnicity ...... 39

Figure 3. Use of features by ethnicity ...... 42

Figure 4. Change in dialect use from fall to spring ...... 44

Figure 5. Rate of change by school context and ethnicity ...... 45


ABSTRACT

The evidence of a general achievement gap, and more specifically a reading gap, between African American students and White students is a well documented and alarming phenomenon (Chatterji, 2006; Darling-Hammond, 2004, 2007; Darling-Hammond, Holtzman, Gatlin & Heilig, 2005; Fishback & Baskin, 1991; Jencks & Phillips, 1998; Haycock, 2001; Ladson-Billings, 2006; Lindo, 2006). Factors such as unequal access to high quality schools, negative teacher attitudes, test bias, and other possible sources of inequality have been suspected as causes of the achievement gap (Darling-Hammond, 2004; Goodman & Buck, 1973). Research is equivocal on which factors contribute most to the difficulty many African American children have matching the performance levels of their White peers. However, new theories on children's use of non-mainstream American English (NMAE) as it relates to their achievement are emerging. Therefore, the purpose of this study is to investigate the language and literacy skills of students who use non-mainstream American English, to better understand the mechanisms influencing achievement and to investigate possible causes that may be contributing to the continuation of this disparity. Hierarchical linear modeling and descriptive statistics were used to analyze the data. Three main findings emerged: (1) the results on Part I of the DELV-S, which identifies variation from mainstream American English (MAE), generally follow patterns observed in the extant literature (e.g., African American children are more likely to use a dialect that varies from MAE than are White children); (2) children who speak NMAE in first grade generally use fewer of the phonological and morphosyntactic features of their dialect at the end of first grade than they did in the fall; and (3) children who use fewer NMAE features (or more MAE features) in the fall of first grade tend to show greater literacy skill gains than do children who use more NMAE features. The results of this study help build convergence toward understanding the relationship between children's use of NMAE and their language and literacy development and achievement. Specifically, this study adds to the growing literature base that supports children's linguistic flexibility as the most likely theory elucidating the complexity of language and reading development. In addition, it offers possible reasons for the difficulty encountered by NMAE-speaking children learning to read.


CHAPTER 1

INTRODUCTION

Much has been written about the language and literacy skills of African American children and the long-standing disparity in test scores and school success that characterizes their performance in core academic subjects relative to their White peers. Often called the achievement gap (Chatterji, 2006; Darling-Hammond, 2004, 2007; Darling-Hammond, Holtzman, Gatlin & Heilig, 2005; Fishback & Baskin, 1991; Haycock, 2001; Jencks & Phillips, 1998; Ladson-Billings, 2006; Lindo, 2006), this persistent and perplexing difference in academic success has been attributed to a number of causes, discussed more fully below, that have been conjectured and tested. Among these are students' socioeconomic status (SES), because minority students are much more likely to live in poverty than their majority peers (McLoyd, 1998; Neuman, 2008); inequities in educational opportunities associated with under-funded and under-resourced schools (Darling-Hammond, 2004); and children's use of dialects that do not match the mainstream American English (MAE) used in most schools in the US (Craig, Connor & Washington, 2003; Terry, Connor, Thomas-Tate, Love, in press; Washington, 2001). The purpose of the proposed study is to examine first grade children's use of dialects that differ from MAE (non-mainstream American English, NMAE) as it relates to their language and reading development in the context of school SES and instruction.

The Achievement Gap

Whereas some researchers make a distinction between the achievement gap in general and the literacy gap in particular (Jeynes, 2007), the difference in performance on academic measures, whether broad or specific to literacy, results in the same outcome of underachievement and diminished success for African American students compared to White students. Thus, the term achievement gap can be used broadly to encompass a disparity in literacy skills specifically, or in school performance more generally. Some date the research on the achievement gap back to the 1950s, when the Supreme Court case of Brown vs. Board of Education challenged the legalized segregation of public schools and placed the issue of race and class in education prominently in the public arena (Darling-Hammond, 2007; Ladson-Billings, 2006). Social scientists committed to equality in education and concerned about the historical disparity in achievement among poor African American children attending mostly urban

schools, and more affluent White children attending suburban classrooms, began their investigations in earnest to find the cause of this inequality. Others place the origin of the controversy much farther back, contending that African Americans and Native Americans were systematically excluded from education by the White settlers who colonized the territory of what would become the United States (Darling-Hammond, 2004; Fishback & Baskin, 1991; Ladson-Billings, 2006). From that time until just after World War II, the education of minority students was considered unimportant, and even a threat to the economic stability of society (Darling-Hammond, 2004; Ladson-Billings, 2006). In fact, several researchers argue that the achievement gap is an unavoidable consequence of a more widespread problem than a persistent difference between scores. Darling-Hammond (2004, 2007) and Darling-Hammond et al. (2005) have observed that well-trained and experienced teachers, evidence-based curricula, small class size, up-to-date textbooks, and sufficient supplies have been associated with higher student achievement levels. She has also corroborated the findings of other researchers who have noted that these higher quality schools are more often found in suburban districts, where mostly White students attend school, than in urban settings dominated by minority students (Connor & Craig, 2006; Craig & Washington, 2006; Haycock, 2001; Zephir, 1999). This disproportionate distribution of wealth is a symptom of what Ladson-Billings (2006) refers to as the education debt, a much broader problem encompassing historical, economic, sociopolitical, and moral components that, combined, produce the deeply entrenched inequality we see today between African American and White students. She claims that we must continue research to address this pattern of general inequality because such patterns perpetuate the achievement gap and because it is the moral, socially just course of action. Even though the democratic principles the nation was founded upon prevailed, and minorities were granted educational access, their academic achievement was not commensurate with that of the majority (Jencks & Phillips, 1998). With national student testing beginning in the late 1960s, and excepting a few sporadic periods of progress in the 1970s and 80s (Jencks & Phillips, 1998; Lee, 2002), it has been observed that non-Asian minorities continue to perform below their majority White peers in subjects such as reading, math, science, social studies, writing, and geography when measured on the National Assessment of Educational Progress (NAEP) standardized tests (Darling-Hammond, 2004; Darling-Hammond et al., 2005; Haycock, 2001; Jencks & Phillips, 1998; Lee, 2002).


Statement of Problem

In the most recent assessment of national, state, and local achievement, the Nation's Report Card indicates that all students scored higher in 2007 than in previous years; however, these improvements did not result in closing the achievement gap between African American and White students (NAEP, 2007). Furthermore, this rise in overall scores masks the reality of a continued trend of minority students demonstrating weaker performance in all areas compared to White students. Not only did the improvement result in unequal performance between African Americans and Whites in general academic subjects, but in the area of reading in particular African Americans and Whites scored dramatically differently, with Whites outperforming African Americans by 13%. This represents a 27 point gap in scores, or the difference between reading at the Basic level versus the Below Basic level (NAEP, 2007). This is an important distinction because, when considering the ability to function in school, compete for jobs, and ultimately contribute to society, proficient language and reading abilities are fundamental skills required for virtually all academic areas. The Basic achievement level on NAEP standardized tests represents only partial mastery of the subject. Therefore, whereas gains have been made on NAEP scores for African Americans, this group of students is still not achieving even partial mastery of reading skills. The evidence of a general achievement gap, and more specifically a reading gap, between African American and White students is not debated. Questions of equal access to high quality schools, however, and other possible sources of the achievement gap, remain. Certainly, poverty is one factor known to correlate with poorer academic performance (Neuman, 2008). Children who qualify for free and reduced-price lunch in school, a measure of family income level, have not made gains in core subjects since 2005. By contrast, their peers who do not qualify because of higher economic status demonstrated higher scores overall (NAEP, 2007). Neuman (2008) reported that "poor children, on average, do not perform well in school…the average cognitive scores of children at age 4, in the lowest socioeconomic groups were 60% below those of children from middle- and upper middle-income families" (p. 1). Significantly, African American children are disproportionately represented in the low SES strata (Craig, 2008; U.S. Census Bureau, 2006); thus the risk of negative influence from poverty is potentially compounded for this population, who may have additional issues of dialect or race impeding them. Nevertheless, research is equivocal on which factors contribute most to the difficulty many African American students have matching the performance levels of their White peers. However, new


theories on how the use of non-mainstream American English might influence achievement and contribute to the achievement gap are promising. Therefore, the purpose of this study is to investigate the language and literacy skills of students who use non-mainstream American English, in order to better understand the mechanisms influencing achievement and to contribute to the literature on possible causes of the continuation of this disparity.

Factors Contributing to the Achievement Gap

The literature converges around several suspected causes of the achievement gap: negative teacher attitudes toward African American students, particularly those who speak non-mainstream dialects (Goodman & Buck, 1973; Labov, 1995; Strickland, 2001; Washington & Craig, 2001, to name a few); family socioeconomic status and the associated disadvantages it brings (Charity, Scarborough, & Griffin, 2004; Craig & Washington, 2004b; Terry et al., in press); poorer quality schools for African American students or students from lower SES backgrounds (Darling-Hammond, 2007; Darling-Hammond et al., 2005; Haycock, 2001; Strickland, 2001); and test bias (Charity et al., 2004; Cutting & Scarborough, 2006; Washington, 2001; Willis, 2008).

Negative Attitudes toward Non-Mainstream American English (NMAE). Several researchers have made the case that African Americans who speak a variation of mainstream or general school English perform worse on measures of reading than do their general English speaking peers (Cross, DeVaney & Jones, 2001; Goodman & Buck, 1973; Labov, 1995). The reason posited for this difference, these researchers claim, is not so much the structural and linguistic features of the dialect itself, but rather the fact that it is different from the mainstream American English (MAE) spoken in school. In other words, there is an inherent bias against the dialect, and teachers encountering it are more likely to think negatively of the cognitive and linguistic abilities of its speakers than they are of students speaking MAE. Goodman and Buck's (1973) classic paper discussed this phenomenon, rejecting the notion that any difficulty learning to read or performing on measures of reading was due to contrasts between MAE and the dialect of African American students, African American English, which is one type of NMAE used by many students and adults who are African American. Instead, Goodman and Buck countered that the cause was simply and straightforwardly "linguistic discrimination" (p. 455). Around the same time, Labov (1969) supported this hypothesis with his own observations that children speaking variations of NMAE would be subjected to misunderstanding and misjudgment by teachers who perceived their cultural dialect as incorrect usage of MAE. He called this a "cultural conflict between the vernacular culture and the classroom" (p. 43). The posited result of

this ignorance on the part of the teachers was negative perceptions of the students speaking dialect and, consequently, decreased expectations and negative judgments about their performance. Cross et al. (2001) conducted a study examining the attitudes of pre-service teachers toward the intelligence, personal character, education level, and social status of five adult readers who were NMAE users, specifically Southern English or African American Vernacular English (AAVE) speakers, and one MAE speaker, whose dialect is referred to as "Network or broadcast" dialect in that study. The results indicated that White pre-service teachers rated the White speakers of Southern English higher on perceptions of intelligence, personal character, education level, and social status than they did African American speakers of AAVE. By contrast, the African American raters judged the AAVE speakers higher than the speakers of Southern English. The researchers concluded that dialect and race play a role in individual perceptions of personal characteristics. The unsurprising, but nonetheless distressing, additional finding from this study is the continued preference found for Network dialect (i.e., MAE). Regardless of race, all the listeners rated intelligence, personal character, and social status/ambition higher for the single White speaker of MAE. It is important to extrapolate this last finding and expand on its implications further. What these authors refrained from inferring from these findings is that race may play a larger role than simply favoring the speech of one's own ethnic or racial group. It may be that MAE is favored by both groups in this study because, as Wolfram and Schilling-Estes (2006) have pointed out in their research, individuals, whether consciously or not, elevate the language of the perceived dominant group simply because it is dominant. Thus, AAE may be socially stigmatized because it is the language of a historically, socially subordinated group in the United States. Therefore, it becomes increasingly difficult to view findings like those in the Cross et al. (2001) paper as reflecting anything other than racial bias, in light of other research showing that AAE and Southern American English (SoAE), as well as MAE, share many features. Southern American English. While distinctions have been made in the literature between SoAE, also called Southern White English, Southern African American English (SAAE) (Oetting & McDonald, 2001, 2002), and AAE, recent research is converging around SAAE and AAE being more or less the same dialect with few subtle differences between them (Oetting & Pruitt, 2005; Wolfram & Schilling-Estes, 2006). Similarly, AAE and SoAE have been recognized as closely related, with many shared features, including the reduction of the diphthong /ai/ to a monophthong ("coil" becomes "coal"), the merger of the short vowels /e/ and /i/ before nasals ("fence" becomes "fince"), vowel shifts where words like

"sit" sound like "seat", and metathesis of fricatives with stop consonants such that "ask" becomes "aks" (Anderson, 2002; Charity, 2008; Oetting & McDonald, 2001, 2002). Additionally, Oetting and Newkirk (2008) found that there is no difference in the use of subject relative clauses (introduced by use of the wh- pronouns who, whom, whose, and that) by speakers of AAE or SoAE. So similar are the spoken aspects of these dialects that when one study identified 17 patterns of AAE, another found 11 of the same patterns in a sample of SoAE speakers (Oetting & McDonald, 2002). What is important with regard to research samples that include children who speak AAE, SoAE, and MAE, such as the current study, is that despite the overlap among these dialects, it is the non-overlapping or contrastive (Seymour, Roeper & de Villiers, 2003) features that render the dialect socially unfavorable and may reflect a level of racial discrimination rather than linguistic interference when a difference in performance is found between groups of speakers (Terry et al., in press; Wolfram & Schilling-Estes, 2006). For AAE, the distinctive forms of multiple negatives ("he didn't do nothing"), as well as the use of regular past tense verbs in place of irregular ("he knowed it") or the use of "ain't" instead of "no" or "not", carry greater stigma than do SoAE forms such as metathesis of the final /s/ in words like ask (becomes "aks") (Terry et al., in press; Wolfram & Schilling-Estes, 2006). In addition to these examples, the predominant difference noted between AAE and SoAE is in the density of dialect usage, with AAE exhibiting a much higher frequency of the same features than SoAE (Oetting & McDonald, 2001; Wolfram & Schilling-Estes, 2006; Wright, 2003). For example, the omission of the /s/ marker with third person singular subjects ("she go" for "she goes" or "he jump" for "he jumps") is present in both AAE and SoAE, but in AAE it is present a majority of the time (some say 85% or more), while in SoAE the omission is less common (around 4% of the time) (Oetting & McDonald, 2001; Wolfram & Schilling-Estes, 2006). Similarly, the absence of the verb "be" where it could be contracted (e.g., "you mad at me") is present in both AAE and SoAE varieties, but with a much greater rate of use in AAE (Wolfram & Schilling-Estes, 2006, p. 215). So, although there are apparent and documented similarities between the AAE and SoAE vernaculars, the greater social stigma applied to AAE over SoAE is undeniable. And the only remaining explanation for the partiality displayed toward SoAE, which is predominantly spoken by White children, over AAE, which is predominantly spoken by African American children, would be racial prejudice. Further testament to the possible bias and negative influence of speaking AAE or other non-standard varieties of MAE exists in a recent meta-analysis by Tanenbaum and Ruck (2007). These


authors verified that teachers' negative attitudes toward African American students appear to persist. They examined referral rates for special education or other "negative assignments," bias toward weaker performance expectations, and differences in teacher behavior. Teachers were found to systematically display higher expectations and use more positive speech (praise or encouragement) toward European American students than toward African American and other minority students, and to make more referrals of minority students for disciplinary action and remedial services. Because of the reality of possible bias and racial judgment being applied to children who speak dialects that differ from MAE, it is critical that further research explore the context in which these factors come together, the learning environment of the language arts classroom, in an effort to uncover and eventually eradicate these negative influences on academic achievement. Socioeconomic Status (SES). Some have argued that lower income level is an indirect contributor to students' academic achievement because of related factors associated with family economics such as limited access to literacy materials, caregiver education level, poor quality healthcare, and numerous absences from school (Snow, Porche, Tabors, & Harris, 2007). But the evidence on this issue is far from definitive. Some recent research (Craig, 2008; Craig & Washington, 2004b; Lee, 2002) indicates that the relative income of African Americans compared to Whites does not seem to be an adequate explanation for the lower literacy scores of African American students. Lee (2002) demonstrated that the achievement gap, defined as the percentage of students attaining a high school education, narrowed in the 1980s despite a corresponding widening of the socioeconomic gap between the two groups in the late 1980s. Furthermore, when the high school graduation gap widened again in the 1990s, income levels between African Americans and Whites were closer than in previous decades. Craig (2008) investigated the role of low socioeconomic status in the academic achievement of African American students and concluded that while poverty plays a role in the poorer performance of more African American students than White students, it is not a sufficient explanation for the Black-White gap in achievement. Connor, Son, Hindman and Morrison (2005) also found that children from low SES homes were more likely to attend poor quality preschools, to have teachers who were less warm and responsive, and to experience lower quality classroom environments. Importantly, however, SES was not related to time spent in academic subjects, which was equal to that of higher SES students. Meanwhile, Sirin (2005) recently completed a meta-analysis to update the findings of earlier reviews, in which he reflected on the disagreement in the literature about the role SES plays in academic achievement. In the background to his study, he cited previous research, some attesting to a strong association between income and achievement and others finding no significant relation at all (see for

example Seyfried, 1998, and Sutton & Soderstrom, 1999, for contrasting views). His findings revealed a moderate association between SES and school achievement at the student level and a stronger relationship at the school level, but the practical effects of this relationship were moderated by several factors that ultimately tempered the impact of the results. Individual characteristics such as gender, race, and dialect use, along with methodological aspects such as the size of the analyzed unit (aggregated versus student level data), inconsistent definitions of SES, and varying student measures of achievement, all changed the size of the effect. While SES seems to contribute to students' academic achievement, the means by which it does so are not clear from the existing research. Reinforcing the complex picture of family income and its relation to children's school success is the work of Horton-Ikard and Miller (2004) and Horton-Ikard and Ellis Weismer (2007), who have focused on the language and learning skills of African American children from middle SES homes. Their research has shown that African American children who speak AAE are not exclusively from low SES environments. In their 2007 study, Horton-Ikard and Ellis Weismer observed that middle SES African American children, particularly boys, had dense dialect patterns in their speech during narrative storytelling, and these children performed better than their low SES African American peers on a receptive vocabulary measure, but with a maximum score at the 57th percentile compared to the norm. These findings seem to call into question a clear relation between low SES and the achievement level of African American students, suggesting perhaps that dialect may be a more powerful predictor than poverty. Overall, this research suggests the need to examine factors beyond economic level in order to better understand the academic capability of African American children, whether they are raised in poverty or relative affluence. Finally, even though Craig and Washington (2004a) conceded that African American children are more than three times as likely to live in poverty, they argue that using economic status to explain the academic performance of African American students is problematic because SES itself is a complex and dynamic construct that is difficult to capture. Furthermore, they point out that while SES can help explain performance differences upon school entry, it does not account for the continuing discrepancy in scores after students have been enrolled for a time. Research indicates consistently that the achievement gap grows wider the longer African American children are in school (Entwisle & Alexander, 1993). SES, therefore, seems unreliable as an explanation and warrants investigation of other contributing factors. Non-Mainstream American English (NMAE). One potentially important area of investigation, which has also been a suspected contributor to the low academic performance of African


American students, is the NMAE dialect spoken by most African American children (Craig & Washington, 2002, 2006). Known variously as African American Vernacular English, Black English, or Ebonics (Charity et al., 2004; Craig, 2008; Fogel & Ehri, 2006; Labov, 1995; Terry et al., in press; Washington & Craig, 2001), the term African American English (henceforth AAE) is generally accepted as the most encompassing term (Charity, 2008) and will be used to refer to the NMAE dialect spoken by some of the first grade children in the present study. AAE was heavily investigated in the 1970s and 80s after a court ruled in the "Ann Arbor" decision that the use of AAE constituted a disadvantage for African American students, which schools were required to address (Craig & Washington, 2006; Thompson, Craig & Washington, 2004). The research of this time was based on the adult forms of AAE and consequently produced inconsistencies in the findings, leaving it unclear whether an association between AAE dialect and school performance existed (Terry, 2006). A renewed interest in the possible association between AAE and literacy skills for African American children has recently developed since researchers have been able to more completely identify the features of this dialect in order to study its effects in school-age children (Connor & Craig, 2006; Washington & Craig, 2002). Characteristics that have been consistently observed by many researchers include the following: AAE is spoken more by boys than girls (Craig, Thompson, Washington & Potter, 2003); it is spoken more by younger children, with decreasing amounts as children advance in age and school experience; it is produced more by children in lower SES environments than affluent ones (Craig, Connor et al., 2003); and it varies by linguistic context (i.e., AAE is produced more in natural, discourse-like contexts than in highly structured academic tasks) (Charity et al., 2004; Craig & Washington, 2004a). Furthermore, specific details of the phonological, lexical, and morphosyntactic aspects of AAE have been described in studies by Green (2002a) and others (Connor & Craig, 2006; Craig, Thompson, et al., 2003; Washington & Craig, 2002). For example, Craig, Thompson, et al. (2003) listed nine phonological (e.g., "g dropping", "substitutions for /θ/ and /ð/") and 24 morphosyntactic (e.g., "ain't used as a negative auxiliary in have+not, do+not, are+not, and is+not constructions", "completive done") features used in AAE. In order to represent the complexity and sophistication of the dialect beyond what the usual "feature lists" (such as the above) illustrate, Green (2002a) discussed how particular features of the dialect overlap with MAE, but the application of the features is what sets it apart as AAE. For example, in MAE the consonant cluster ft at the end of gift is always produced, yet in AAE it is optional.


Table 1: Ten prominent features of NMAE including: African American English (AAE) and Southern American English (SoAE).

Feature | Dialect | Researcher | Research findings | Example of feature
*Final consonant cluster reduction | AAE | Craig, Washington & Potter (2003) | 37% of sample (n=64) | "I see a gif near the baby"
*Substitution of /th/ | AAE | Craig, Washington & Potter (2003) | 45% of sample (n=64) | "I see a fish breave"
*Subject Verb Agreement, also known as omission of third person plural | AAE, SoAE | Washington & Craig (2002); Oetting & Pruitt (2005) | Use in 86-100% of samples (n=28); 100% of 4-6 yr. old sample (n=24) | "she go" for "she goes" or "he jump" for "he jumps"
*Use of have/got | | Oetting & Pruitt (2005) | 38% of 4-6 yr. old sample (n=24) | "the girl have a little kite"
*Use of don't/do not | | Oetting & Pruitt (2005) | 71% of 4-6 yr. old sample (n=24) | "he don't like it"
Zero Copula | AAE | Oetting & Pruitt (2005); Washington & Craig (2002) | 100% of 4-6 yr. old sample (n=24); use in 86-100% of samples (n=28) | "you mad at me"
Multiple negatives | SoAE, AAE | Oetting & Pruitt (2005) | 79% of 4-6 yr. old sample (n=24) | "he didn't do nothing"
*Regularized past tense was/were | SoAE, AAE | Oetting & Pruitt (2005) | 92% of 4-6 yr. old sample (n=24) | "when we was at the store"
Omission of past tense marker | SoAE, AAE | Oetting & McDonald (2001); Oetting & Pruitt (2005) | 36% of 6 yr. old sample (n=31); 58% of 4-6 yr. old sample (n=24) | "I dress them before"
Habitual be | AAE, SoAE | Oetting & Pruitt (2005) | 67% of 4-6 yr. old sample (n=24) | "It be cold outside"

Note. *indicates features that are targeted on the Diagnostic Evaluation of Language Variation Screening Test.

It is important to note, however, that while these features are often described as if they are a corpus of required traits of the vernacular, usage varies by speaker and context, as well as along the different parameters of language (phonology, morphology, syntax, semantics, and pragmatics). That is, not all speakers are obligated to use all features, nor do they have to use the features present in their speech all the time (Wolfram & Schilling-Estes, 2006). In fact, recent research is beginning to suggest that more sophisticated users of a dialect choose when, and with whom, they will display their linguistic

preference (Connor & Craig, 2006; Craig & Washington, 2004b; Horton-Ikard & Miller, 2004; Terry et al., in press). Beyond this, other research has shown that most features are present in heavy dialect users' speech, suggesting that these speakers make the inclusion of dialectal patterns obligatory (Craig & Washington, 1994). Findings such as these could have implications for the current research study, where students' dialect usage is being measured in a structured testing situation removed from the naturalistic context of informal conversation. In such settings, one would never know if a speaker was adjusting his or her language to meet the expectations of an examiner, for example, when an examiner elicits a response to a question. Thus, it might be impossible for a measure such as the DELV-S to accurately capture a phenomenon such as code switching, when a speaker uses features of one dialect over another as appropriate for the pragmatic environment (a more complete explanation is found below). Therefore, formal measures might be limited as a means of documenting dialect variability. Research on NMAE has added to our knowledge of AAE as a language variation of MAE, but more study could be done to better understand the social and developmental impact of the dialect. Furthermore, the interrelated aspects of AAE and MAE could explain why it has been suspected as a cause of academic failure. The present study aims to inform this body of literature by adding to, and possibly clarifying, this variability of NMAE dialect usage by both African American and White students, as assessed by the DELV-S, at the first grade level. NMAE and Test Bias. The ability of African American students to perform well on academic measures designed and standardized on the majority population has been an ongoing concern of many researchers (Charity et al., 2004; Craig, 2008; Cutting & Scarborough, 2006; RAND, 2002; Seymour et al., 2003; Thomas-Tate, Washington, Craig, & Packard, 2006; Thomas-Tate, Washington & Edwards, 2004; Willis, 2008). Tests are designed in the language of the dominant culture (Oller, Kim & Choe, 2000), or school English as it has also been called (Charity et al., 2004). Furthermore, the content of the tests, including many of the questions and the pictures that aid in comprehension or prompting, is taken from the mainstream culture and context (Seymour et al., 2003). Students whose dialect and cultural upbringing vary from this are at a disadvantage when taking these tests (Champion, Hyter, McCabe, & Bland-Stewart, 2003; Craig & Washington, 2006; Seymour et al., 2003; Thompson et al., 2004; Washington & Craig, 1992, 2004; Willis, 2008). Willis' (2008) critical report on the state of reading comprehension assessment in this country detailed the history of test design and the skewed outcomes achieved by African American students compared to their White peers. Thomas-Tate et al. (2004) reinforced that appraisal with a study demonstrating the inappropriateness of a popular

phonological processing test for use with African American students. Furthermore, Washington and Craig (1992) and Craig (2008) have written about the inherent cultural bias that exists in tests such as the Peabody Picture Vocabulary Test-Revised (PPVT-R, Dunn & Dunn, 1981). A general lack of satisfactory language measures for use with culturally and linguistically diverse (CLD) populations, including but not limited to African American children, has prompted several researchers to design their own culturally fair language assessments or to resort to the use and recommendation of the single available tool designed specifically for accurate and unbiased screening of dialect variation (see the following discussion of the Diagnostic Evaluation of Language Variation Screening Test, Seymour et al., 2003). Initially, Craig and Washington (2000) saw the need for instruments that reliably captured the unique phonological features found in African American English within the range of communication abilities displayed by children in normal discourse settings. They compiled a battery of informal, criterion-referenced measures that were designed to ensure dialect features would be recognized as appropriate reflections of the language variant they represent, rather than as a disorder of language or speech. Similarly, others who were investigating the word learning skills of children from diverse cultural, socioeconomic, and language backgrounds developed an expressive assessment of word learning which also relied on dynamic assessment in naturalistic environments (Brackenbury & Pye, 2005; Burton & Watkins, 2007). Finally, dissatisfied with the insufficient tools available for accurate and efficient assessment of children's word learning and semantic knowledge, Brackenbury and Pye (2005) recommended the use of the Diagnostic Evaluation of Language Variation-Screening Test (DELV-S; Seymour et al., 2003) as the only commercially available screening test for dialectally diverse populations. Concurrent with these attempts to adequately evaluate children's use of dialect, researchers have also devised various methods for measuring the amount, or rate, of specific features present in speech (also known as density, or frequency), as well as the type of linguistic forms exhibited. The literature reveals a preponderance of type-token count measures with a ratio calculated for a percentage (Oetting & McDonald, 2001, 2002; Oetting & Newkirk, 2008; Oetting & Pruitt, 2005). Listener judgment methods have also been employed to confirm participants' dialect type (Anderson, 2002; Oetting & Garrity, 2006; Oetting & McDonald, 2002; Oetting & Newkirk, 2008). Craig and Washington (2004) and Connor and Craig (2006) have used Dialect Density Measures (DDM) to quantify the AAE tokens spoken in conversational discourse. This measure obtains a percentage of dialect used from a spontaneous language sample by dividing the number of dialect tokens by the total number of words in the sample and multiplying by 100.
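To make the density calculation concrete, the following is a minimal sketch, assuming a transcript that has already been coded for NMAE/AAE tokens, of how a dialect density percentage of this kind could be computed. It is illustrative only and is not drawn from the cited studies; the function name and the token counts in the example are hypothetical.

    def dialect_density(num_dialect_tokens, total_words):
        # Percentage of dialect use in a coded language sample:
        # (number of NMAE/AAE tokens / total number of words) * 100
        if total_words == 0:
            raise ValueError("The sample must contain at least one word.")
        return (num_dialect_tokens / total_words) * 100.0

    # Hypothetical example: 14 coded AAE tokens in a 250-word spontaneous sample.
    print(round(dialect_density(14, 250), 1))  # prints 5.6, i.e., 5.6% dialect density

The DVAR measure introduced in the next paragraph is similar in spirit, but it is computed from children's responses to the fixed set of DELV-S items rather than from a free speech sample; its full definition is given in the Method section.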


Each of these researcher-devised methods is appropriate for use in naturalistic settings where monologues and dialogues are taking place with the participants. The more formal and structured setting of a test format warrants a different kind of measurement that accounts for the known items and predicted responses. The percentage of Dialect Variation (DVAR) was created by Terry et al. (in press) to accommodate data obtained from the DELV-S and is used in the current investigation. A more complete description of this measurement tool is found in the Method section of this study. The Diagnostic Evaluation of Language Variation-Screening Test (DELV-S). Responding to the need for a valid measure of dialect distinction, Seymour et al. (2003) created the only dialect sensitive measure available for use with culturally and linguistically diverse populations (Craig & Washington, 2006). The DELV-S is a screening test designed to be used with children who speak MAE and variations of MAE (NMAE in this study) such as AAE, Southern English (SoAE), Cajun, and Creole. Its main purpose is to distinguish language variations attributable to true linguistic dialect use from those exhibited by children with disordered language skills, regardless of dialect. The test authors observed the well-documented trend of over-representing minority children, particularly those who speak NMAE, on the caseloads of special educators and speech-language pathologists when no language disorder was present (Oller et al., 2000; Seymour et al., 2003). This test was created as a preliminary antidote to this problem. The measure is uniquely designed to avoid linguistic bias by identifying linguistic differences that are specific to NMAE and distinct from clinical language disorders. In designing the test, the researchers based item selection on language sample research from children who speak predominantly AAE, but also other varieties of NMAE including Appalachian English, Cajun, Hispanic English, and Southern English. Pilot research then indicated which phonological and morphosyntactic features were contrastive, or best discriminated NMAE speaking children from MAE speaking children, as well as those features that were shared, or non-contrastive, and thus would be expected to be found in the language samples of typically developing children. In particular, the Language Variation Status section, Part I of the DELV-S, includes five phonological items found to consistently elicit NMAE responses from NMAE speakers. These items target the substitution of [f] for [th] (as in "teef" for teeth), the substitution of [v] or [d] for [th] (as in "smoov" for smooth and "breade" for breathe), and elimination of final consonant clusters, also known as "zero-cluster element" (as in "gif" for gift) (Seymour et al., 2003, p. 33). The 10 morphosyntactic items in Part I highlight four morphosyntactic features of NMAE that are contrastive with MAE: specifically, the variable use of have/has (e.g., "the girl have/has a big kite"), omission of the third person present tense

marker as in "the girl always sleep" (s omitted), use of don't in place of doesn't as in "this boy don't like to swim", and the use of a plural subject in the context of the copula "be" or its auxiliary "were" as in "they was dirty". In addition, the screening test includes a non-word repetition task that relies upon memory processing rather than linguistic processing, thus limiting the bias against speakers of NMAE (Seymour et al., 2003). Part II of the DELV-S measures children's risk for language disorder through the assessment of morphosyntactic knowledge and linguistic processing ability (via wh- questions, use of verbs, and nonword repetition). Definitive diagnosis of language status would be determined through the use of the Norm Referenced or Criterion Referenced tests by the same authors (retrieved May 28, 2009, from http://pearsonassess.com/HAIWEB/Cultures/en-us/Productdetail.htm?Pid=015-8092-112&Mode=resourceSeymourtest). Furthermore, because test bias can also result from norming samples that are disproportionately representative of the dominant White culture, the DELV-S was standardized on a sample of predominantly NMAE speaking African American individuals (Seymour et al., 2003). Reliability is based upon the consistency of clinical decisions obtained from Part I items, because the scores on Part I of the DELV-S place children in clinically relevant categories rather than assigning a numerical score. Further, it is known that clinical decisions may be confounded by examiner bias; thus a study was conducted to determine the level of consistency between five pairs of examiners, one African American and one White examiner in each pair (Seymour et al., 2003). Twenty-five children who were African American and speakers of NMAE, ranging in age from four to six years, were tested once by each examiner in a pair. Of these children, 72% were classified exactly the same way by both examiners; 20% had different levels of language variation assigned to them, but all were recognized as speakers of NMAE dialect. From this evidence, the authors concluded that racial bias will have very little if any effect on the classification of a child's language status (Seymour et al., 2003). Ironically, despite its uniqueness, none of the researchers mentioned above, who cite the DELV-S in their manuscripts and freely recommend its use, have reported their own findings from studies employing this measure. Thus, inadequate information exists on the DELV-S as a preliminary tool for dialect identification and language status determination. What is more, it is acknowledged that as a screening measure it is limited in its capacity to identify the presence of a dialect on its own; nevertheless, the authors maintain that if used in conjunction with other language assessments, the DELV-S enables a clear determination of dialect status (Seymour et al., 2003, p. 59). While the consistency data seem to support a moderate level of reliability, the inability to correctly identify 20% of

the children's level of dialect density (whether a child is using some or strong variation from MAE) is problematic. As discussed, frequency of dialect use may be a critical factor influencing negative perceptions held by teachers and others judging speakers of NMAE dialects. Therefore, the inability to accurately pinpoint the level of language variation represents a significant, and not dismissible, source of error in the DELV-S's capacity to categorize dialect use in a meaningful way for educational and research purposes. Furthermore, because the screening test authors are reluctant to market their test as a measure for identifying AAE, despite the almost entirely African American sample of children upon which the test was piloted, it is all the more problematic that this screener cannot more accurately specify the density of language variation. Thus, further research is warranted to evaluate the DELV-S as a measure of dialect use. Prior to 2006, no study had been published on the strength of the DELV-S to capture NMAE use, or on its ability to screen for language disorder and dialect. Spaulding, Plante, and Farinella (2006) were the first to publish a study investigating the accuracy of the DELV-S in screening for language impairment. They compiled data from 43 commercially available tests of varying language measures in order to determine "the magnitude of differences between language-impaired and matched, typically developing groups" (p. 63). Examining the mean differences between scores among the various tests, they then calculated the difference in group performance. Their results showed that the DELV-S (among other tests) was marginally accurate in identifying language impairment. The DELV-S was within the group of tests that reported an average mean group difference of -1.34, thus identifying 43% of all children as language impaired within 1 standard deviation of normal (pp. 66-67). In other words, the children who were categorized as language impaired were less than one full standard deviation away from normal, that is, from children not categorized as language impaired. These results seem to call into question the ability of the DELV-S to distinguish children with language impairment from those without. But it is important to remember that the measure is a screener, and as such, is not meant to clearly identify language impairment but rather to provide initial information on language status in order to avoid needlessly testing children who are clearly typical language users. The study by Spaulding et al. (2006) used the manuals supplied with each test and the sample scores contained therein to infer and draw conclusions about the accuracy of the measures being examined. Additionally, the DELV-S, a measure standardized on samples representative of CLD populations, was compared to other tests (e.g., the Test of Language Development-P, Newcomer & Hammill, 1997, and the Clinical Evaluation of Language Fundamentals-Preschool, Wiig, Secord, & Semel, 1992) that are


normed on samples representing the national demographic as a whole and are known to be biased against children who speak NMAE varieties (Gutierriez-Clellen & Simion-Cereijido, 2007). Thus, as the single study to date, and one with clear methodological shortcomings, the investigation by Spaulding et al. (2006) offers scant empirical evidence, and further evaluation of the DELV-S is required. The present study is the first to actually use the DELV-S with a sample of children and to examine its ability to initially capture their dialect status. Even though steps can be taken to create valid tests or to avoid psychometrically biased assessment measures, the issue of fairness in testing highlights the greater problem that linguistic dialects are largely ignored in mainstream educational settings, and this could have ramifications beyond testing to success in school and life in general. Studying the performance of students who use NMAE could potentially give us information that would inform professional development or teacher training programs in education. Indeed, the DELV-S is one of the principal assessments investigated in the proposed study. NMAE and academic failure. Understanding that the NMAE dialects of AAE and SoAE share similar phonological, semantic, and morphosyntactic features with MAE reinforces the notion of similarity between them. It could be precisely this overlapping nature of AAE with MAE that causes problems with its use (Charity, 2008; Green, 2002a). In her descriptive study of AAE, Green noted that "owing to these similarities, it is easy to imagine that AAE speakers are unsuccessfully attempting to use mainstream English [MAE]" (p. 688). Charity (2008) echoed these thoughts with her observation that "variation in the school lexicon can cause misunderstandings between students and teachers…AAE speaking students may be unsure about what the teacher means and thus compliance with his or her command may be delayed or not completed at all" (p. 35). Researchers recognize the importance of understanding the relation between NMAE dialect and academic achievement, although findings to date have been inconclusive (Terry et al., in press). Three theories appear in the literature to explain how dialect might interfere with school success. The first is known as the teacher bias theory. Though AAE is recognized as a valid, systematically rule-governed, and effective variant of MAE (ASHA, 2003; Craig & Washington, 2006; Labov, 1995), some teachers view it as an incorrect form of MAE and consequently penalize AAE speakers for using their cultural language in discourse and writing (Connor & Craig, 2006; Fogel & Ehri, 2006; Goodman & Buck, 1973; Labov, 1995). The second, the mismatch hypothesis, it is argued, explains the difficult acquisition of oral and written language skills that African American students experience in school (Labov, 1995). Noting the phonological, syntactic, and morphological differences between NMAE and


MAE, researchers posit that speakers of NMAE are confronted with a dramatic incongruity in linguistic code when attempting to read textbooks or other materials written in MAE, or when listening to instruction by teachers in the dominant discourse (Charity, 2008). Labov (1995) wrote a classic chapter on the ramifications of the mismatch between NMAE (specifically AAE) language features and those of MAE. So convinced was he of the impediment created by the dialectal difference that he advocated the use of primers written in AAE, in a program called Bridge, as a solution to the difficulty. More recently, Terry (2006) lent further support to the mismatch hypothesis. She investigated the spelling skills of first through third graders who spoke AAE compared to the spelling skills of students who spoke MAE. She found that MAE speakers consistently spelled better than AAE speakers and demonstrated a better understanding of inflected morphological forms. Terry concluded that the difficulty and the types of errors recorded support the interference of dialect with learning and a general mismatch between NMAE and MAE forms. Despite these observations, others reject the mismatch theory as a fully satisfactory explanation for the influence of dialect on learning, citing instead the newest evidence from dialect research as support for a potentially more plausible third hypothesis (Charity et al., 2004; Connor & Craig, 2006; Terry et al., in press). Labeled variously as dialect awareness (Charity et al., 2004) or dialect shifting (Connor & Craig, 2006; Craig & Washington, 2004a; Charity et al., 2004), this term refers to the ability to switch from the dialect of cultural upbringing to the dialect of the majority culture (MAE) (Thompson et al., 2004). Most recently, linguistic flexibility (Scarborough, Terry & Griffin, 2007; Terry et al., in press; Terry & Scarborough, in press) is the term that has been used to refine this concept further. Linguistic flexibility refers to the metalinguistic/pragmatic skill underlying the ability to dialect shift. Some researchers perceive an interrelationship between the linguistic sophistication and the requisite metalinguistic awareness that children who are low income and dense users of dialect must possess in order to alternate between dialects (Craig & Washington, 1994; J. Washington, personal correspondence, February 3, 2009). Thus, dialect shifting and linguistic flexibility are distinct, yet not mutually exclusive. Washington considers dialect shifting "an overt manifestation of linguistic flexibility" (personal correspondence, February 3, 2009, e-mail). This study explores these concepts further in order to expand the evidence base for these theories. Dialect shifting/linguistic flexibility. A new hypothesis is supplementing these older theories for explaining the connection between dialect and literacy development. In the most recent literature, researchers suggest that students who speak AAE are acquiring the language of the classroom and using

less and less of their cultural dialect, a process sometimes called code switching or dialect shifting, in order to succeed academically (Connor & Craig, 2006; Craig & Washington, 2004a; Craig & Washington, 2006; Charity et al., 2004). Students who fail to adjust to the mainstream discourse of the classroom and the academic writing of textbooks and literature may have difficulty learning to read (Craig & Washington, 2004b) or generally participating in the classroom (Craig & Washington, 2004a). There are few empirical studies, however, examining this phenomenon of dialect shifting, and what evidence is available is conflicting. Charity et al. (2004) published an important paper demonstrating that AAE is negatively related to academic performance. Students who had more exposure to MAE demonstrated better outcomes on reading and language measures than did students with less exposure to and familiarity with MAE. They found, for example, that both first and second grade students were better at imitating sentences spoken in MAE than were kindergarten students, and these grammatical sentence imitation tasks were highly correlated with three reading tasks (sight word reading, decoding, and comprehension). Furthermore, they found overall that children in the kindergarten through second grade age range varied widely in their knowledge and application of MAE forms, even within the same grade. Thus, children in second grade were better at imitating sentences spoken and written in MAE than were the younger, kindergarten-age children. This would seem to support the linguistic familiarity or dialect shifting hypothesis. However, their study was originally inspired by the mismatch theory of linguistic interference, and their findings could be interpreted as consistent with this theory as well. Therefore, Charity and colleagues concluded that neither the mismatch hypothesis nor the dialect awareness theory, as they referred to it, could definitively explain the results. Connor and Craig (2006) similarly found a relationship between AAE and weaker language and literacy skills. However, instead of a negative linear relationship between AAE and language or literacy skills, these authors found a U-shaped, or more complex, relationship with these variables. Children whose speech infrequently contained elements of AAE performed better on academic measures, as did children who were heavy dialect speakers. Surprisingly, children who fell in the middle with moderate amounts of AAE dialect performed worse on vocabulary, sentence imitation, and letter and word reading tasks. The authors concluded the findings supported the dialect shifting theory over the mismatch hypothesis: if there were linguistic interference from AAE dialect, the heaviest dialect speakers would exhibit the greatest difficulty, but in fact the opposite was true. They reasoned that low dialect speakers could easily imitate MAE because, on a continuum, their speech was closer to

that model, facilitating a shift. Meanwhile, heavy AAE dialect use on the other end of the continuum indicated greater linguistic flexibility. The logic is as follows: AAE has been established as a phonologically and morphologically complex language variant of MAE, and MAE on its own is also phonologically and morphologically complex. The ability to switch between two complex systems, each with many of its own rules and its own lexicon, would require a highly sophisticated and metalinguistically mature language system. In fact, Craig and Washington (2004b) have written about the advanced language skills NMAE speakers must possess in order to navigate easily between their own dialect and MAE. Other research by Craig and Washington (2004a) strengthens the dialect shifting hypothesis. In their study, children in preschool through fifth grade produced decreasing amounts of dialect as they advanced in school, compared to younger preschool and kindergarten age students who did not have as much exposure to MAE and were still moderate to heavy dialect speakers. Specifically, preschool and kindergarten age children exhibited greater use of dialect features in their speech (consistent with Connor & Craig, 2006), while older children used less, and this density decreased significantly beginning in first grade. In addition, higher dialect use was associated with less dialect shifting ability, as well as poorer performance on reading measures, in contrast to the findings of Connor and Craig (2006). Thus, Craig and Washington (2004a) observed a negative linear relationship between AAE dialect use and academic performance. Further, they concluded that dialect shifting, which is characterized by a “sharp decline that occurs in a relatively short timeframe,” accurately describes the change in language use of NMAE-speaking children exposed to MAE. This behavior was demonstrated in the context of conversational, or discourse-type, tasks like oral reading, rather than structured academic activities like question-and-answer tests. The present study aims to elucidate the context in which dialect shifting may occur by examining structured language and literacy contexts. In contrast to Terry's (2006) study on spelling, where the mismatch hypothesis seemed to explain the findings, Terry et al. (in press) corroborated the dialect shifting ability of first graders as well as the nonlinear relationship between dialect and vocabulary, phonological awareness (PA), and word recognition. First graders in their sample demonstrated a negative association between higher amounts of NMAE dialect use and both vocabulary and phonological awareness. There was a significant effect of SES at the school level, however, which moderated the strength of this negative correlation. Nonetheless, this relationship seems to support the mismatch hypothesis. To complicate the picture, however, a U-shaped association was found between word reading ability and NMAE dialect use, which supports the linguistic flexibility or dialect shifting theory. Terry et al. explain, however, that this simple dichotomous explanation is not

sufficient. A negative linear relationship such as that found with the vocabulary and PA skills could be explained by a lack of dialect shifting or by an unstable and emerging skill in these young children. In sum, these findings add to the complex and inconsistent picture of dialect’s connection to academic success, indicating the need for further evidence to clarify our understanding and theoretical knowledge. It is clear from the small number of studies conducted to date that NMAE dialect and the functional use of dialect shifting in learning contexts are not well understood, and many questions remain. Namely, is dialect use consistently associated with poor language or literacy performance? Do all students exposed to MAE code shift; that is, is this phenomenon always a spontaneous result of exposure to MAE? Can instruction influence the ability to migrate from one dialect to another? Can certain types of instructional regimes be associated with stronger outcomes on academic measures for students who speak NMAE? Furthermore, the current evidence has been obtained primarily from samples of African American children who speak AAE. Only Terry et al. (in press) have looked at samples of White children who use Southern English (SoAE), a cousin of AAE (Charity, 2008), and their ability to acquire MAE forms in their speech and writing. An addition to the current body of research would be answering some of these questions with White speakers of NMAE/SoAE as well as African American students. Replication is necessary in order to verify the findings of these correlational studies as well as to add to the literature on this evolving theory. Individualizing Student Instruction. The very fact that differences exist in children’s academic performance suggests that the one-size-fits-all approach of many schools is not the most effective way to teach children from varying cultural, SES, and ethnic backgrounds. In fact, research supports the view that children, as complex individuals existing in complex systems of schools and communities, do not fit a single model (Labov, 1995; National Reading Panel, 2000; Strickland, 2001). These researchers have recognized for a long time that African American children in particular may require a more tailored approach. Labov (1995) advocated addressing students’ diverse instructional needs in an integrated classroom by “starting where the child is” (p. 53). In other words, teachers should understand the skills the child brings to the learning environment and craft instruction around that foundation. He noted in particular that “most reading programs appear to be based on the assumption that all students have the same underlying forms for words and the same grammar” (p. 57). The clear implication is that they do not. African American children who speak NMAE bring with them a unique set of values, experiences, and linguistic skills that will be present with them in the

classroom when learning begins. According to Labov (1995), a reversal of reading failure is only possible if the curriculum is tailored to meet the individual needs of all children. To this end, a relatively new area of research is investigating whether differences between children disappear when each child is given the appropriate amount and type of instruction for his or her individual needs. Connor and colleagues at the Florida Center for Reading Research have been examining the effects of individualizing student instruction (henceforth, ISI) by studying the interplay of child characteristics, such as level of vocabulary and language knowledge prior to the start of the school year, with instructional combinations to see if these child-by-instruction interactions can explain differences in performance between groups of children. Initial findings suggest that these interactions do in fact occur and that they robustly predict how children will perform on outcome measures of language and literacy. In other words, this line of research shows that effective and appropriate classroom instruction can influence students’ achievement (Connor, Morrison, & Underwood, 2007; Connor, Jakobsons, Crowe & Granger-Meadows, 2009). Foorman, Francis, Fletcher, Schatschneider, and Mehta (1998) and Juel and Minden-Cupp (2000) were among the first to record these child-by-instruction interactions in recent research. In their study with first and second grade children from low performing schools, Foorman and colleagues found that children who began the school year with low levels of phonological processing ability made greater gains on word reading measures with direct instruction in letter-sound correspondences than did children who received indirect instruction in these skills. Similarly, Juel and Minden-Cupp (2000) found that children in different leveled reading groups performed better by the end of the year depending not only on the skill level with which they began the year, but also on how well the instruction met the needs of that initial skill level and allowed for greater growth. For example, children in the “low reading group” who had limited knowledge of the alphabet and could not read any words at the start of the year, and who were placed in a classroom that primarily used indirect phonics instruction embedded within meaningful discussions of text reading, were relatively poor readers at the end of the year. By contrast, children who began the year with stronger decoding skills and were placed in the “high” reading group, where instructional time focused on reading in context and passage comprehension and increased steadily with more independent practice in vocabulary and text reading, made exceptional gains. The researchers concluded that children with weaker PA skills require more direct instruction in phonics and letter-word level reading, while students

who have mastered these skills and are reading independently require more instruction in vocabulary and passage comprehension, or meaning making with text. More recently, Connor et al. (2009) and Connor, Morrison, Fishman, Schatschneider, and Underwood (2007) have observed these child-by-instruction interactions with first grade children and their vocabulary and decoding skills. Additionally, there is correlational evidence for second through third grade children and reading comprehension tasks. Specifically, Connor et al. (2009) found that children with initially low decoding scores made greater growth in decoding given more instruction in teacher/child-managed code-focused (TCM-CF) tasks aimed at improving their phonological skills, such as explicitly identifying letter-sound correspondences, reading sight words, and blending and segmenting words into individual sounds and syllables. By contrast, students with proficiency in this area made no gains with more of this type of instruction. Instead, they performed better when challenged on their own with child-managed meaning-focused (CM-MF) instruction, or explicit instruction in vocabulary and in comprehension of and discussion about text they had read. Overall, research on the interaction between children’s language and literacy abilities and concurrent instruction strongly supports the continued investigation of other factors that may interact with instruction and affect children’s outcomes favorably. Dialect and instruction is one such area. Link between ISI and dialect shifting. The research on individualizing student instruction that Connor and colleagues (2007, 2009) are conducting may be relevant to the dialect literature in several important ways. Historically, Labov (1995), and now others (Fogel & Ehri, 2006), advocated specific types of instruction for African American children speaking dialect, because they saw a need for these students to experience culturally aware education that takes their dialect traits into consideration. Specifically, Labov encouraged the use of graded reading books and cassette tapes in which the narrator presents the text in NMAE. This curriculum was meant to bridge the differences between NMAE and MAE (hence the program name BRIDGE) for inner city children encountering difficulty deciphering text written in an unfamiliar code. The general aim of the program was to “build on the resources that the child brings to school in order to achieve a mastery of the literacy skills needed in the general society” (p. 53). This is essentially a definition of individualizing student instruction from the current literature (Connor et al., 2009). This notion of going beyond the general curriculum to meet the needs of all children is reinforced by Fogel and Ehri (2006), who reported that “schools do not appear to be offering effective language or literacy instruction to NMAE-speaking students” (p. 466). Citing the negative perceptions

by teachers, and the over-identification and over-representation of NMAE-speaking (specifically AAE-speaking) children on the caseloads of special educators and speech-language pathologists, researchers see a need to bring individualized instruction to classically underserved populations (Oller et al., 2000; Rickford, 2004). To date, no study has examined the effect of individualized instruction on the reading and language skills of children who speak NMAE. Furthermore, individualizing student instruction may inform our understanding of the mechanisms involved in dialect shifting, which in turn could further inform the role of dialect in achievement. Perhaps tailoring instruction to the specific needs of students may incidentally result in more dialect shifting by children who are speakers of NMAE. Terry et al. (in press) have recently investigated the role of dialect within the social context of school, but have not directly investigated the interaction between instruction and dialect. These authors suggest that future research focus its efforts on the interplay between dialect and instruction. Connor and Craig (2006) have also cited the need for studies that examine dialect and instruction together to determine whether a relation exists. Lastly, first grade is a critical year for struggling students to make progress in reading before falling dangerously behind (Connor, Morrison & Underwood, 2007). First grade is also a critical year for dialect shifting (Craig & Washington, 2004a). Thus, combining both of these questions in one study addresses a wide gap in this body of literature. In addition to the need to understand and resolve the achievement gap between African American students and White students, there is a general trend of African American children being underrepresented in empirical research on literacy development (Flowers, 2007; Lindo, 2006). In particular, Flowers (2007) recently called for papers addressing the reading skills of African American students as well as teacher perceptions of African American students, including understanding their culture; implicit within that is the need to understand and be sensitive to NMAE, and specifically AAE. Purpose of the study There are several purposes of the proposed study. First, I aim to further expand our understanding of NMAE through a quantitative analysis of dialect features elicited from the DELV-S. Second, I intend to examine whether or not children’s use of dialect changes from fall to spring. Third, by examining the relations between NMAE and literacy skill growth, I hope to improve our understanding of the potential influence of dialect on school success and add to the literature on dialect shifting in order to inform evolving theories on the influence of dialect on literacy achievement.


Fourth, my study is the first to examine dialect in the context of ISI and could potentially lay the groundwork for identifying specific instructional needs of students who speak NMAE. Justification for outcome measures. The vital link between oral language and literacy acquisition is well established (Catts, Fey, Zhang & Tomblin, 1999; Snow et al., 2007). Yet dialect, as a component of oral language, remains poorly understood and underrepresented in the large body of educational research (Craig & Washington, 2004a; Flowers, 2007; Lindo, 2006). Past endeavors have predominantly examined children's vocabulary and phonological skills. A dearth of research exists on the general language skills, as well as the overall reading ability (comprising decoding and reading comprehension), of children who are NMAE speakers. Dialect, language, and reading are the main areas of interest in this investigation. Dialect has been clearly established as an area of continuing need for further analysis in educational research. Expanding our knowledge of dialect features, the characteristics of children who speak dialect, and how these children perform academically all need further investigation. Previous research has focused on SES and cognitive-linguistic skills as they relate to dialect. This study adds another dimension: development. That is, how does dialect use relate to first grade children’s language and reading development?
Research Questions
1. What is the nature and variability of NMAE (phonological, morphological, and syntactic features) observed in first grade children?
   a. What is the overall variability in NMAE use?
   b. What is the variability among NMAE types (e.g., phonological, morphosyntactic) and features assessed on the DELV-S?
   c. Do the predominant types and features vary by school context and by beginning versus end of the school year?
2. Does the amount and type of NMAE use change from fall to spring of the first grade year?
   a. What is the overall change from fall to spring?
   b. Do changes in NMAE use vary by school context?
3. To what extent are improvements in language and literacy measures demonstrated among first graders?
   a. How do these gains relate to dialect use (MAE versus NMAE)?
   b. Are specific features measured by the DELV-S associated with stronger or weaker language and literacy skill gains?


4. What is the effect of Individualizing Student Instruction (ISI) on children’s reading skills for children who are NMAE speakers compared to MAE speakers?
   a. Does the amount of NMAE use moderate the treatment effects associated with ISI?


CHAPTER 2

METHODS

Research design This is a secondary analysis using a descriptive correlational design. The children (n = 694; mean age 6.0 years) in this study were participating in a longitudinal investigation that was in its second year when the scores for this dataset were collected. The purpose of the larger randomized control trial (RCT) was to examine the effect of individualizing student language arts instruction on reading outcomes. The intervention for the larger RCT was the use and application of software that recommended appropriate amounts and types of instruction for literacy skills development (see Connor et al., 2009, for a complete description). Schools and their teachers were matched and randomly assigned to a treatment or control condition for Study 1 (2005-2006) and Study 2 (2006-2007). The researchers matched schools into pairs based on the number of children qualifying for free and reduced price lunch (FARL), third grade reading scores from the Florida Comprehensive Achievement Tests (FCAT), and Reading First status. Because the RCT used a waitlist control, teachers and students were in one of three conditions across the two studies: control (n = 4 schools), ISI second year with monthly professional development (PD) (n = 6 schools, including the original pilot school), and ISI treatment first year with bi-weekly PD (n = 8 schools). There was also a pilot school (2003-2008) where teachers helped develop and test the intervention; these teachers were considered part of the ISI maintenance group. Teachers participating in the treatment group were trained in the use of the algorithm-based software program called Assessment to Intervention (A2i) and taught how to conceptualize their language arts instruction along three dimensions that form the basis of the instructional recommendations from the A2i output: management, grouping, and instructional content. The management dimension captures whether the child (“child-managed”), the teacher (“teacher-managed”), or a combination of both (“teacher-child managed”) is directing attention to the learning activity. The grouping dimension provides recommendations on the size of the learning group (whole class, large/small group, peer-assisted, or individual). The instructional content dimension provides recommendations on how much and what type of code-focused or meaning-focused instruction should be given to each child, based on the child's individual needs as determined by the vocabulary and letter-word reading scores used in the A2i software program. In the context of this study, code-focused refers to any

instruction that elucidates the sounds, letters, or rules of the code as they are applied to reading and writing tasks. For example, sight word reading, explicit instruction in sound-letter correspondences, or reading words for accuracy would all be considered code-focused instruction. Meaning-focused refers to any instruction that focuses the child’s attention on understanding or comprehending the larger message of the written code. Thus, previewing a text with pictures, predicting and inferring what might happen in a story, writing about a story that was read, or working with vocabulary definitions to learn meanings would all be classified as meaning-focused instruction in the context of this study.
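For illustration only, the sketch below shows how a score-driven recommendation of this general kind might be structured in code. The function name, cutoff values, and minute allocations are entirely hypothetical; they are not taken from the A2i software, whose actual algorithms are not described here.

```python
def recommend_instruction(letter_word_w, vocabulary_w,
                          decoding_cutoff=480, vocab_cutoff=480,
                          total_minutes=60):
    """Hypothetical sketch of a score-driven instruction recommendation.

    Children with weaker letter-word (decoding) scores are allotted more
    teacher/child-managed code-focused (TCM-CF) minutes; children with
    stronger scores are allotted more child-managed meaning-focused
    (CM-MF) minutes. All cutoffs and minute values are illustrative only.
    """
    code_minutes = 35 if letter_word_w < decoding_cutoff else 15
    meaning_minutes = total_minutes - code_minutes
    # A weaker vocabulary score might suggest more teacher support
    # (smaller groups) during meaning-focused work.
    grouping = "small group" if vocabulary_w < vocab_cutoff else "individual"
    return {"TCM-CF minutes": code_minutes,
            "CM-MF minutes": meaning_minutes,
            "suggested grouping": grouping}


print(recommend_instruction(letter_word_w=455, vocabulary_w=478))
```

The point of the sketch is simply that the management, grouping, and content dimensions described above can all be driven by the same pair of fall assessment scores.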

After the teachers in the treatment groups learned the A2i software, they were given approximately 44 hours of professional development (35 at their school site and 9 in a workshop setting) on how to plan their instruction according to the recommendations from the software program, implement specified amounts of instruction for each child, employ effective classroom management and organization to support individualized instruction, and use research-based literacy activities that support students’ skill needs as well as child-managed learning. There were no systematic differences among schools or researchers providing the professional development. Data for the present study were available for 694 students, all of whom received the DELV-S in fall of 2006, which was required for participation in the proposed study. Participants District, Schools, and Teachers. The 18 schools were located in an economically and ethnically diverse school district in the southeastern United States. Seven of these schools were active in the federally funded Reading First program aimed at improving reading achievement in high poverty schools. Specifically, the pilot school teachers were in the third year of ISI implementation, teachers in five schools were in the second year of ISI implementation, and teachers in seven schools were in the first year of ISI implementation. Teachers in four schools had yet to implement the intervention. A total of 69 first grade teachers participated in the control (n = 11), ISI maintenance (n = 14), or ISI treatment (n = 44) groups. Teachers were predominantly female (87%) and Caucasian (64%). Table 2 provides descriptive statistics on the school demographics.


Table 2: School-wide demographics (schools: n = 18)
School | % School FARL | % African American | % White | % African American teachers | % White teachers
School A | 63.5 | 65.9 | 19 | 30.5 | 64
School B | 89.2 | 72.3 | 21 | |
School C | 9.21 | 9.0 | 73.0 | 6.5 | 88.5
School D | 30.2 | 16.2 | 78.0 | 5.7 | 88.4
School E | 31.5 | 32.0 | 59.5 | 17.2 | 81
School F | 8.7 | 10.5 | 83.1 | 8.6 | 89.1
School G | 16.4 | 12.1 | 70.5 | 10.9 | 85.9
School H | 64.3 | 79.2 | 14.1 | 18.3 | 79.5
School I | 4.2 | 6.0 | 81.8 | 7.9 | 90.4
School J | 35.5 | 38.3 | 41.0 | 8.0 | 88
School K | 86.3 | 85.0 | 12.6 | 48.7 | 46.3
School L | 82.3 | 79.3 | 6.1 | 41.8 | 55.8
School M | 85.3 | 87.1 | 5.9 | 45.9 | 51.3
School N | 15.5 | 13.0 | 80.8 | 5.6 | 90.5
School O | 72.3 | 73.9 | 16.3 | 22.2 | 75.5
School P | 89.8 | 72.7 | 15.4 | 33.3 | 59.2
School Q | 45.5 | 38.5 | 54.2 | 19.5 | 70.7
Note. FARL = free and reduced lunch.

Students. First grade children from the larger randomized field study who had informed consent and who received the DELV-S (N = 694; 2006-2007) participated in this study. The larger randomized study did not exclude children who spoke English as a second language or those requiring special services. Approximately 10% of the larger sample included children with exceptional needs (e.g., specific language impairment, learning disability, sensory deficit), and 2% of the larger sample was described as having English as a second language. Ethnically, children in this study were diverse, with the highest percentage described as White (49.7%); African American children comprised 40.5% of the sample and Other children comprised 9.8%. Information on race was individually reported by parent


questionnaire distributed to all participants at the start of the study. Fifty-one percent of the children were boys and forty-nine percent were girls. Descriptive statistics for the children are provided in Table 3.

Table 3: Descriptive statistics for sample (n = 694) by gender and ethnicity
Gender | Total: Frequency (Percent) | African American: Frequency (Percent) | White: Frequency (Percent) | Other: Frequency (Percent)
Boys | 353 (100) | 134 (38.0) | 181 (51.3) | 38 (10.8)
Girls | 341 (100) | 147 (43.1) | 164 (48.1) | 30 (8.8)

Measures Student Assessments Dialect variation. Variation from MAE was assessed using the DELV-S (Seymour et al., 2003). The DELV-S is specifically designed to distinguish individuals who have language differences due to normal developmental growth, or due to the use of a dialect of MAE with language patterns influenced by cultural or regional factors, from those individuals with a clinical language disorder or delay. It is intended to be used with individuals who speak MAE or a variation from MAE (NMAE), such as African American English. Students in this study were administered the entire Screening Test which includes a measure of dialect variation as well as a measure of risk status for language disorder. Children’s speech can be classified as having some, strong or no variation from MAE (Seymour et al., 2003). The identification of language variation status, Part 1 of the Screening Test, asked children to repeat and answer cloze sentences reflecting different aspects of English phonology, morphology and syntax. For example, the examiner says “I see a smooth table” and the student repeats the sentence (smoov/smoo, smooth, other response, no response). Given culturally diverse pictures of people and actions, the examiner points and says, “I see little kites, I see a big kite. The boys have little kites, but the girl…” (have/got, has, other response, no response). The student’s completion of the sentence measures third person singular and

use of the morphological marker plural s. Responses are scored for the presence of MAE, NMAE or other dialects (such as Appalachian, Cajun or Southern English), and any other non-targeted response. Criterion scores obtained on the DELV-S Record Form were used along with percentage of dialect variation (DVAR). This continuous variable (described in detail in the procedures section), which was used in a previous study by Terry et al. (in press), was also used in the present study to measure the variability of dialect, the relationship between dialect and language and literacy performance, and the effect of ISI for children who speak dialect. Language skills. Oral language skills, including semantic knowledge, lexical knowledge, and phonological and morphosyntactic skills, were measured through a variety of assessments. Semantic knowledge was assessed through the Academic Knowledge subtest from the Woodcock-Johnson Tests of Achievement-Third Edition (WJ3; Woodcock & Mather, 2001). Students were given a series of pictures or question prompts and asked to identify or answer questions from three broad subject areas: Science, Social Studies, and Humanities. For example, given the prompt “the people of Brazil live on which continent,” the student must provide a verbal response without visual cues (Woodcock & Mather, 2001). Linguistic processing requires the student to retrieve word knowledge in order to comprehend this sentence and the world knowledge it is attempting to represent. Thus this task is distinct from a simple expressive vocabulary test or measure of lexical knowledge because it goes beyond simply applying word meanings (Hagoort, Hald, Bastiaansen, & Petersson, 2004). WJ3 subtests were chosen for their strong psychometrics. This subtest has a median reliability of .88 in the sample age range. W scores were used in the analysis. Lexical knowledge. Children’s lexical skills were assessed using the Picture Vocabulary subtest of the WJ3, which measures students’ vocabulary knowledge and expressive language skills. Given increasingly less familiar pictures, students name the picture with the most appropriate label. For example, given a picture of a cake with candles, the child is asked “what is this?” This subtest has a median reliability of .77 in the sample age range. Again, W scores were used for all analyses. Phonological and morphosyntactic skills were assessed through the DELV-S Part II. The series of items on this portion of the test is used to determine risk level for potentially developing language difficulties. Children were asked to respond to questions from pictures and provide the answer in a cloze sentence. Stimulus items contained elements of English grammar that are shared among MAE and NMAE speakers as well as those that are contrastive features between the dialects. The shared features are those that identify a language disorder when omitted, and the contrastive features represent

aspects of the language that are found only in dialects of MAE. For example, the examiner points to a picture and says, “Today the sun is shining very brightly. But yesterday this boy had to stay in the house because…” and the response (it was raining, it’s raining, rained/raining, no response) targets the copula verb was. If included in the response, use of the copula represents MAE and no risk for language disorder. If the copula is excluded and the speaker does not use NMAE, this response becomes part of the risk status score, positive for risk of a language disorder. If the speaker uses NMAE and omits the copula, this feature is consistent with the dialect (zero copula), and so the specificity of the test identifies that speaker as a non-disordered NMAE speaker. The decision consistency study for Part II of the DELV-S (Seymour et al., 2003), based on the same sample of children and examiners as the Part I reliability study, revealed that 36% of the children were classified identically on both administrations of the test; 48% differed by one category and 16% differed by two. The study authors note that none of the children were completely misidentified (either for language disorder or at lowest risk for disorder). The correlation coefficient between the two test scores was .80. Reading skills Word Reading Skills. Children’s word reading skills were assessed using the Letter-Word Identification subtest of the WJ-III. Beginning with letter recognition and identification, children are then asked to read word lists of increasingly unfamiliar words. The reliability of this assessment is .91. W scores were used in the analyses. Children’s overall reading ability was assessed using the Passage Comprehension subtest of the WJ-III. This test initially assesses students’ ability to match pictures to words and then to phrases, and finally requires students to read a short passage (two sentences) and provide the correct word to finish a cloze sentence. Cloze sentence tasks are known to require substantial decoding ability as well as comprehension skill. A recent study by Keenan, Betjemann, and Olson (2008) determined that the passage comprehension test of the WJ3 had significant amounts of word decoding and listening comprehension contributing to students’ overall scores (approximately 22%-30% listening comprehension; 30%-61% decoding). The Simple View of Reading (Hoover & Gough, 1990; Roberts & Scott, 2006) defines reading as the product of decoding and linguistic comprehension. Thus, the Passage Comprehension test is a good measure of overall reading ability, combining both skills in one assessment. W scores were used in the analyses.


Demographic assessments Demographic information was obtained through school records, teacher questionnaires, and parent questionnaires. Teachers completed a background questionnaire from which years of experience and education were obtained. Parents completed a questionnaire during the study year from which information on education level and home literacy environment was obtained. Questions about the frequency of using a library card, the amount of literacy material (magazines, books, newspapers) in the home, time spent reading aloud together or alone, and hours of television viewing all contributed to the composite home literacy measure. A copy of this questionnaire is included in Appendix A. Procedures Children were assessed individually during the fall and spring of the 2006-2007 school year, using alternating forms of the WJ-III, in a quiet area of the school. All measures were administered and scored in standardized format by trained research staff. Computing percentage of dialect variation (DVAR). No research prior to the Terry et al. (in press) study had used the DELV-S in quantitative analyses. Thus, a statistically relevant measure of dialect variation had to be devised. Charity et al. (2004) used a “summary score” to capture the percentage of times children produced verbatim responses, produced dialectally different responses, or made a memory error on a sentence imitation task. The DVAR used in this study is roughly comparable to this researcher-created summary score. Each item on Part I of the DELV-S can receive a score of one in either Part A (response varies from MAE) or Part B (response is MAE). Part C responses are non-targeted productions and consequently represent responses that could not be scored (they are not computed as part of the language variation status), and thus are not included in the DVAR formula. In order to create DVAR, I divided the total score for Part A by the sum of Parts A and B (i.e., the total number of items that could be scored). This number was then multiplied by 100 to obtain a percentage. DVAR, therefore, represents the percentage of scored items that were observed to vary from MAE. For example, a child who obtained a score of 12 in Part A and a score of one in Part B would receive a DVAR score of 92%. Thus, an interval variable was created that reflects dialect variation across a range of production (some, heavy, none) while controlling for the number of items that could be scored. This procedure effectively limited the possibility of misidentifying a student’s dialect use that otherwise might occur with the test-created scoring method. For example, a child who produces MAE responses that are not scorable (Part C responses) because they do not meet the target criterion might otherwise be wrongly identified as speaking a dialectal variation from MAE.
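As a concrete illustration of the DVAR computation just described, the short function below (a minimal sketch; the function and argument names are mine, not part of the DELV-S materials) reproduces the worked example of a child with 12 Part A responses and one Part B response.

```python
def dvar(part_a, part_b):
    """Percentage of scorable DELV-S Part I items that vary from MAE.

    part_a: count of responses scored in Part A (varies from MAE).
    part_b: count of responses scored in Part B (MAE).
    Part C (non-targeted) responses are excluded from both counts,
    so they never enter the denominator.
    """
    scorable = part_a + part_b
    if scorable == 0:
        return None  # DVAR is undefined when no items could be scored
    return 100.0 * part_a / scorable


print(round(dvar(12, 1)))  # 92, matching the worked example above
```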


Computing language and literacy scores. W scores, a variation of the Rasch ability scale, were computed from the raw scores of the WJ-III subtests. On this scale, a score of 500 represents the achievement of a typical 10-year-old child (SD = 15).


CHAPTER 3

RESULTS

A combination of visual inspection, descriptive statistics, and modeling techniques was used to analyze the data. Due to the nested structure of the data (children nested in classrooms), hierarchical linear modeling (HLM; Raudenbush, Bryk, Cheong, Congdon & du Toit, 2004) was used to answer questions two through four, where change in dialect usage and growth in language and literacy skills were investigated. Two- and three-level models were constructed with child (DVAR and other test scores; level 1), classroom (teachers; level 2), and school (school SES; level 3) variables. For all models, continuous variables were grand-mean centered and categorical variables such as ethnicity, gender, and DELV-S categories were dummy coded (with 1 being the variable of interest, i.e., African American, boys, MAE). Beginning with the unconditional model, intraclass correlations (ICC) were calculated. The models were built systematically until a best fitting model was achieved. Question 1: What is the nature and variability of NMAE (phonological, morphological and syntactic features) observed in First Grade children? Phonological Feature Analysis Variability by race. The data were analyzed using descriptive statistics and visual inspection of the DELV-S results. Forty percent of the total sample of first grade children used NMAE ((197 + 78)/694; see Table 4). Among this group, children displayed varied amounts of dialect use. African American children were more likely to use NMAE than were White children or children classified as Other (Hispanic, Asian, bi-racial, etc.). Other children were slightly more likely to provide NMAE responses than were White children. Figure 1 displays these findings.

Table 4: Total dialect use by First grade children on the Diagnostic Evaluation of Language Variation Screening Test (DELV-S)

Category of Dialect Variation | Frequency | Percent
Mainstream American English (MAE) | 419 | 60.37
Some variation from MAE | 78 | 11.2
Strong variation from MAE | 197 | 28.38
Total | 694 | 100.0

[Bar graph: percentage of children classified as MAE, some NMAE, and strong NMAE within each ethnic group (African American, White, Other).]

Figure 1. Frequency of NMAE use by ethnicity

In general, the use of phonological features decreased from the fall to the spring in each ethnic group. All three phonological features (substitution of [f] for [th], items 1 and 2; substitution of [v] or [d] for [th], items 3 and 4; final consonant cluster reduction, item 5) were used by all three ethnic groups in the fall, with the greatest use of final consonant cluster reduction in all three groups (e.g., “gift” becomes “gif”). Specifically, in the fall, 58.4% of African American students produced this form, proportionally more than White students (24.3%) in this sample [Chi-square (2) = 75.30, p < .001]. Other students produced final consonant cluster reduction 36.8% of the time. Table 5 provides the percentages of use

for the various phonological features among the different ethnic groups in the sample. Figure 2 provides the use of features by ethnicity.

Table 5: Percentage of use of DELV-S phonological items by ethnicity from fall to spring
Feature | Ethnicity | Fall | Spring
Substitution of [f] for [th] (2 items) | African American (n = 281) | 51.6% | 36.6%
 | White (n = 345) | 15.6% | 9.56%
 | Other (n = 68) | 22.0% | 18.3%
Substitution of [v] or [d] for [th] (2 items) | African American | 52.8% | 39.5%
 | White | 23% | 14.6%
 | Other | 27.9% | 19.8%
Final consonant cluster reduction (1 item) | African American | 58.4% | 52.7%
 | White | 24.3% | 22.6%
 | Other | 36.8% | 22.1%

Analysis of specific items and features on the DELV-S revealed additional patterns. Questions one and two on the DELV-S target the replacement of final [th] with [f], such as “I see her brushing her teef.” African American students used this feature more than three times as often as White students [Chi-square (4) = 1.22, p < .000]. Substitution of [v] or [d] for [th], as in “I see that fish breave under water,” was also found among both African American children and White children, with more children using the [v] replacement than the [d]. Given the opportunity to either omit or replace the final consonant in words ending with [th] and followed by a word beginning with a consonant (e.g., “I see a smooth table”), it was more common for African American speakers with the strongest variation from MAE to replace [th] rather than omit the phoneme altogether. Variability by gender. There were no differences in the DELV-S dialect categories among boys and girls overall. Percentages are provided in Tables 6 and 7 below.


Table 6: Percentage of dialect variation (mainstream American English [MAE], some variation from MAE, and strong variation from MAE) by ethnicity (African American [AA], White [WHT], Other [OTH]) and gender
Ethnicity | Gender | MAE | Some var. from MAE | Strong var. from MAE
AA | Boys | 20.9% | 16.4% | 62.7%
AA | Girls | 28.6% | 12.9% | 58.5%
AA | Total | 24.9% | 14.5% | 60.4%
WHT | Boys | 86.7% | 6.6% | 6.6%
WHT | Girls | 91.5% | 6.7% | 1.8%
WHT | Total | 88.9% | 6.6% | 4.3%
OTH | Boys | 52.6% | 23.7% | 23.7%
OTH | Girls | 73.3% | 16.7% | 10.0%
OTH | Total | 61.7% | 20.5% | 17.6%

Table 7: Dialect use by ethnicity and gender (counts, percentage within ethnicity, and percentage within dialect variation category)
Gender | Ethnicity | Statistic | MAE | Some | Strong | Total
Boys | AA | Count | 28 | 22 | 84 | 134
Boys | AA | % within ethnicity | 20.9% | 16.4% | 62.7% | 100.0%
Boys | AA | % within dialect variation | 13.7% | 51.2% | 80.0% | 38.0%
Boys | WHT | Count | 157 | 12 | 12 | 181
Boys | WHT | % within ethnicity | 86.7% | 6.6% | 6.6% | 100.0%
Boys | WHT | % within dialect variation | 76.6% | 27.9% | 11.4% | 51.3%
Boys | OTH | Count | 20 | 9 | 9 | 38
Boys | OTH | % within ethnicity | 52.6% | 23.7% | 23.7% | 100.0%
Boys | OTH | % within dialect variation | 9.8% | 20.9% | 8.6% | 10.8%
Boys | Total | Count | 205 | 43 | 105 | 353
Boys | Total | % within ethnicity | 58.1% | 12.2% | 29.7% | 100.0%
Girls | AA | Count | 42 | 19 | 86 | 147
Girls | AA | % within ethnicity | 28.6% | 12.9% | 58.5% | 100.0%
Girls | AA | % within dialect variation | 19.6% | 54.3% | 93.5% | 43.1%
Girls | WHT | Count | 150 | 11 | 3 | 164
Girls | WHT | % within ethnicity | 91.5% | 6.7% | 1.8% | 100.0%
Girls | WHT | % within dialect variation | 70.1% | 31.4% | 3.3% | 48.1%
Girls | OTH | Count | 22 | 5 | 3 | 30
Girls | OTH | % within ethnicity | 73.3% | 16.7% | 10.0% | 100.0%
Girls | OTH | % within dialect variation | 10.3% | 14.3% | 3.3% | 8.8%
Girls | Total | Count | 214 | 35 | 92 | 341
Girls | Total | % within ethnicity | 62.8% | 10.3% | 27.0% | 100.0%
Girls | Total | % within dialect variation | 100.0% | 100.0% | 100.0% | 100.0%

Chi-square tests revealed that African American girls were no more likely to speak MAE, some variation from MAE, or strong variation from MAE than were African American boys [Chi-square (2) = 2.44, p = .294]. White girls were no more likely to speak MAE, some variation from MAE, or strong variation from MAE than were White boys [Chi-square (2) = 4.77, p = .092]. Girls classified as Other in this sample were no more likely to speak MAE, some variation from MAE, or strong variation from MAE than were boys classified as Other [Chi-square (2) = 3.34, p = .188]. Notably, when density (DVAR) was considered, there were no differences by gender among African American children, but boys were significantly more likely than girls to have higher DVAR percentages in the White and Other groups [White, t(343) = 2.44, p = .015; Other, t(66) = 2.18, p = .033].
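For readers who wish to reproduce these comparisons, the sketch below (assuming scipy is available; variable names are mine) applies a chi-square test of independence to the African American boys' and girls' counts from Table 7 and recovers, within rounding, the value reported above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# African American boys vs. girls across the three DELV-S categories
# (MAE, some variation, strong variation); counts taken from Table 7.
counts = np.array([[28, 22, 84],    # boys
                   [42, 19, 86]])   # girls
chi2, p, df, expected = chi2_contingency(counts)
print(f"Chi-square({df}) = {chi2:.2f}, p = {p:.3f}")
# Prints approximately Chi-square(2) = 2.45, p = 0.294, matching the
# 2.44 reported in the text (the small difference is rounding).
```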


[Bar graph: percentage use of each DELV-S feature ([f] for [th], [v/d] for [th], final consonant cluster reduction, 3rd person singular/SVA, have/has, don't/do not, was/were) for boys and girls.]

Figure 2. Frequency of NMAE use by gender and ethnicity

Additionally, phonological feature use was higher among boys than girls in the fall across all features. Only the difference in the use of [f] for [th] in words like “teef” and “baf” was significant [Chi-square (2) = 10.030, p = .007]. The highest use for both genders was final consonant cluster reduction: 41% of boys and 37% of girls used this feature [Chi-square (1) = 0.911, p = .340]. The next most common form was substitution of [v] or [d] for [th]; on average, 39% of boys and 32% of girls used this form [Chi-square (2) = 4.919, p = .085]. Table 8 provides the percentages for each feature by gender.

Table 8: Percentage of phonological feature use in the fall, by gender
Feature | Boys (n = 353) | Girls (n = 341)
Substitution of [f] for [th] (two items): teef | 32.5% | 25.8%
Substitution of [f] for [th] (two items): baf | 38.2% | 26.6%
Substitution of [v] or [d] for [th] (two items): smoov/smoo | 40.2% | 31.6%
Substitution of [v] or [d] for [th] (two items): breave/breade | 37.9% | 32.8%
Final consonant cluster reduction (one item): gif | 41% | 37.5%

Morphological Feature Analysis Variability by race. In general, the use of morphological features decreased from the fall to the spring in each ethnic group. The four morphosyntactic features (omission of third person singular, items 8-11; subject-verb agreement with have/got, items 6-7; use of don’t/doesn’t for third person singular, items 12-13; use of copula was/were with plural subject, items 14-15) varied in the number of speakers using them, with proportionally the greatest number of users being African American children, followed by White children and then Other children. The most frequent use was among items 8-11, omission of the third person singular, as in “the girl always sleep.” Specifically, 72% of African American students included this morphological feature in their dialect in the fall. By comparison, only 20% of White students and 41% of children classified as Other used this feature in their speech in the fall. Third person singular don’t/do not, as in “this boy don’t” (in place of does/does not), was used less by all ethnic groups in the fall, as was the copula was/were (e.g., “they was dirty”), which was used by 33% of African American students in the fall. Results for all student ethnic groups are available in Table 9.

Table 9: Percentage of use of DELV-S morphological items by ethnicity from fall to spring
Feature | Ethnicity | Fall | Spring
Omission of 3rd person singular (four items) | African American (n = 281) | 72.1% | 70.4%
 | White (n = 345) | 19.9% | 14.05%
 | Other (n = 68) | 41.1% | 37.1%
Subject-verb agreement with have/got (two items) | African American | 44.6% | 35.9%
 | White | 5.3% | 5.3%
 | Other | 17.6% | 11.7%
Use of don’t/do not for 3rd person singular (two items) | African American | 40.9% | 38%
 | White | 5.1% | 5%
 | Other | 2.9% | 0%
Use of copula was/were (two items) | African American | 33.4% | 20.6%
 | White | 34.7% | 2.8%
 | Other | 8.8% | 6.6%

Variability by gender. Visual inspection of the data by gender revealed differences in the dialect forms used. Of the four morphological forms targeted on the DELV-S, girls used three of them in the fall with greater frequency than boys, but this finding was not statistically significant [Chi-square (10) = 12.464, p = .255]. For both genders, the greatest use was omission of third person singular, as in “the boy always ride.” There were no statistically significant differences between boys and girls on the use of this feature either. Interestingly, boys and girls were nearly equal in the frequency with which they used don’t/do not with third person singular. Percentages for all morphological features by gender are found in Table 10.

Table 10: Percentage of morphological feature use in the fall, by gender
Feature | Boys (n = 353) | Girls (n = 341)
3rd person singular with plural subject (4 items) | 41.9% | 44.4%
Subject-verb agreement with have/got (2 items) | 24% | 20.8%
Use of don’t/do not for 3rd person singular (2 items) | 18.5% | 20.2%
Use of copula was/were (2 items) | 15.1% | 17.1%


[Bar graph: percentage use of each DELV-S feature ([f] for [th], [v/d] for [th], final consonant cluster reduction, 3rd person singular/SVA, have/has, don't/do not, was/were) by ethnicity (African American, White, Other).]

Figure 3. Use of features by ethnicity

Question 2: Does the amount and type of NMAE (i.e., DVAR) use change from fall to spring of the first grade year? What is the initial DVAR status and overall change from fall to spring? Do initial status and changes in NMAE use vary by school context and by students' ethnic group? HLM models were built beginning with the unconditional model, from which the intraclass correlation (ICC) was derived. The ICC reflects the proportion of variance in the outcome (DVAR) that lies between groups (i.e., classrooms); it was computed to be .09, indicating that 9% of the variance in children’s use of dialect fell between classrooms. The three-level unconditional model included DVAR as the outcome variable and month of evaluation (centered at zero, with zero being August) as the means by which change over time was measured. The school-level variable of SES was measured by the percentage of students receiving free and reduced lunch (FARL). Table 11 provides the descriptive statistics for the unconditional model from HLM. The three-level model equation (see equation 1) is as follows: Level 1 Model (1)

\[ \mathrm{DVAR}_{ijk} = \pi_{0jk} + \pi_{1jk}(\text{month of evaluation}_{ijk}) + e_{ijk} \]

Level 2

\[ \pi_{0jk} = \beta_{00k} + \beta_{01k}(\text{African American}_{jk}) + \beta_{02k}(\text{White}_{jk}) + r_{0jk} \]
\[ \pi_{1jk} = \beta_{10k} + \beta_{11k}(\text{African American}_{jk}) + \beta_{12k}(\text{White}_{jk}) \]

Level 3

\[ \beta_{00k} = \gamma_{000} + \gamma_{001}(\text{School SES}_{k} - \overline{\text{School SES}}) + u_{00k} \]
\[ \beta_{01k} = \gamma_{010} \]
\[ \beta_{02k} = \gamma_{020} \]
\[ \beta_{10k} = \gamma_{100} + \gamma_{101}(\text{School SES}_{k} - \overline{\text{School SES}}) + u_{10k} \]
\[ \beta_{11k} = \gamma_{110} \]
\[ \beta_{12k} = \gamma_{120} \]
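For reference, the ICC reported above follows the standard definition from the unconditional model: the between-classroom intercept variance divided by the total variance. Written in the conventional notation (this is the general formula, not study-specific values):

\[ \mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}} \]

where \( \tau_{00} \) is the between-classroom intercept variance and \( \sigma^{2} \) is the within-classroom (residual) variance.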

Table 11: HLM descriptive statistics for modeling change in DVAR from fall to spring
Variable Name | Mean | Standard Deviation
Level 1 (repeated measures)
DVAR | 31.89 | 31.24
Month of evaluation | 5.30 | 3.47
Level 2 (child)
Child ethnicity: African American (proportion) | 0.41 | 0.49
Child ethnicity: White (proportion) | 0.50 | 0.50
Fall Vocabulary W score | 480.91 | 10.48
Level 3 (classroom)
School SES | 49.61 | 28.79
Treatment | 0.84 | 0.37
Child ethnicity: African American (proportion) | 0.41 | 0.49
Child ethnicity: White (proportion) | 0.50 | 0.50
Note. DVAR = dialect variation; SES = socioeconomic status.

Results of the three-level HLM (see Table 12) revealed a significant negative slope for DVAR (γ100 = -0.857), indicating that, on average, children’s dialect variation decreased from fall to spring in first grade. Results also showed a fitted mean fall DVAR for African American students of 60% (37.082 + 23.163, from the table below), which indicates that, on average, African American children were starting the school year 23 points higher on their dialect usage in the fall than Other students and 35 points higher than White students. Furthermore, White and Other students were decreasing their use of NMAE by almost a point (-0.857) each month. Figure 4 below illustrates the decrease in dialect use from fall to spring for the entire sample of first graders.
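To make these fitted values concrete, the sketch below recombines the fixed-effect estimates reported in Table 12 into predicted DVAR trajectories at the grand-mean school SES. This is an illustrative reconstruction of my own that ignores the random effects; the variable names are mine.

```python
# Fixed-effect estimates from Table 12 (three-level model of DVAR).
g000, g010, g020 = 37.082, 23.163, -11.758   # intercept terms
g100, g110, g120 = -0.857, -1.024, 0.357     # monthly slope terms

def fitted_dvar(month, african_american=0, white=0):
    """Predicted DVAR at the grand-mean school SES (SES terms drop out)."""
    intercept = g000 + g010 * african_american + g020 * white
    slope = g100 + g110 * african_american + g120 * white
    return intercept + slope * month

# Fall (month 0 = August) versus late spring (month 9) by group.
for label, aa, wht in [("African American", 1, 0),
                       ("White", 0, 1),
                       ("Other", 0, 0)]:
    print(label, round(fitted_dvar(0, aa, wht), 1), "->",
          round(fitted_dvar(9, aa, wht), 1))
```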


[Line graph: mean DVAR (%) for the sample from August through May.]

Figure 4. Change in dialect use from fall to spring

To examine whether children’s initial DVAR scores and rate of change in DVAR varied by school context (specifically SES) and race, I added child-level ethnicity and school SES (FARL) into the model at the level of the child and at the level of the school, respectively. The data reveal that fall status and changes in NMAE use vary by school context. As the number of children qualifying for FARL increases at a given school, the fall DVAR score also increases, by .372 points for each additional percentage point of school SES (see Table 12 and Figure 5). In Figure 5, school SES was modeled at the 25th and 75th percentiles for the sample (85% FARL and 15% FARL, respectively). The rate of change in DVAR scores was not significantly affected by school SES (p = .298). However, there were differences in the rate of change between ethnicities. Notably, White children who were low SES had higher DVAR scores than African American children who were high SES. In particular, African American children’s dialect decreased at a faster rate, over a point a month (-1.024), compared to White students. Once school SES was entered into the model, the rate of change for White students was no longer significantly different from that of African American students. The final model and coefficients are presented in Table 12. Figure 5 shows the rate of change by school context and ethnicity.


Table 12: Three-level model of change in DVAR showing estimation of fixed effects (with robust standard errors)

Fixed Effects | Coefficient | Standard Error | Approx. df | p-value
Intercept, γ000 | 37.082 | 3.599 | 65 | 0.000
School SES, γ001 | 0.372 | 0.041 | 65 | 0.000
African American, γ010 | 23.163 | 4.391 | 688 | 0.000
White, γ020 | -11.758 | 3.959 | 688 | 0.004
Month of evaluation, γ100 | -0.857 | 0.355 | 65 | 0.019
School SES, γ101 | 0.005 | 0.005 | 65 | 0.298
African American, γ110 | -1.024 | 0.441 | 1367 | 0.020
White, γ120 | 0.357 | 0.396 | 1367 | 0.367
Note. df = degrees of freedom; SES = socioeconomic status.

[Line graph: fitted DVAR by month for African American and White/Other children in low SES and high SES schools (25th and 75th percentiles of school FARL).]

Figure 5. Rate of change by school context and ethnicity


Question 3: To what extent are improvements in language and literacy measures demonstrated among first graders? How do these gains relate to dialect use (MAE versus NMAE)? Are specific features measured by the DELV-S associated with stronger or weaker language and literacy skill gains? Overall, children in first grade classrooms improved from fall to spring on all outcome measures. NMAE dialect use was variably related to the language outcomes, ranging from no relation at all to a significant negative relationship. On both literacy measures, NMAE use was negatively associated with children’s scores. Consistent with the association between dialect and achievement, children’s use of both phonological and morphological forms on the DELV-S was also negatively associated with gains in language and literacy outcomes. However, due to the interrelatedness of the two linguistic forms, only morphological features remained significantly related to outcomes in the final models. In particular, children’s scores on the Picture Vocabulary, Letter-Word Identification, and Passage Comprehension subtests decreased with the use of morphological features. Only the general knowledge subtest (WJ3-AK) outcomes were unrelated to the use of morphological forms. Tables 13 through 21 provide final models and coefficients for these predictors and outcomes, and details for each model are provided below. Two-level HLM models were created for each language and literacy outcome to answer questions about gains in language and literacy skills from fall to spring. Beginning with the unconditional model, which used the spring language and reading scores for each outcome variable, an ICC was computed to determine the variance accounted for by the level-2 classroom variable. A list of the ICCs for each language or literacy outcome is provided in Table 13, along with the means and standard deviations from the descriptive statistics for each language and literacy unconditional model.

Table 13: Intraclass correlations (ICC) and descriptive statistics for language and literacy outcomes

Outcome | ICC | Mean (Fall) | Mean (Spring) | SD (Fall) | SD (Spring)
Language outcomes
Fall AK score | .09 | 470.21 | 474.89 | 22.79 | 21.40
Fall PV score | .151 | 479.71 | 485.36 | 10.77 | 11.00
DELV-S Part II Total Risk Score | .062 | 4.88 | 3.45 | 3.62 | 3.22
Literacy outcomes
Fall LW score | .140 | 411.93 | 459.60 | 31.70 | 26.82
Fall PC score | .152 | 443.58 | 469.47 | 24.84 | 15.87
Note. ICC = intraclass correlation; AK = academic knowledge; PV = picture vocabulary; DELV-S = Diagnostic Evaluation of Language Variation Screening Test; LW = letter-word identification; PC = passage comprehension.

The models for each outcome were built systematically, beginning with school SES entered at level 2, followed by the fall score entered at level 1, and finally the DVAR score entered at level 1. SES was chosen as the first predictor because it was assumed to explain most of the between-classroom variance in the model.

Language Outcomes. Semantic knowledge was captured through the WJ3 Academic Knowledge (AK) subtest. HLM results revealed that first grade children's spring word and sentence knowledge in the context of various subjects increased from the fall and was positively predicted by their fall knowledge of academic subjects (the fixed effect of fall AK was .865; see Equation 1 and Table 14). School SES exerted a negative effect on spring AK scores, though this effect was minimal (.04 with a p value of .035). As the number of children who qualified for free and reduced price lunch increased at a given school, the children's semantic knowledge scores decreased overall. Table 14 provides the final model coefficients. Use of NMAE did not significantly impact spring gains in AK, nor was the specific use of phonological or morphological features associated with the gains children made in AK. The two-level model equation (see Equation 1) is as follows:

Level-1 Model (1)
Spring AK score = β0 + β1*(Fall AK score) + β2*(Fall DVAR) + R

Level-2 Model
β0 = γ00 + γ01*(School SES) + U0
β1 = γ10
β2 = γ20
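As an illustrative counterpart to Equation 1, the sketch below expresses the same residualized-change structure (spring AK regressed on fall AK, fall DVAR, and school SES, with a random classroom intercept) as a linear mixed model in Python. The column names and the grand-mean centering are assumptions, and HLM's robust standard errors are not reproduced, so the estimates would not be expected to match Table 14 exactly.

# Sketch of the two-level model in Equation 1, fitted as a linear mixed model.
# Hypothetical columns: 'spring_ak', 'fall_ak', 'fall_dvar' (child level),
# 'school_ses' (percent FARL, level 2), and 'classroom' (grouping variable).
import pandas as pd
import statsmodels.formula.api as smf

def fit_ak_model(df: pd.DataFrame):
    d = df.copy()
    # Grand-mean centering of the predictors is assumed here; the study's actual
    # centering choices may differ.
    for col in ["fall_ak", "fall_dvar", "school_ses"]:
        d[col] = d[col] - d[col].mean()
    model = smf.mixedlm(
        "spring_ak ~ fall_ak + fall_dvar + school_ses",  # fixed effects (gamma terms)
        data=d,
        groups=d["classroom"],                           # random classroom intercept (U0)
    )
    return model.fit(reml=True)

# fit = fit_ak_model(df); print(fit.summary())  # fixed effects analogous to Table 14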


Table 14: Two level model of AK showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring Academic Knowledge (AK) W score.

Fixed Effects             Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00               475.246          0.474           65        0.000
School SES, γ01               -0.039          0.018           65        0.035
Fall AK score, γ10             0.865          0.060          541        0.000
Fall DVAR, γ20                -0.009          0.023          541        0.700

Random Effects            Variance   Chi-square    df   p-value
U0 (Classroom level)         2.334       80.782     65     0.090
R (Child level)             77.399
Note. df = degrees of freedom.

Lexical knowledge was measured with the WJ3 Picture Vocabulary (PV) subtest, which is an assessment of both expressive language skills and vocabulary knowledge. HLM results revealed that children's gains on this subtest were positively predicted by their fall scores on the same subtest. Fall dialect variation (DVAR) had a negative effect on the spring outcome gains (i.e., residualized change) for PV: for every point increase above the mean DVAR, children's lexical scores decreased .052 points per month. Furthermore, in contrast to the other language measures, school SES did not significantly affect children's performance on PV (the fixed effect was -.021 with a p value of .075), so it was trimmed from the model. Table 15 provides the final model with coefficients. The two-level model equation (see Equation 2) is as follows:

Level-1 Model (2)
Spring PV score = β0 + β1*(Fall PV score) + β2*(Fall DVAR) + R

Level-2 Model
β0 = γ00 + U0
β1 = γ10
β2 = γ20


Table 15: Two level model of PV showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring Picture Vocabulary (PV) W score.

Fixed Effects             Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00               485.425          0.273           66        0.000
Fall PV score, γ10             0.716          0.029          688        0.000
Fall DVAR score, γ20          -0.061          0.009          688        0.000

Random Effects            Variance   Chi-square    df   p-value
U0 (Classroom level)         0.814       74.636     66     0.218
R (Child level)             43.317
Note. df = degrees of freedom.

To examine whether the specific use of either phonological or morphological features from the children's dialects impacted their spring PV scores, each of these predictors was added to the model. Individually, each set of features negatively affected the outcome; however, phonological features were no longer significantly related to the spring PV scores when included in the model with morphological features (likely due to the correlation between the linguistic forms). Thus, to obtain the most parsimonious model, phonological features were removed. In addition, school SES was added back into the model at level 2 to see if it might account for some variance. The final model shows that the use of morphological features in first grade children's dialect significantly and negatively influenced their spring performance on the PV subtest, with a decrease in spring scores of .43 points per month. Furthermore, school SES was a significant predictor of spring PV in this model, with a negative relation to the outcome: for every point increase above the mean in the school-wide percentage of children who qualified for FARL, children's PV scores decreased by .03 points in the spring. Table 16 shows the final model and coefficients for these predictors. The two-level model equation (see Equation 3) is as follows:

Level-1 Model (3)
Spring PV score = β0 + β1*(Fall PV score) + β2*(Fall DELV-S Morphological Items) + R

Level-2 Model
β0 = γ00 + γ01*(School SES) + U0
β1 = γ10
β2 = γ20
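The trimming step described above, entering the phonological and morphological feature counts together and dropping the set that loses significance, can be sketched as follows. The variable names are hypothetical and the .05 threshold is assumed; the sketch only illustrates the logic of the model-building sequence, not the original analysis.

# Sketch of the predictor-trimming logic for the PV model: fit with both DELV-S feature
# counts, then refit without the non-significant set to obtain the parsimonious model.
# Hypothetical columns: 'spring_pv', 'fall_pv', 'delv_phon', 'delv_morph',
# 'school_ses', and 'classroom'.
import statsmodels.formula.api as smf

def fit_pv(df, predictors):
    formula = "spring_pv ~ " + " + ".join(predictors)
    return smf.mixedlm(formula, data=df, groups=df["classroom"]).fit(reml=True)

# full = fit_pv(df, ["fall_pv", "delv_phon", "delv_morph", "school_ses"])
# if full.pvalues["delv_phon"] > 0.05:   # phonological features drop out alongside morphology
#     final = fit_pv(df, ["fall_pv", "delv_morph", "school_ses"])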


Table 16: Spring PV and school SES outcomes with morphological feature use (with robust standard errors). The outcome variable is Spring Picture Vocabulary (PV) W score.

Fixed Effects                  Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00                    485.159          0.280           65      < 0.001
School SES, γ01                    -0.028          0.012           65        0.028
Fall PV score, γ10                  0.731          0.030          688        0.000
Morphological Features, γ20        -0.431          0.103          688        0.000

Random Effects                 Variance   Chi-square    df   p-value
U0 (Classroom level)              0.800       72.968     65     0.233
R (Child level)                  43.596
Note. df = degrees of freedom.

The final language measure analyzed was Part II of the DELV-S, which examines children's risk for language disorders based on a series of questions addressing morphological, syntactic, and general processing abilities. The total raw score for spring Part II was entered as the outcome variable; the ICC was .062. Children's Part II DELV-S scores decreased from the fall to the spring (i.e., risk for language disorder decreased). The final model revealed that the fall Part II DELV-S score did not significantly predict the spring outcome score. However, the degree to which children used dialect in the fall (DVAR) significantly predicted an increased risk for language disorders in the spring. School SES was entered into the model as the first predictor, but once the fall risk score was added, SES no longer significantly predicted the outcome, so it was removed from the model. Furthermore, neither the phonological nor the morphosyntactic items predicted the spring language outcome, so they were also trimmed from the final model. Table 17 provides the coefficients for the final model. The two-level model equation (see Equation 4) is as follows:

Level-1 Model (4)
Spring Part II DELV-S score = β0 + β1*(Fall DVAR) + β2*(Fall Part II DELV-S score) + R

Level-2 Model
β0 = γ00 + U0
β1 = γ10
β2 = γ20


Table 17: Two level model of Risk for Language Disorder (DELV-S Part II scores) showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring DELV-S Part II score.

Fixed Effects                                            Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00                                                3.449          0.090           66        0.000
Fall Language Disorder Risk (DELV-S Part II score), γ10       0.005          0.003          691        0.111
Fall DVAR score, γ20                                          0.543          0.043          691        0.000

Random Effects            Variance   Chi-square    df   p-value
U0 (Classroom level)         0.004       86.760     66     0.044
R (Child level)              6.336
Note. df = degrees of freedom.

Literacy Outcomes. Children's word reading skills were measured with the WJ3 Letter Word Identification (LW) subtest. Children's scores significantly improved from fall to spring. In the final model, school SES did not significantly predict growth in word reading ability, so it was removed from the model. DVAR, however, had a significant negative effect on the spring LW outcome: children's LW scores decreased .11 points per month for every point increase in DVAR score. The two-level model equation (see Equation 5) is as follows:

Level-1 Model (5)
Spring LW score = β0 + β1*(Fall LW) + β2*(Fall DVAR) + R

Level-2 Model
β0 = γ00 + U0
β1 = γ10
β2 = γ20

Table 18: Two level model of LW showing estimation of fixed effects (with robust standard errors). The outcome variable is Spring Letter Word Identification (LW) W score.

Fixed Effects             Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00               459.610          0.836           66        0.000
Fall LW score, γ10             0.620          0.023          688        0.000
Fall DVAR score, γ20          -0.109          0.022          688        0.000

Random Effects            Variance   Chi-square    df   p-value
U0 (Classroom level)        18.942      118.602     66     0.000
R (Child level)            245.412
Note. df = degrees of freedom.

In order to investigate whether the specific use of either phonological or morphological features from the children's dialects impacted their spring LW scores, each of these predictors was added to the model. Individually, each set of features negatively predicted the outcome; however, phonological features were no longer significantly related to the spring LW scores when included in the model with morphological features. Thus, to obtain the most parsimonious model, phonological features were removed. The final model shows that the use of morphological features in first grade children's dialect significantly and negatively influenced their spring performance on the LW subtest, with a decrease in spring scores of a full point per month. Table 19 shows the final model and coefficients for this predictor. The two-level model equation (see Equation 6) is as follows:

Level-1 Model (6)
Spring LW score = β0 + β1*(Fall LW) + β2*(Fall DELV-S Morphological Items) + R

Level-2 Model
β0 = γ00 + U0
β1 = γ10
β2 = γ20

Table 19: Spring LW outcome with morphological feature use (with robust standard errors). The outcome variable is Spring Letter Word Identification (LW) W score.

Fixed Effects                  Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00                    459.532          0.838           66        0.000
Fall LW score, γ10                  0.632          0.023          688        0.000
Morphological Features, γ20        -1.003          0.245          688        0.000

Random Effects                 Variance   Chi-square    df   p-value
U0 (Classroom level)             18.733      117.800     66     0.000
R (Child level)                 247.400
Note. df = degrees of freedom.

The Passage Comprehension (PC) subtest of the WJ3 was used to capture first grade children's overall reading ability. The task employs cloze sentence items in which children supply a missing word based on the text, thus measuring receptive reading ability. School SES did not significantly predict growth in passage comprehension, so it was removed from the model. The final HLM model revealed a fitted spring mean of 470.6 for children beginning the fall with mean reading scores. DVAR had a significant negative impact on PC score gains, with a .11 decrease in overall reading ability by the end of the school year for each point increase in dialect use above the mean. The two-level model equation (see Equation 7) is as follows:

Level-1 Model (7)
Spring PC score = β0 + β1*(Fall PC) + β2*(Fall DVAR) + R

Level-2 Model
β0 = γ00 + U0
β1 = γ10
β2 = γ20

Table 20: Two level model of PC showing estimation of fixed effects (with robust standard errors).

Fixed Effects             Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00               470.606          0.581           66        0.000
Fall PC score, γ10             0.400          0.028          553        0.000

Random Effects            Variance   Chi-square    df   p-value
U0 (Classroom level)         6.214      112.041     66     0.001
R (Child level)            117.942
Note. df = degrees of freedom.

In order to investigate whether the specific use of either phonological or morphological features from the children's dialects impacted their spring PC scores, each of these predictors was added to the model. Again, individually, each set of features negatively influenced the outcome; however, phonological features were no longer significantly related to the spring PC scores when included in the model with morphological features. Thus, phonological features were removed. The final model shows that the use of morphological features in first grade children's dialect significantly and negatively affected their spring performance on the PC subtest, controlling for fall score, with a decrease in spring scores of more than a point for each increased use of morphological features (-1.03). Table 21 shows the final model and coefficients for this predictor. The two-level model equation (see Equation 8) is as follows:

Level-1 Model (8)
Spring PC score = β0 + β1*(Fall PC) + β2*(Fall DELV-S Morphological Items) + R

Level-2 Model
β0 = γ00 + U0
β1 = γ10
β2 = γ20

Table 21: Spring PC outcome with morphological feature use (with robust standard errors).

Fixed Effects                  Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00                    470.497          0.573           66        0.000
Fall PC score, γ10                  0.420          0.028          553        0.000
Morphological Features, γ20        -1.025          0.221          553        0.000

Random Effects                 Variance   Chi-square    df   p-value
U0 (Classroom level)              5.860      107.913     66     0.001
R (Child level)                 120.824
Note. df = degrees of freedom.

Question 4: What is the effect of Individualizing Student Instruction (ISI) on children’s reading skills for children who are NMAE speakers compared to MAE speakers? Does the amount of NMAE use moderate the treatment effects associated with ISI?

In order to examine whether the ISI treatment condition influenced reading skills among the group of children speaking NMAE, treatment was added to the model at level 2 along with school SES. The ICC computed to determine between-classroom variance was .139; that is, approximately 14% of the variability in spring LW scores fell between classrooms. Descriptive statistics for this two-level model are provided in Table 22.

Table 22: Descriptive statistics for two level model of treatment and DVAR.

LEVEL 1 DESCRIPTIVE STATISTICS - Child
Variable Name            N        Mean      SD
WJ3 Fall LW score      1104     411.93    31.70
WJ3 Spring LW score     975     459.60    26.82
Fall DVAR score         693      35.73    32.06

LEVEL 2 DESCRIPTIVE STATISTICS - Classroom
Variable Name            N        Mean      SD
School SES               73      51.77    29.04
Treatment                73       0.84     0.37
Note. N = number; SD = standard deviation.

HLM results showed that the ISI treatment significantly affected children's LW reading scores (p = .035): on average, being in the treatment group predicted spring LW scores that were 3.5 points higher than those of the control group. School SES did not predict spring outcomes in LW reading skills. Children with higher fall LW scores demonstrated higher spring scores; for every point above the mean on the fall LW score, spring scores were predicted to be .62 points higher.


In order to examine whether children's use of dialect interacted with the treatment effect on spring outcomes, DVAR was added to the model at the child level. HLM results showed that the amount of dialect used in the fall (DVAR score) negatively affected children's gains in LW ability: for every point their DVAR scores were above the mean in the fall, their spring LW was predicted to be .16 points lower on average. Finally, the treatment did not significantly interact with the DVAR effect (p = .228); that is, the amount of dialect children used did not alter the treatment's ability to improve LW scores. The final model coefficients for this two-level model are presented in Table 23. The two-level model equation (see Equation 9) is as follows:

Level-1 Model (9)
Spring LW score = β0 + β1*(Fall LW) + β2*(Fall DVAR) + R

Level-2 Model
β0 = γ00 + γ01*(School SES) + γ02*(Treatment) + U0
β1 = γ10
β2 = γ20 + γ21*(Treatment)
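A sketch of the cross-level interaction in Equation 9, in which the classroom-level treatment indicator predicts the intercept and moderates the child-level DVAR slope, is given below. The variable names are hypothetical, and the original models were estimated with HLM software and robust standard errors, so this is only an approximation of the specification.

# Sketch of Equation 9: treatment (level 2) predicts spring LW and moderates the
# child-level DVAR slope (the Treatment X DVAR term). Hypothetical columns:
# 'spring_lw', 'fall_lw', 'fall_dvar', 'school_ses', 'treatment' (0/1), 'classroom'.
import pandas as pd
import statsmodels.formula.api as smf

def fit_treatment_model(df: pd.DataFrame):
    model = smf.mixedlm(
        "spring_lw ~ fall_lw + fall_dvar * treatment + school_ses",  # '*' adds the interaction
        data=df,
        groups=df["classroom"],
    )
    return model.fit(reml=True)

# fit = fit_treatment_model(df)
# fit.params["fall_dvar:treatment"]  # analogous to the Treatment X DVAR row in Table 23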

Table 23: ISI treatment and DVAR interaction (with robust standard errors). The outcome is Spring Letter Word Identification (LW) W score.

Fixed Effects              Coefficient   Standard Error   Approx. df   p-value
Intercept, γ00                456.721          1.509           64        0.000
School SES, γ01                -0.002          0.032           64        0.947
Treatment, γ02                  3.549          1.648           64        0.035
Fall WJ3-LW score, γ10          0.620          0.022          685        0.000
Fall DVAR, γ20                 -0.158          0.039          685        0.000
Treatment X DVAR, γ21           0.059          0.048          685        0.228

Random Effects             Variance   Chi-square    df   p-value
U0 (Classroom level)         16.189      114.617     64     0.000
R (Child level)             244.301


CHAPTER 4

DISCUSSION

The purpose of this study was to investigate the role of non-mainstream American English dialect (NMAE) in children's language and literacy skills in first grade. This study also aimed to examine Part I of the Diagnostic Evaluation of Language Variation Screening Test (DELV-S) at the level of individual items in order to evaluate the utility of this measure as a tool to distinguish MAE from other NMAE variations, including African American English (AAE). Three main findings resulted from this investigation: (1) the DELV-S appears to be adequate in differentiating children who speak MAE from those who speak other varieties of English such as AAE, SoAE, and possibly other vernacular forms that are unidentifiable by the screener; (2) children who speak NMAE in First grade generally use fewer of the phonological and morphosyntactic features of their dialect at the end of First grade than they did in the fall; and (3) NMAE use negatively relates to children's growth in language and literacy skills at the end of First grade. I begin this section with a discussion of the DELV-S's utility and then discuss findings regarding NMAE use based on DELV-S results.

Utility of the DELV-S. Although the DELV-S has been recommended and referenced numerous times in the literature, the effectiveness of this screening measure for dialect identification has never been reported in research by an independent investigator. To the extent that the test meets its intended objectives in design and function, it can be said to be successful and useful as a research tool. The DELV-S has several stated aims. Two main purposes are detailed in the test manual under the heading "Purpose", and several ancillary goals are delineated in the descriptions of the standardization process, the pilot research process, and the item selection process. In particular, the authors declare that a primary intent of the measure is to distinguish children who speak dialects of MAE (NMAE) from children who speak MAE. The present study used the DELV-S with this main goal in mind. Secondly, the authors state the test is designed to accurately distinguish children with typical language development who speak a dialect of MAE from children whose language use and comprehension is disordered or delayed, and who may or may not be dialect speakers. In other words, the test aims to distinguish "difference from disorder," referring to the classic, longstanding problem of culturally diverse populations being misidentified for clinical caseloads. Along with these overriding aims are the underlying objectives of

identifying items that best differentiate children who speak AAE from children who speak MAE (Seymour et al., 2003, pp. 40), as well as evaluating how speakers of other varieties of American English respond to the test items.

With regard to the first objective of the DELV-S, to "distinguish children who are speaking Mainstream American English from those who are using a variation from MAE" (pp. 1), the test appears to be successful at identifying some portion of NMAE dialect use. In fact, based on the results of Part I of the DELV-S, 40% of the children in this sample of First graders were identified as speakers of NMAE dialect, as distinct from the remaining 60% of children speaking MAE. Overall, the general characteristics of the children who were identified as NMAE dialect speakers in this sample matched the broad characteristics of children from previous research studies: NMAE is observed more among African American children than White children, with greater density of features among boys compared to girls within the White and Other categories; and, in particular, children who are raised in poverty demonstrate more NMAE feature use than do children from higher SES backgrounds (Craig & Washington, 2004b; Labov, 1995; Thompson et al., 2004; Wolfram & Schilling-Estes, 2006).

A limitation of the screener is its inability to identify the type of dialect used. Instead, the DELV-S is capable only of identifying the degree to which children use NMAE. In fairness, the identification of dialect type is not one of the explicit purposes of the test. However, the item selection and standardization processes seem to indicate the test was tacitly intended to identify children who speak AAE apart from MAE. In fact, the authors state that one of the standardization objectives is to "identify items that best differentiate children who speak AAE from children who speak MAE…" (pp. 39). Thus, the majority of the standardization sample, from which items were selected, were African American children who spoke AAE and were from predominantly low SES backgrounds (Seymour et al., 2003, pp. 40). The 21 morphosyntactic items and 5 phonological items on the test were selected because they represented linguistic features of AAE (Seymour et al., 2003, pp. 37). In addition, the test items were developed from the language samples of 24 kindergarten-age African American children who were speakers of African American English. It seems, therefore, that the DELV-S is designed to distinguish AAE speakers from MAE speakers even though the authors claim a broader audience for the distinction of dialect variation. Perhaps the reason Seymour et al. stop short of calling the DELV-S a measure of AAE dialect is that the screener is also designed to detect NMAE varieties other than AAE. The third objective of the pilot research was to "evaluate how speakers of other varieties of American English responded to the


[test] items" (pp. 37). Specifically, the screening measure was piloted on a small sample (n = 80) of children who spoke Appalachian, Cajun, Southern, and Hispanic English varieties to determine whether children who speak these dialects would respond to the test items in a similar pattern as children who speak the dialects targeted for the test (i.e., AAE or MAE). Minimal claims can be made about the success of the DELV-S in meeting this goal based on the findings from the present investigation. The sample of children examined in this study likely included speakers of more than one NMAE dialect, including AAE, SoAE, and possibly even SAAE (see Table 1 for a breakdown of features by dialect). NMAE features were used across all three ethnic groups in the study (African American, White, and Other). In the fall, 58% of African American children used the reduction of final consonant clusters (e.g., "gift" becomes "gif"), while 24% of White children produced this form. This phonological feature is known to be prominent in both AAE and SoAE (Wolfram & Schilling-Estes, 2006). Similarly, 60% of children classified as Other used third person singular in place of plural [s] in their speech in the fall. Therefore, it is possible that the DELV-S is capable of detecting other varieties of NMAE. It is not clear whether this lack of specificity in identifying the type of dialect being used is necessarily a shortcoming of the DELV-S. Rather, it may indicate a need for future research to better identify the quantities of feature use characteristic of different dialects beyond AAE. In addition, it is possible that the features of AAE that lead to overidentification of language impairment are features shared with SoAE and other dialects, but, again, more research would be needed. This information, coupled with what the DELV-S accurately reveals about degree of dialect use predominantly within AAE, would allow the test to be maximally informative.

On balance, one of the major advantages of the DELV-S is its efficiency at capturing the degree of dialect being used via the DVAR metric. Compared to the language sampling procedures used in much of the research cited here, the DELV-S is faster and appears to be a reasonable method for capturing the extent of children's dialect without extensive resources of time and transcription. Moreover, the DELV-S can be administered to groups of children or individuals in as little as twenty minutes. Some might argue that language samples represent a truer picture of children's language use due to the tendency of speakers to produce more instances of dialect (specifically AAE) features in a narrative or discourse context. Both Washington, Craig and Kashmaul (1998) and Wolfram and Schilling-Estes (2006) have noted that the frequency and percentage of dialect use observed in a structured context, such as a formal language testing situation (i.e., my sample), may reflect less than what students would use in a more natural context (i.e., a language sample or oral narrative). However, the

results of the present study do not necessarily support this view. Indeed, results in this study suggest that the DELV-S does identify a fairly high percentage of children who use NMAE. This finding is comparable to current research such as Connor and Craig (2006), in which 87% of their sample of 63 African American children used AAE features during an oral narrative task and only 27% did so on a more structured sentence imitation task where "expectations for SAE [standard or MAE] were explicit" (pp. 779). In my sample of 281 African American children, 75% used NMAE in a structured, formal testing situation.

Finally, one common benchmark for psychometric strength is the degree to which the items in a test actually capture what they are intended to capture. In designing the DELV-S, the authors sought to be able to differentiate dialect differences from children with Specific Language Disorder and other clinical language deviations that exhibit morphological deficits as principal characteristics. Thus, they intentionally overrepresented morphosyntactic features that they predicted would elicit NMAE responses from children (Seymour et al., 2003, pp. 30-34). While the identification of children at risk for language disorder is a secondary function of the DELV-S, and the results of this study were able to show children's risk for language disorder as it related to the use of NMAE, it was beyond the scope of this research to evaluate Part II of the DELV-S. Nevertheless, Part I of the DELV-S did capture morphological forms as intended in this sample of First grade children, just as language sample research has demonstrated. Children in this sample produced all the morphological forms that were targeted. Furthermore, they produced the most common morphosyntactic features more often than phonological forms, and the results are consistent with the extant literature on children's NMAE use. For example, children's production of third person singular with a plural subject, also known as subject verb agreement (SVA) in the literature, was the most common morphological form used (72% of African American children and 20% of White children). Washington and Craig (2002) also found "subject verb variation," as it was called in their study, to be "particularly prominent" (pp. 215) in their sample of 28 four- to seven-year-olds (i.e., 86-100% of the language samples exhibited this form). Further, this was consistent with their earlier work, which found SVA to be one of "the most frequently used forms for African American children from low and middle SES backgrounds…" (pp. 215). Connor and Craig (2006) also found SVA to be one of the features used "most frequently" by their sample of 33 preschool-age children. In addition to SVA, final consonant cluster reduction was another feature from the present study using the DELV-S that was also examined by Craig and Washington (2004a), with similar results. Specifically, 58% of African American First grade children in my investigation omitted the final


consonant in "gift" and 24% of White children did so. In Craig and Washington's sample, approximately 55% of First grade children were observed to use final consonant cluster reduction. Considering the principle of feature variability, which states that dialect speakers are not obligated to include all forms in their speech, children had the option to use none of the morphological forms and all of the phonological forms, or vice versa. Based on the previous research and the findings from the present study, the content validity of the DELV-S appears acceptable.

In summary, the DELV-S appears to successfully capture profiles of dialect use similar to those observed by other researchers who have investigated AAE features in children. Furthermore, the test is capable of capturing dialect use in a more efficient manner than the resource-heavy method of language sampling used in the extant literature cited here, while apparently maintaining a high degree of content validity.

Dialect shifting. Children in First grade classrooms decreased the dialect features they used from the beginning of the school year to the end of the school year by nearly a point a month, as measured by DVAR score. This behavior describes an aspect of the phenomenon known as "code switching" or "style shifting," as it has also been called by sociolinguists such as Walt Wolfram and Natalie Schilling-Estes (2006). In their book, American English, these authors define the concept of style shifting as speakers exhibiting a "range of speech styles" (pp. 266), and note that it may occur in three ways. First, there is the style shifting that occurs when speakers decrease or increase the use of specific features in their own dialect, such as what is seen in this study with First grade children. Generally, by the spring, children in this sample who used NMAE were using fewer of the phonological and morphosyntactic forms of their dialect as captured by the DELV-S (and consequently, more MAE features). Second, there can be shifting between dialects or registers, as when a speaker abandons the use of features in his or her own dialect and adopts the forms and lexicon of another dialect. For example, a mother speaking on the phone to an adult friend uses her adult speech, but when she is interrupted by her toddler who wants her attention, she switches momentarily to her "motherese"; similarly, as demonstrated by some of the children in the present sample, a child who speaks NMAE may begin to use the MAE of the classroom through daily exposure to its forms. Third, style shifting can occur between genres or particular contexts of use, such as a sermon, a political speech, or a writer reading a book excerpt. In such cases, the speaker will adopt the particular lexicon, intonation, and phrasing appropriate for that situation.


What is common among the types of style shifting is that situational context dictates the shift in forms and, more importantly, that metalinguistic awareness is essential to performing a style or code switch. Again, Wolfram and Schilling-Estes (2006) explain that the speaker must possess a level of self-awareness, and "to engage in style shifting, speakers must be aware of the various registers and styles around them, as well as the social meanings associated with each style" (pp. 269). In short, style shifting exemplifies an advanced form of communication, pragmatics, which is the use of language to convey a particular purpose or message and to represent oneself as intended through speech.

The results of the present investigation suggest that children are style shifting within their own dialect, as evidenced by the decrease in features from fall to spring. For example, African American children went from using [f] for final [th] 51.6% of the time in the fall to 36.6% of the time in the spring. White children also decreased their use of this same feature, from 15.6% in the fall to roughly 9.56% in the spring. However, it is not clear that actual code switching between dialects is taking place. As mentioned previously, the DELV-S is not equipped to identify the specific type of dialect used within the categories of "some variation from mainstream American English" or "strong variation from mainstream American English," only that certain features of a NMAE dialect are being used to a degree. Furthermore, no measure of linguistic awareness was used in this investigation. Therefore, it cannot be concluded that children are moving between dialects, particularly when many of the same features overlap between different English variations (Green, 2002b; Wolfram & Schilling-Estes, 2006). Claims about children's awareness of dialect features and ability to shift between dialects must be reserved for future research in which metalinguistic or pragmatic measures can capture these skills. Furthermore, Wolfram and Schilling-Estes (2006) point out that it might not even be possible to determine clearly when an individual is code shifting between language variants, because it is possible for a person to claim more than one dialect as their own and, even among linguists, what constitutes a "dialect" is still open for debate. Thus, it is possible that style shifting between dialects is taking place when children decrease their use of dialect features enough to no longer be classified as using NMAE and to now be classified as using MAE. But because children's linguistic awareness and performance were not tracked in this study, and because there is no specific cutoff within the categories of dialect variation determining precise dialect boundaries, there is no real way of knowing when a child has crossed into a different dialect (e.g., AAE to SoAE). Finally, it is not likely that children in First grade are actually performing between-dialect code shifts because, as the third main finding of this study reveals, dialect is exerting a negative influence on language and literacy outcomes.


Influence of NMAE dialect on language and literacy skills. The third major finding of this investigation was the inverse relationship observed between increased use of NMAE dialect and First grade children's growth on language and literacy measures. In other words, the more children used mainstream features and the less they used NMAE, the greater their growth in language and literacy scores. Conversely, children who used more dialect features, and thus fewer MAE features, demonstrated less growth on language and literacy measures (because DVAR is a continuous variable representing dialect on a continuum from heavy use of NMAE at one end to heavy use of MAE at the other). However, it was also clear that a strong association between school context and children's use of NMAE existed and, importantly, this association was greater than the association with children's ethnicity. Moreover, morphological features appeared to negatively impact children's performance above and beyond the use of phonological forms, based on the relationship between the DELV-S features and the language and literacy outcomes in the study. The only subtest unaffected by children's increased use of NMAE dialect was the semantic knowledge test (i.e., WJ Academic Knowledge), which assesses children's general semantic and world knowledge.

The results here conform to previous studies in which inverse relationships were observed between greater use of dialect, in particular AAE, and poorer performance on word reading, comprehension, vocabulary, and phonological processing. Specifically, Charity et al. (2004) and Craig and Washington (2004a) found a significant negative relationship between heavy dialect use and performance on reading achievement tests. In the study by Charity and colleagues, children were administered word reading, nonword decoding, and text comprehension subtests from a standard battery along with a researcher-created sentence imitation task. Craig and Washington relied on state tests administered by the children's schools. Kohler et al. (2007) found a negative relationship between children's nonword spelling abilities and AAE dialect. Across these studies, a broad picture of children's reading ability is provided, and dialect appears to relate negatively to performance regardless of the type of measure used. The findings from the current investigation support previous research and add to the convergence of data indicating that greater use of MAE is associated with increased learning of reading skills and better performance on reading tasks. The important question is, why?

Theoretical implications. Previous research on children's use of NMAE and its effect on language and literacy skills has focused on three main theories to help explain why children who speak dialects of American English (predominantly African American children speaking AAE) perform worse than do their peers who speak


MAE. In particular, the teacher bias theory, the mismatch hypothesis, and the linguistic flexibility theory have been put forth to try to explain the relationship between dialect and children's language and literacy performance. The most recent findings from Charity et al. (2004), Connor and Craig (2006), and Terry et al. (in press) have been mixed in their conclusions about these theoretical implications. Charity and colleagues found possible support for the teacher bias theory among their findings, indicating that differences in children's scores could be attributable to negative attitudes and actions on the part of teachers toward students who speak a vernacular of American English. In the end, though, they acknowledged that they did not measure teachers' attitudes about race and dialect use directly, so firm conclusions could not be drawn. Instead, the strongest support from their findings was for the linguistic mismatch hypothesis, which states that incongruity between children's dialect of upbringing and the dialect of the classroom interferes with literacy acquisition. The children's reading scores correlated strongly and negatively with both phonological and grammatical features of their dialect. Meanwhile, Connor and Craig did not find support for either the teacher bias or the linguistic mismatch hypothesis, due to the non-linear nature of their findings. They point out that for either of these theories to be supported by the findings, there would have to be a negative linear relationship between dialect and achievement. Instead, their research, and newer, as yet unpublished research, seems to support the evolving theory of linguistic flexibility as the most encompassing explanation for the relationship between dialect and academic achievement.

The linguistic flexibility account holds that children's metalinguistic knowledge and pragmatic awareness are critical factors in the development of reading skills (Terry et al., in press). More specifically, this notion maintains that if children are able to consciously think about their own language skills and about language in general (which is the approximate definition of metalinguistic awareness as referenced in Kahmi and Koenig, 1985), such that they recognize that speech is composed of discrete sounds and meaningful units, they will likely be able to attend to the speech of others and the contexts in which speech acts occur (Scarborough et al., 2007). This attention and awareness will allow them the flexibility necessary to consciously switch between the use of features or entire dialects according to the dictates of the speech environment. What is not clear is whether or not children are consciously aware that they are shifting in their use of NMAE and MAE by context. Accordingly, the term linguistic flexibility connotes the notion of plasticity or adaptability that children must possess in order to eventually be able to code switch and use language in a deliberate, pragmatically aware manner, but it does not assume any conscious awareness of the shift. Thus linguistic flexibility relates to code switching in that

one is not possible without the other; they are integrally intertwined. Notably, the theory of linguistic flexibility may be analogous to the constructs defined in the most recent literature as code switching/dialect shifting (Craig & Washington, 2004b; Connor & Craig, 2006), dialect awareness (Charity et al., 2004), and linguistic awareness/flexibility (Scarborough et al., 2007). The latest research proposes the linguistic flexibility theory as the most promising explanation because it accounts for both the linear and the complex non-linear relationships observed between language and literacy skills and children's use of dialect. Connor and Craig (2006) suggest that the preschoolers in their study "appeared to dialect shift between the two contexts and many used SAE features when the expectation for AE was highly explicit" (pp. 781). Similarly, Terry et al. (in press) concluded that the teacher bias theory did not account for the different associations they found between dialect and achievement in their sample. Nor did their results entirely support the linguistic mismatch hypothesis. As some of the outcome variables (vocabulary and phonological awareness) were negatively linearly associated with dialect and others (word reading) had a U-shaped relationship, the authors determined that linguistic flexibility was a better explanation for their findings.

The present study found negative linear relationships between NMAE dialect and the language and literacy gains, as seen in the results for question three. On the surface, this seems to suggest the mismatch hypothesis is the most plausible explanation for these findings. But recalling that children exhibited a decrease in the use of NMAE features from fall to spring, the linguistic flexibility theory may be a stronger explanation for these results. Because linguistic flexibility requires a conscious awareness on the part of the speaker to be able to assess the speaking environment and alter features of speech accordingly, and because the children in this study are likely still developing this ability, this may account for the negative association observed between dialect and performance even as feature use decreased. That is, the children in this sample were using all the phonological and morphosyntactic features of NMAE dialect. Both phonological awareness and morphological awareness (MA) are metalinguistic skills. Early research on children's language development attests that metalinguistic awareness, such as the ability to judge when and which forms to include in one's own speech, does not develop until the age of 4 at the earliest, and as late as age 7 or 8 (Kahmi & Koenig, 1985). Thus it is possible that the First grade children studied here have not yet fully developed the metalinguistic ability to truly recognize the need for code switching in the context of a school environment where MAE is expected as the language of use. It does appear, however, that these skills

are emerging, because some of the children are style shifting by exhibiting a lower density of features by the spring of First grade. Since previous research has shown that density of dialect is negatively correlated with achievement in school, the children who are consciously able to code switch perform better; this study supports those conclusions with the same findings. On the whole, this study adds to the growing literature base that supports children's linguistic flexibility as the most likely explanation for the difficulty encountered by NMAE-speaking children learning to read. Further, this study contributes more evidence that linguistic flexibility may help to explain the difference in performance observed between children who speak dialects of MAE and those who speak the vernacular of the classroom.

Practical implications. The results of this investigation lead to several important implications for educators and researchers. First, teachers and teacher education programs can be made aware of the benefits children acquire from being able to cross back and forth between the dialect of their cultural upbringing and that of the classroom. To practice this code switching, however, children must learn how to pay attention to speech. Wolfram and Schilling-Estes (2006) state it clearly: "the more attention [children] pay to speech, the more formal or standard their speech will become; as they pay less attention to their speech, they become more casual or vernacular" (pp. 271). Therefore, teaching children how to be aware of the features of language and to "tune in" to the environment of a speaking interaction, essentially teaching children metalinguistic skills, is a necessary first step toward children learning how, when, and with whom to code switch. Phonics and phonological awareness are already the focus of many school curricula, but adding to children's metalinguistic knowledge through instruction in morphological awareness is critical for the development of linguistic flexibility. Carlisle (2003) and others (Deacon & Kirby, 2004; Lyster, 2002; Nagy, Berninger & Abbott, 2006) propose early instruction in morphological awareness, beginning as early as kindergarten, citing the strong association between morphological knowledge and children's performance on reading-related measures. Given that morphological features contributed more variance to the negative outcomes children exhibited in this study, instruction in morphology and the specific grammatical features of both NMAE and MAE dialect appears critical for children's success in school.

Secondly, to combat the effect of potential teacher bias in the classroom, it is essential that we increase teachers' awareness of and sensitivity to dialects other than MAE (Fogel & Ehri, 2006). Further, teachers need to know the benefit of bi-dialectism to children. Specifically, it has been observed that

children who possess more than one language system, whether that is an independent language or a dialect of a language, have more sophisticated language skills than children who are monolingual or speak only one dialect (Craig & Washington, 1994; Craig & Washington, 2004b). Thus, teachers can be made aware of the importance of children maintaining their dialect of upbringing while learning to decide when to use it, thereby empowering children through their language knowledge rather than disadvantaging them because of it. Finally, the results reported here support previous conclusions about the importance of the First grade year as a time when children who speak NMAE will be decreasing the use of features in their dialect to accommodate the expectations of the MAE classroom environment (Craig & Washington, 2004a; Connor & Craig, 2006). Therefore, if researchers want to fully understand children's linguistic flexibility across the grades, it seems imperative to start examining children longitudinally beginning at younger ages, such as preschool, to determine the extent to which they are aware of various linguistic forms in speech.

Limitations of the present study. Although the research reported here both adds to and extends the current literature base on children's use of dialect and their language and literacy growth and development, there are some limitations to the methodology that warrant mention. To begin with, only a randomly selected subsample of children received the extended battery of language and literacy assessments. Thus, certain outcome measures were given to more children than others, resulting in missing data, which was missing completely at random. For example, the WJ3 Academic Knowledge subtest had 674 scores in the fall whereas the WJ3 Letter-Word Identification had 1104. Thus, there was more power to find effects for certain outcomes. Additionally, although this study was longitudinal, it was essentially correlational and descriptive in nature; thus, only limited causal claims can be made about the effect of dialect on reading outcomes, even though a clear association and predictive relationship exists between them. While not an intended aspect of this study, the possible impact of teacher/examiner ethnicity on children's use of dialect could not be tested, nor were social and home environment factors considered as part of this investigation. Thus, future research could investigate whether a relationship exists between children's use of dialect and these demographic factors.


Conclusion. In summary, the results of this study help build convergence toward understanding the relationship between children's use of NMAE and their language and literacy development and achievement. Specifically, the findings show that African American children were the predominant users of NMAE in a sample of mixed ethnicity, and that First grade may be a critical year for children's language development inasmuch as bi-dialectal children are capable of dialect shifting at this age. Furthermore, a negative linear relationship exists between dialect and achievement gains. That is, not only do children who use more NMAE features show weaker concurrent language and literacy skills in first grade, NMAE use is also associated with weaker gains in these skills, with the exception of semantic knowledge. Rigorous experimental investigations are needed now to test the dialect-shifting abilities of children who speak NMAE and the impact of instruction on literacy achievement. Notably, in this study, NMAE use did not moderate the effect of the ISI intervention, nor did the ISI have any effect on children's dialect use. However, there was no explicit focus on dialect use in the intervention protocol. Recent research has shown that phonological awareness instruction positively affects reading outcomes for children who speak NMAE (Jeynes, 2008). However, based on classroom observations, both treatment and control teachers spent appreciable amounts of time on phonological awareness instruction. It may be that studies that explicitly teach the phonological and morphosyntactic features of both MAE and NMAE, and the concept of pragmatics within these domains, could advance our understanding of how dialect shifting occurs and whether it is subject to manipulation. To this point, it appears that children acquire their awareness about language through exposure and embedded instruction on language use and function, though the possibility of increasing children's academic performance through explicit instruction is too great to ignore these questions. Furthermore, linguists had previously begun experimenting with dialect readers, in which the lexicon and specific forms of NMAE dialect (particularly AAE) were included in the text and used with children who would recognize themselves, their culture, and their language in the writing. The theory behind this concept is seen today in the work on reader response criticism and in culturally sensitive instruction and literature, where children are observed to perform better when they can extrapolate ideas from the text that relate to, and can be integrated into, their own personal experience (Brooks, 2006). Labov (1995) and others (Wolfram & Schilling-Estes, 2006) saw the benefits of culturally conscious texts as well, which led to the development of programs such as Bridge, discussed earlier. Though this

effort was abandoned after the Ebonics controversy in the 1970s, contemporary scholars reserve judgment on this approach until further research can assess its value. Curiosity about, and even respect for, this method of language and literacy learning for children from diverse cultural backgrounds remains in the current research climate (Wolfram & Schilling-Estes, 2006), and new research tenuously offers support for the theory behind it. With continued, focused effort toward understanding the nature of literacy development and the influence, both positive and negative, of speaking dialects that differ from MAE, it is possible to close the gap in achievement and promote equality in education for all children.


APPENDIX A

PARENT CONSENT FORM



APPENDIX B

TEACHER CONSENT FORM



APPENDIX C

HUMAN SUBJECT APPROVAL



REFERENCES

American Speech-Language-Hearing Association. (2003). American English Dialects [Technical Report]. Available from www.asha.org/policy.

Anderson, B.L. (2002). Dialect leveling and /ai/ among African American Detroiters. Journal of Sociolinguistics, 6(1), 89-98.

Brackenbury, T. & Pye, C. (2005). Semantic Deficits in Children With Language Impairments: Issues for Clinical Assessment. Language, Speech, and Hearing Services in Schools, 36, 5–16.

Brooks, W. (2006). Reading representations of themselves: Urban youth use culture and African American textual features to develop literary understandings. Reading Research Quarterly, 41(3), 372-392.

Burton, V. J., & Watkins, R. V. (2007). Measuring word learning: Dynamic versus static assessment of kindergarten vocabulary. Journal of Communication Disorders, 40(5), 335-356.

Carlisle, J.F. (2003). Morphology matters in learning to read: A commentary. Reading Psychology, 24, 291-322.

Catts, H.W., Fey, M.E., Zhang, X., & Tomblin, J.B. (1999). Language basis of reading and reading disabilities: Evidence from a longitudinal investigation. Scientific Studies of Reading, 3(4), 331-361.

Champion, T. B., Hyter, Y. D., McCabe, A., & Bland-Stewart, L. M. (2003). "A matter of vocabulary": performances of low-income African American Head Start children on the Peabody Picture Vocabulary Test-III. Communication Disorders Quarterly, 24(3), 121-127.

Charity, A. H. (2008). African American English: An overview. Perspectives on Communication Disorders and Sciences in Culturally and Linguistically Diverse Populations, 15, 33-42.

Charity, A., Scarborough, H., & Griffin, D. (2004). Familiarity with school English in African American children and its relation to early reading achievement. Child Development, 75(5), 1340–1356.

Chatterji, M. (2006). Reading achievement gaps, correlates, and moderators of early reading achievement: Evidence from the Early Childhood Longitudinal Study (ECLS) kindergarten to first grade sample. Journal of Educational Psychology, 98(3), 489-507.

Connor, C. M. (2008). Language and literacy connections for children who are African American. Perspectives on Communication Disorders and Sciences in Culturally and Linguistically Diverse Populations, 15, 43-53.


Connor, C., & Craig, H. (2006). African American preschoolers’ language, emergent literacy skills, and use of African American English: A complex relation. Journal of Speech, Language, and Hearing Research, 49, 771-792.

Connor, C.M., Jakobsons, L.J., Crowe, E.C., & Granger Meadows, J. (2009). Instruction, student engagement, and reading skill growth in Reading First classrooms.

Connor, C.M., Morrison, F.J., Fishman, B.J., Schatschneider, C., & Underwood, P. (2007). THE EARLY YEARS: Algorithm-guided individualized instruction. Science, 315(5811), 464-465.

Connor, C.M., Morrison, F.J., & Underwood, P. (2007). A second chance in second grade: The independent and cumulative impact of first- and second-grade reading instruction and students’ letter-word reading skill growth. Scientific Studies of Reading, 11(3), 199-233.

Connor, C. M., Son, S., Hindman, A., & Morrison, F. J. (2005). Teacher qualifications, classroom practices, family characteristics and preschool experience: Complex effects on first graders' vocabulary and early reading outcomes. Journal of School Psychology, 43, 343-375.

Craig, H. K. (2008). Effective language instruction for African American children. In S. B. Neuman (Ed.), Educating the other America: Top experts tackle poverty, literacy, and achievement in our schools (pp. 163–185). Baltimore: Brookes.

Craig, H., Connor, C., & Washington, J. (2003). Early positive predictors of later reading comprehension for African American students. Language, Speech, and Hearing Services in Schools, 34, 31-43.

Craig, H., Thompson, C.A., Washington, J.A., & Potter, S.L. (2003). Phonological features of child African American English. Journal of Speech, Language, and Hearing Research, 46, 623-635.

Craig, H. K., & Washington, J. A. (1994). The Complex Syntax Skills of Poor, Urban, African- American Preschoolers at School Entry. Language, Speech, and Hearing Services in Schools, 25(3), 181-190.

Craig, H. K., & Washington, J. A. (2000). An Assessment Battery for Identifying Language Impairments in African American Children. Journal of Speech, Language, and Hearing Research, 43(2), 366-379.

Craig, H.K., & Washington, J.A., (2002). Oral Language Expectations for African American Preschoolers and Kindergartners. American Journal of Speech-Language Pathology, 11, 59–70.

Craig, H.K., & Washington, J.A. (2004a). Grade related changes in the production of African American English. Journal of Speech, Language and Hearing Research, 47, 450-463.


Craig, H.K., & Washington, J.A. (2004b). Language variation and literacy learning. In C.A. Stone, E.R. Silliman, B.J. Ehren, & K. Apel (Eds.). Handbook of language and literacy: Development and disorders. New York, NY: Guilford Publications.

Craig, H.K., & Washington, J.A. (2006). Malik goes to school: Examining the language skills of African American students from preschool-5th grade. Mahwah, NJ: Lawrence Erlbaum Associates.

Cross, J.B., DeVaney, T., & Jones, G. (2001). Pre-service teacher attitudes toward differing dialects. Linguistics and Education, 12(4), 211-227.

Cutting, L. E., & Scarborough, H. S. (2006). Prediction of reading comprehension: Relative contributions of word recognition, language proficiency, and other cognitive skills can depend on how comprehension is measured. Scientific Studies of Reading, 10(3), 277-299.

Darling-Hammond, L. (2004). The color line in American education: Race, resources, and student achievement. W. E. B. DuBois Review: Social Science Research on Race, 1(2), 213–246.

Darling-Hammond, L. (2007). The flat earth and education: How America’s commitment to equity will determine our future. Educational Researcher, 36(6), 318-334.

Darling-Hammond, L., Holtzman, D., Gatlin, S. J., & Heilig, J. V. (2005). Does teacher preparation matter? Evidence about teacher certification, Teach for America, and teacher effectiveness. Education Policy Analysis Archives, 13(42).

Deacon, H.S., & Kirby, J.R. (2004). Morphological awareness: Just “more phonological”? The roles of morphological and phonological awareness in reading development. Applied Psycholinguistics, 25, 223-238.

Dunn, L., & Dunn, L. (1981). The Peabody Picture Vocabulary Test-Revised. Circle Pines, MN: American Guidance Service.

Entwisle, D.R., & Alexander, K.L. (1993). Entry into school: The beginning school transition and educational stratification in the United States. Annual Review of Sociology, 19, 401-423.

Fishback, P.V., & Baskin, J.H. (1991). Narrowing the Black-White gap in child literacy in 1910: The roles of school inputs and family inputs. The Review of Economics and Statistics, 73, 725–728.

Flowers, L. A. (2007). Recommendations for research to improve reading achievement for African American students. Reading Research Quarterly, 42(3), 424-428.

Fogel, H., & Ehri, L. C. (2006). Teaching African American English forms to Standard American English-speaking teachers: Effects on acquisition, attitudes, and responses to student use. Journal of Teacher Education, 57(5), 464-480.

Foorman, B.R., Fletcher, J.M., Francis, D.J., & Schatschneider, C. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90(1), 37-55.

Goodman, K.S., & Buck, C. (1973). Dialect barriers to reading comprehension revisited. The Reading Teacher, 50(6), 454-459.

Green, L. (2002a). A descriptive study of African American English: Research in linguistics and education. International Journal of Qualitative Studies in Education.

Green, L. J. (2002b). African American English: A linguistic introduction. Cambridge, UK: Cambridge University Press.

Gutierrez-Clellen, V.F., & Simon-Cereijido, G. (2007). The discriminant accuracy of a grammatical measure with Latino English-speaking children. Journal of Speech, Language, and Hearing Research, 50, 968-981.

Hagoort, P., Hald, L., Bastiaansen, M. & Petersson, K.M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304, 438-441.

Haycock, K. (2001). Closing the achievement gap. Educational Leadership, 58(6), 6.

Hoover, W.A., & Gough, P.B. (1990). The simple view of reading. Reading and Writing: An Interdisciplinary Journal, 2, 127-160.

Horton-Ikard, R., & Ellis Weismer, S. (2007). A preliminary examination of vocabulary and word learning in African American toddlers from middle and low socioeconomic status homes. American Journal of Speech-Language Pathology, 16(4), 381-392.

Horton-Ikard, R., & Miller, J. F. (2004). It is not just the poor kids: The use of AAE forms by African American school-aged children from middle SES communities. Journal of Communication Disorders, 37(6), 467-487.

Jencks, C., & Phillips, M. (1998). The Black-White test score gap. Washington, DC: Brookings Institution Press.

Jeynes, W. H. (2008). A meta-analysis of the relationship between phonics instruction and minority elementary school student academic achievement. Education and Urban Society, 40(2), 151-166.

Juel, C., & Minden-Cupp, C. (2000). Learning to read words: Linguistic units and instructional strategies. Reading Research Quarterly, 35, 458-492.

Kamhi, A.G. & Koenig, L.A. (1985). Metalinguistic awareness in normal and language-disordered children. Language, Speech, and Hearing Services in Schools, 16, 199-210.

Keenan, J.M., Betjemann, R.S., & Olson, R.K. (2008). Reading comprehension tests vary in the skills they assess: Differential dependence on decoding and oral comprehension. Scientific Studies of Reading, 12(3), 281-300.

Kohler, C.T., Bahr, R.H., Silliman, E.R., Bryant, J.B., Apel, K., & Wilkinson, L.C. (2007). African American English dialect and performance on nonword spelling and phonemic awareness tasks. American Journal of Speech-Language Pathology, 16, 157-168.

Labov, W. (1969). The study of nonstandard English. Champaign, IL: National Council of Teachers of English.

Labov, W. (1995). Can reading failure be reversed: A linguistic approach to the question. In V. L. Gadsden & D.A. Wagner (Eds.), Literacy among African-American youth: Issues in learning, teaching, and schooling (pp. 39-68). Cresskill, NJ: Hampton Press, Inc.

Ladson-Billings, G. (2006). From the achievement gap to the education debt: Understanding achievement in U.S. schools. Educational Researcher, 35(7), 3-12.

Lee, J. (2002). Racial and ethnic achievement gap trends: Reversing the progress toward equity? Educational Researcher, 31(1), 3-12.

Lindo, E. J. (2006). The African American Presence in Reading Intervention Experiments. Remedial and Special Education, 27(3), 148-153.

Lyster, S.-A. H. (2002). The effects of morphological versus phonological awareness training in kindergarten on reading development. Reading and Writing: An Interdisciplinary Journal, 15, 261-294.

McLoyd, V.C. (1998). Socioeconomic disadvantage and child development. American Psychologist, 53(2), 185-204.

Nagy, W., Berninger, V.W., & Abbott, R.D. (2006). Contributions of morphology beyond phonology to literacy outcomes of upper elementary and middle-school students. Journal of Educational Psychology, 98(1), 134-147.

National Center for Education Statistics. (2007). The nation’s report card: Reading Highlights 2007 (NCES Publication No. 2007 496). Washington, DC: Author.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: Author.

Neuman, S.B. (2008). Educating the other America: Top experts tackle poverty, literacy, and achievement in our schools. Baltimore, MD: Brookes Publishing.

Newcomer, P.L., & Hammill, D.D. (1997). Test of language development-Primary (3rd ed.). Austin, TX: Pro-Ed.

Oetting, J. B., & Cleveland, L. H. (2006). The clinical utility of nonword repetition for children living in the rural south of the US. Clinical Linguistics & Phonetics, 20(7-8), 553-561.

Oetting, J. & Garrity, A.W. (2006). Variation within dialects: A case of Cajun/Creole influence within child SAAE and SWE. Journal of Speech, Language, and Hearing Research, 49, 16-26.

Oetting, J. B., & McDonald, J. L. (2001). Nonmainstream dialect use and specific language impairment. Journal of Speech, Language, and Hearing Research, 44(1), 207-223.

Oetting, J. B., & McDonald, J. L. (2002). Methods for characterizing participants' nonmainstream dialect use in child language research. Journal of Speech, Language, and Hearing Research, 45(3), 505-518.

Oetting, J. B., & Newkirk, B. L. (2008). Subject relatives by children with and without SLI across different dialects of English. Clinical Linguistics & Phonetics, 22(2), 111-125.

Oetting, J. B., & Pruitt, S. (2005). Southern African-American English use across groups. Journal of Multilingual Communication Disorders, 3(2), 136-144.

Oller, J. W., Jr., Kim, K., & Choe, Y. (2000). Testing verbal (language) and non-verbal abilities in language minorities: a socio-educational problem in historical perspective. Language Testing, 17(3), 341-360.

RAND Reading Study Group. (2002). Reading for understanding: Toward an R & D program in reading comprehension. Santa Monica, CA: RAND.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (second ed.). Thousand Oaks, CA: Sage.

Raudenbush, S. W., Bryk, A. S., Cheong, Y. F., Congdon, R., & du Toit, M. (2004). HLM6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International.

Rickford, J.R. (2004). African American English and other vernaculars in education: A topic-coded bibliography. Journal of English Linguistics, 32(3), 230-320.

Roberts, J.A. & Scott, K.A. (2006). The simple view of reading: Assessment and intervention. Topics in Language Disorders, 26(2), 127-143.

Scarborough, H. S., Terry, N. P., & Griffin, D. M. (2007). Addressing dialect differences: Advances in policy, research, and practice. Paper presented at the Annual Convention of the American Speech-Language-Hearing Association.

Seyfried, S. F. (1998). Academic achievement of African American preadolescents: The influence of teacher perceptions. American Journal of Community Psychology, 26(3), 381–402.

Seymour, H., Roeper, T., & de Villiers, J. (2003). Diagnostic Evaluation of Language Variation-Screening Test. San Antonio, TX: Harcourt Assessment, Inc.

Sirin, S. R. (2005). Socioeconomic status and academic achievement: A meta-analytic review of research. Review of Educational Research, 75(3), 417-453.

Snow, C.E., Porche, M.V., Tabors, P.O., & Harris, S.R. (2007). Is literacy enough? Baltimore, MD: Brookes Publishing Co.

Spaulding, T.J., Plante, E., & Farinella, K.A. (2006). Eligibility criteria for language impairment: Is the low end of normal always appropriate? Language, Speech, and Hearing Services in Schools, 37, 61-72.

Strickland, D.S. (2001). Early intervention for African American children considered to be at risk. In S.B. Neuman & D.K. Dickinson (Eds.), Handbook of early literacy research. New York, NY: Guilford Press.

Sutton, A., & Soderstrom, I. (1999). Predicting elementary and secondary school achievement with school-related and demographic factors. Journal of Educational Research, 92(6), 330–338.

Tenenbaum, H.R., & Ruck, M.D. (2007). Are teachers’ expectations different for racial minority than for European American students? A meta-analysis. Journal of Educational Psychology, 99(2), 253-273.

Terry, N. P. (2006). Relations between dialect variation, grammar, and early spelling skills. Reading and Writing, 19 (9), 907-931.

Terry, N.P., Connor, C.M., Thomas-Tate, S., & Love, M.R. (in press). Examining relationships among dialect variation, literacy skills, and school context in first grade. Journal of Speech, Language, and Hearing Research.

Terry, N. P., & Scarborough, H. S. (in review). Dialect variation and early reading: Relations to phonological knowledge and linguistic awareness.

Thomas-Tate, S., Washington, J., Craig, H., & Packard, M. (2006). Performance of African American preschool and kindergarten students on the Expressive Vocabulary Test. Language, Speech, and Hearing Services in Schools, 37(2), 143-149.

Thomas-Tate, S., Washington, J., & Edwards, J. (2004). Standardized assessment of phonological awareness skills in low-income African American first graders. American Journal of Speech-Language Pathology, 13(2), 182-190.

Thompson, C.A., Craig, H.K., & Washington, J.A. (2004). Variable production of African American English across oracy and literacy contexts. Language, Speech, and Hearing Services in Schools, 35, 269-282.

U.S. Census Bureau. (2006). Poverty: 2006 highlights. Retrieved February 26, 2009, from http://www.census.gov/hhes/www/poverty/poverty06/pov06hi.html.

Washington, J.A. (2001). Early literacy skills in African-American children: Research considerations. Learning Disabilities Research & Practice, 16(4), 213-221.

Washington, J., Craig, H., & Kushmaul, A. (1998). Variable use of African American English across two language sampling contexts. Journal of Speech, Language, and Hearing Research, 41, 1115-1124.

Wiig, E., Secord, W., & Semel, E. (1992). Clinical Evaluation of Language Fundamentals-Preschool. San Antonio, TX: The Psychological Corp.

Willis, A. I. (2008). Reading comprehension research and testing in the U.S.: Undercurrents of race, class and power in the struggle for meaning. New York, NY: Lawrence Erlbaum Associates.

Woodcock, R., McGrew, K., & Mather, N. (2001). Woodcock-Johnson Tests of Achievement, 3rd Edition. Rolling Meadows, IL: Riverside Publishing.

Wolfram, W. (2003). Enclave dialect communities in the South. In S.J. Nagle & S.L. Sanders (Eds.). English in the Southern United States. Cambridge, UK: Cambridge University Press.

Wolfram, W. & Schilling-Estes, N. (2006). American English (2nd Edition). Malden, MA: Blackwell.

Wright, L. (2003). Eight grammatical features of southern United States speech present in early modern London prison narratives. In S.J. Nagle & S.L. Sanders (Eds.). English in the Southern United States. Cambridge, UK: Cambridge University Press.

Zephir, F. (1999). Challenges for multicultural education: Sociolinguistic parallels between African American English and Haitian Creole. Journal of Multilingual and Multicultural Development, 20(2), 134-154.

BIOGRAPHICAL SKETCH

Catherine Conlin received her B.A. degree from Colgate University and her M.S. degree in Speech-Language Pathology from Emerson College. She completed her Ph.D. at The Florida State University under the direction of Carol Connor and Howard Goldstein. Catherine has professional experience treating children and young adults with language and reading disabilities and has presented at the national level with her colleagues from Florida State. Her primary research interests include language and literacy development, non-mainstream dialect use and children’s academic skills development, reading comprehension development and disorders, and the relation between language, literacy, and mathematics. Catherine lives with her husband and son in Boulder, Colorado.
