
Hot Topics in Reading Science Commentary

What Does Science Say About Orton-Gillingham Interventions? An Explanation and Commentary on the Stevens et al. (2021) Meta-Analysis

by Emily Solari, Yaacov Petscher, and Colby Hall

A recent meta-analysis published in Exceptional Children (Stevens et al., 2021) looked at the effects of Orton-Gillingham (OG) reading interventions on reading outcomes for students who have word reading difficulties. The results of the study, which showed no statistically significant effect but a practically important effect size (explained further below), have led to questions and lively conversation among practitioners and reading researchers. One of the things that is important about science is that it is constantly evolving: this is true in education science as much as it is in the health sciences. Because this journal is committed to translating empirical findings from reading research in order to make education science accessible to practitioners, the intent of this commentary is to provide a clear description of the findings reported in this recent meta-analysis, addressing the degree to which they align with those reported in similar reviews of OG interventions. We discuss the degree to which the findings represent an evolution of reading science and their implications for instructional practice, policy, and future research.

What is a Meta-Analysis?

The purpose of a meta-analysis is to systematically combine and analyze data from published research studies to better understand what that body of research says about a particular question. Meta-analyses of intervention research studies combine results from multiple, individual research studies in order to provide a sense of the overall strength of intervention effects. When multiple studies are combined, there is potential to better inform the field, because individual research studies have flaws. Meta-analyses can enable us to find a "signal" (in other words, true information about intervention effects) amid the "noise" (in other words, the random, unwanted fluctuation in study results that reflects study flaws or idiosyncrasies rather than truly reflecting intervention effects). We can place more trust in findings when a larger body of research is studied together, especially when rigorous statistical designs and methods are used to calculate average intervention effects across studies.

That said, as with all scientific approaches and methodologies, there are some limitations to meta-analysis. When conducting a meta-analysis, research teams must make decisions about which studies to include and which to exclude based on a predetermined set of rules or criteria. For example, they may require studies to use certain types of rigorous research designs (such as experimental or quasi-experimental designs and not case study designs), or to include measures of particular types of reading outcomes (such as measures of word reading outcomes). This means that not all studies that have researched a particular intervention will be included in a meta-analysis. Researchers often begin a meta-analysis intending to answer a new set of questions, ones that the individual studies could not answer. When individual studies are combined, the nuances of the studies are lost in favor of answering these new questions. Meta-analysis also does not answer all the possible questions about how well a specific intervention or instructional approach works.
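To make the idea of "calculating average intervention effects across studies" concrete, here is a minimal sketch of one common pooling method, fixed-effect inverse-variance weighting, in which each study's effect estimate counts in proportion to its precision. The effect sizes and standard errors below are invented for illustration; they are not values from Stevens et al. (2021), and published meta-analyses typically use more elaborate models (for example, random-effects models).

```python
# A minimal sketch of fixed-effect inverse-variance pooling, the core idea
# behind "averaging" intervention effects across studies. All numbers are
# invented for illustration; they are NOT values from Stevens et al. (2021).

effects = [0.45, 0.10, 0.30, 0.05]  # standardized mean difference from each study
ses     = [0.20, 0.15, 0.25, 0.10]  # standard error of each study's estimate

# Weight each study by the precision of its estimate (1 / SE^2), so larger,
# more precise studies count more toward the pooled average.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")
# If the 95% confidence interval (pooled effect +/- 1.96 * SE) excludes zero,
# the average effect is called statistically significant.
```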
What Did Stevens et al. Do in Their Meta-Analysis?

The research team examined the effects of interventions tested in 16 studies that met their pre-established inclusion criteria: meta-analyzed studies were studies of small-group OG interventions that only targeted foundational reading skills. The inclusion criteria also required that the population of students in each study had word reading difficulties. The majority of the interventions studied by the authors were branded as OG (that is, Alphabetic Phonics, Barton Reading and Spelling System, Dyslexia Training Program, Fundations, Herman Method, Language!, Lindamood Bell, Project Assist, Project Read, Recipe for Reading, Slingerland Approach, the Spalding Method, S.P.I.R.E., Starting Over, Take Flight, Wilson Reading System, The Writing Road to Reading); a smaller number were described as being based on OG principles. The authors used meta-analysis to estimate the average effect of OG interventions across these 16 studies and the extent to which the quality of the research design used in the studies and the year when the studies were published were related to the strength of the effect.

What Did the Authors Report?

The authors found that, on average across all the studies included in the meta-analysis, students with word-level reading difficulties who received OG interventions did not make statistically significant improvements in foundational skills, vocabulary, or comprehension outcomes when compared to groups who did not receive OG interventions. Descriptively speaking, students across all studies who received OG interventions did have higher mean scores at posttest than their peers who did not receive the OG interventions. The authors calculated the average size of the difference in effects (between students who received OG and those who did not) and reported that students who received OG interventions had scores that were higher by 0.32 of a standard deviation in foundational reading skills and by 0.14 of a standard deviation for vocabulary and comprehension outcomes.

What Do the Results Mean? Statistical Significance vs. Practical Importance

Although the authors found no statistically significant effect of OG interventions, they did report an effect size of 0.22, a value that many scientists would classify as "small but meaningful." Because of this seemingly conflicting information, it is important to understand the difference between statistical significance and practical importance. Very simply, statistical significance tells us whether a result from a statistical analysis is likely due to chance, and practical importance tells us whether that result is meaningful enough to act on in some way (for example, recommend an intervention, do more research, etc.). Statistical significance is influenced by many factors, including the number of subjects who participated in the study, the number of variables in the statistical analysis, and what kind of statistical analysis is being conducted. Practical importance is a way to go beyond statistical significance to say something more about the result.

An average effect size of 0.22 is, practically speaking, a meaningful one. To put this effect size in perspective, Russell Gersten et al. (2020) recently meta-analyzed 33 rigorous studies of reading interventions with students with or at risk for reading difficulties in Grades 1-3 and found a significant positive effect on reading outcomes with a mean effect size of 0.39. Jeanne Wanzek et al. (2018) also determined that extensive reading interventions for students with or at risk for reading difficulties in Grades K-3 produced significant positive effects on reading outcomes, with an average effect size of 0.37. Both studies interpreted effects of this size as representing meaningful improvement in reading outcomes as a result of reading intervention.
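One way to see this distinction is with a hypothetical example (ours, not data from the meta-analysis): hold the effect size fixed at 0.22 and vary only how many students are studied. The sketch below uses a standard two-sample t-test (via scipy) to show that the identical difference can be statistically significant or not, depending entirely on sample size.

```python
# Hypothetical illustration (not data from Stevens et al.): the same effect
# size can be "significant" or "not significant" depending on how many
# students were studied.
from scipy.stats import ttest_ind_from_stats

d = 0.22  # standardized mean difference between intervention and comparison

for n_per_group in (30, 100, 400):
    # Two groups whose means differ by 0.22 SD, with SD = 1 in both.
    stat, p = ttest_ind_from_stats(mean1=d, std1=1.0, nobs1=n_per_group,
                                   mean2=0.0, std2=1.0, nobs2=n_per_group)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n_per_group:>3} per group: p = {p:.3f} ({verdict})")

# The effect size (0.22) never changes, but with small samples the test
# cannot rule out chance; with a large sample the same difference becomes
# statistically significant. Effect size and significance answer
# different questions.
```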
The lack of statistically significant findings from the Stevens et al. meta-analysis is generally consistent with a previous systematic review of 12 OG intervention studies conducted by Ritchey and Goeke (2006). The authors descriptively summarized results and reported that "the findings were not, however, all positive in favor of OG instructional programs. Nor were the findings [all] statistically significant" (p. 181) in favor of either OG or the alternative instructional program to which OG was compared. These findings are also consistent with published reports from the What Works Clearinghouse (WWC), an agency within the U.S. Department of Education that independently reviewed several studies of both branded and unbranded OG programs. The WWC found that the evidence in favor of OG programs is limited, either due to a mix of positive and negative effects or, more frequently, because available studies of such programs do not meet WWC quality standards (for example, WWC, 2010a; 2010b; 2010c; 2010d).

What Cannot Be Concluded From These Results?

Stevens et al. (2021) were very cautious about interpreting their findings. They concluded by observing that "the findings from this meta-analysis do not provide definitive evidence that OG interventions significantly improve the reading outcomes of students with or at risk for word-level reading difficulties."

Because OG interventions incorporate explicit and systematic phonics-based instruction, it is plausible that the mean effect size of 0.32 is rooted in the delivery of this kind of instruction, which has been shown to work for students with reading difficulties. The one component of OG approaches for which there is less research evidence is the kinesthetic/tactile instructional component, often called the "multisensory" component (Al Otaiba et al., 2018). There is little research to suggest this component adds value to explicit and systematic phonics-based instruction.
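As a closing interpretation aid (ours, not the commentary authors'), standard-deviation effect sizes like the 0.14, 0.22, 0.32, and 0.39 discussed above can be translated into more intuitive percentile terms, assuming approximately normally distributed scores. The sketch below shows where the average intervention student would be expected to score relative to the comparison group.

```python
# Interpretation aid (ours, not the commentary's): translate a standardized
# mean difference d into percentile terms, assuming roughly normal scores.
from statistics import NormalDist

for d in (0.14, 0.22, 0.32, 0.39):
    # Percentile of the comparison-group distribution at which the average
    # intervention student would be expected to score.
    pct = NormalDist().cdf(d) * 100
    print(f"d = {d:.2f}: average intervention student at about "
          f"percentile {pct:.0f} of the comparison group")

# For example, d = 0.32 moves the average student from the 50th percentile
# to roughly the 63rd percentile of the comparison distribution.
```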