Thinking Through Text Comprehension I: Room

THE BEHAVIOR ANALYST TODAY, VOL. 13, NO. 1, 2012. ISSN: 1555-7855

Building the Case for Large Scale Behavioral Education Adoptions

Zachary R. Layng & T. V. Joe Layng
Mimio–Headsprout

Abstract
Behaviorally-designed educational programs are often based on a research tradition that is not widely understood by potential users of the programs. Though the data may be sound and the prediction of outcomes for individual learners quite good, those advocating adoption of behaviorally-designed educational programs may need to do more in order to convince school districts to adopt large-scale implementations of their programs. This article provides an example of a successful approach that suggests quickly gathering local evidence using more familiar evidence-based and experiential methods may increase the likelihood of program adoption. The actual report to the large urban district is included.

Keywords: summative, formative, evidence, pilot programs

Author note: The authors would like to thank Melinda Sota for her critical comments and editorial contribution.

Working with school districts requires the ability to quickly adapt to the needs of each district and to provide as much useful information as possible to enable them to make decisions. Districts feel most comfortable when they can review evidence of program effectiveness obtained from their own schools. Accordingly, it is necessary to be able to quickly respond to requests and to provide data that may not meet "gold standard" requirements, but that do help provide a basis for sound decision making. This article provides a case study of such an effort, and may help to elucidate how in-district data may be quickly generated and combined with other evidence to strengthen the case for program adoption.

The program being considered for a district-wide summer school adoption was Headsprout® Reading Comprehension (HRC). HRC is an online program designed to directly teach children how to comprehend text. It provides instruction in literal, inferential, main idea, and derived word meaning comprehension, as well as vocabulary instruction using stimulus equivalence–like procedures. Learners interact with the 50-episode (lesson) program for about 30 minutes a day. As with all Headsprout programs, HRC underwent extensive user testing during the course of its development: more than 120 learners, one at a time, interacted with the various program segments, providing data on their effectiveness and occasions for revisions. HRC, first its components and then the entire program, was revised and retested until nearly all learners met criterion. As of this writing, more than 35,000 online learners have provided, and continue to provide, data for further evaluation and revision. An overview of HRC and its development can be found in Leon et al (2011). A detailed description is provided of the contingency analytic foundations of HRC by Layng, Sota, & Leon (2011), of the analysis that determined its content by Sota, Leon, and Layng (2011), and of the programming and testing of the repertoires and relations involved by Leon, Layng, & Sota (2011). The methods Headsprout employs in the design, development, and testing of all its programs have been described by Layng, Twyman, & Stikeleather (2003), Twyman et al (2004), and Layng, Stikeleather, and Twyman (2006).

Program evaluation can be of two kinds, formative and summative. In formative evaluation, criteria are established for learner performance. The program, or a component thereof, is tested to determine if learners reach the specified criteria. If learner behavior does not meet those criteria, the program is revised and retested until nearly all criteria are met (Layng et al, 2006; Markle, 1967; Twyman et al, 2004). In summative evaluation, evidence is gathered after the program has been developed in an attempt to determine program effectiveness. Summative evaluation often employs pretest versus posttest comparisons or group comparisons of various types.

As noted by Layng et al (2006):

    Whereas group designs, typically the basis for summative evaluation, are readily accepted as providing scientific evidence for program effectiveness, single subject designs typically form the basis for formative evaluation. While both group and single subject designs are descended from highly successful scientific traditions, and both may provide equally rigorous and informative results, single subject design is relatively less understood. Both do, however, differ in the questions asked; one asks about the behavior of groups, the other asks about the behavior of individuals.

Layng et al (2006) go on to describe a 3 × 3 matrix that illustrates the relation between evidence types found in both formative and summative evaluation, and the results of their intersection. See Table. The columns comprise types of summative evaluation, and the rows types of formative evaluation. For summative evaluation, the columns are: A. Experiential Assessment, B. Evidence-Based Assessment, and C. Scientific Control Group Research & Assessment. For formative evaluation, the rows are: 1. Experiential Program Development, 2. Evidence-Based Program Development, and 3. Scientific Program Development. The intersection of each cell describes the implications for programmatic decision-making.

Table. 3 × 3 Matrix. The level of rigor for each type of evaluation is indicated by the letters A–C for summative evaluation, with column C representing the most rigorous; the numbers 1–3 indicate the level of rigor for each type of formative evaluation, with row 3 representing the most rigorous. Cell 3–C represents the most rigorous intersection of formative & summative evaluation.

Columns (Approaches to Summative Evaluation: Basis for Outcomes Assessment): A. Experiential Assessment; B. Evidence-Based Assessment; C. Scientific Controlled Group Research & Assessment.
Rows (Approaches to Formative Evaluation: Basis for Program Revision): 1. Experiential Program Development; 2. Evidence-Based Program Development; 3. Scientific Program Development.

Cell 1–A: Cannot predict group or individual performance. Whether it works or not with groups or individuals is purely subjective; a matter of opinion; argued on point of view – a matter of social agreement.

Cell 1–B: Provides some indication that the program may be effective with a group; but cannot confidently predict group or individual performance. Whether it works or not, it is not clear what program elements, alone or together, are responsible.

Cell 1–C: Can confidently predict group performance, but cannot predict an individual's performance (Sidman, 1960).

Cell 2–A: If limited tryouts, may indicate that the program might work with those tested; but cannot confidently predict group or individual performance. Whether it works or not, it is not clear what program elements, alone or together, are responsible, but can lead to reconsideration of principles or the way they were applied.

Cell 2–B: Provides some indication that the program may be effective with a group; but cannot confidently predict group or individual performance. Still primarily a matter of social agreement, but has some validity by relation to past research and perhaps limited tryouts.

Cell 2–C: Can confidently predict group performance; but cannot confidently predict an individual's performance. If it works, it is not really clear why; if it does not work, can lead to re-evaluation of principles or the way they were applied. Not clear where the problem is.

Cell 3–A: Able to predict group performance based on individual performance; and can confidently predict an individual's performance. Since the program is able to predict an individual's performance, some prediction of group performance is implied; may have some validity by relation to past research.

Cell 3–B: Able to predict group performance based on individual performance; and can confidently predict an individual's performance. If it doesn't work, issues are in transfer – able to identify and isolate variables to change and revise for retest. Individual data are not lost and can be analyzed in relation to outcome.

Cell 3–C: Can confidently predict group performance; and can confidently predict an individual's performance. If it doesn't work, issues are in differences in formative criteria & summative measurement instruments – able to identify and isolate variables to change and revise criteria & program for retest, or to revise summative measurement instruments. Individual data are not lost and can be analyzed in relation to outcome.

Since HRC was developed within a rigorous scientific development process (see Twyman et al, 2004), we are able to predict group performance based on individual performance (see Table, Row 3). For example, Layng, Twyman & Stikeleather (2004) were able to show that data obtained from single subject research in the Headsprout laboratory were nearly identical to the data obtained for about 32,000 learners accessing the program over the Internet in uncontrolled settings. Accordingly, any additional summative … When those data come from short-duration efforts, often a pilot program that falls into columns one and two of the 3 × 3 matrix, they can be used to suggest the potential effectiveness of an educational product in a particular setting. When such data are obtained with a program that has gone through a scientific formative development process, even more confidence may be warranted than would otherwise be provided by either formative or summative data alone. When experiential and evidence-based information confirms the scientific program development data (e.g., Twyman, Layng, & Layng, 2011), potential purchasers feel even more comfortable making large purchase decisions.
