
Assessing the Impact of Technology in Teaching and Learning
A Sourcebook for Evaluators

Editors: Jerome Johnston and Linda Toms Barker

Published by the Institute for Social Research • University of Michigan

With funding from the Learning Technologies Division, Office of Educational Research and Improvement, U.S. Department of Education

© 2002 Regents of the University of Michigan
First Printing: April 2002

The preparation of this book was underwritten by the U.S. Department of Education, Office of Educational Research and Improvement, Learning Technologies Division. The contents do not necessarily represent the positions or policies of the funder, and you should not assume endorsement by the Federal Government.

Table of Contents

Foreword ............................................................. vi
Introduction .......................................................... 1
1. Learner Outcomes in the Cognitive Domain ........................... 9
2. Learner Outcomes in the Affective Domain .......................... 35
3. Learner Outcomes in Adult Education ............................... 67
4. Teacher Outcomes: Changed Pedagogy ................................ 87
5. Teacher Outcomes: Improved Technology Skills ..................... 119
6. Technology Integration ........................................... 139
7. The Evaluation of Dissemination Efforts .......................... 161
8. Evaluator Contact Information .................................... 179
9. Measurement Appendix ............................................. 183

Foreword

In 1994 Congress passed the Improving America's Schools Act and a subsequent appropriation that included $45 million to enhance the use of educational technology in American schools. In the ensuing five years Congress steadily increased the funding for technology until, in 1999, the appropriation was $765 million. As it increased funding for technology, Congress asked for better evidence that the investment was having a measurable impact on America's students.

Within the Department of Education, OERI was responsible for three programs that accounted for about one-third of the total technology appropriation: Star Schools, Technology Innovation Challenge Grants, and Regional Technology in Education Consortia. By the year 2000 OERI was monitoring 130 grantees. With each new competition for funds in these program areas, OERI increased the emphasis on evaluation and refined the evaluation requirements in ways that would ensure that OERI would secure the information required by Congress when the projects were finished. More than just accountability, OERI wanted to inform the nation about how best to use technology in the schools.

OERI recognized that increasing the evaluation requirements on paper would not automatically yield better evaluations. A consensus needed to be built among grantees, evaluators, and OERI program monitors about what good evaluations looked like.
Among evaluators there was little agreement about what constitutes effective research designs or how various concepts (e.g., technology integration, changed pedagogy, learner achievement) should be measured.

To address this problem, Cheryl Garnette, Director of the Learning Technologies Division of OERI, saw a solution in convening evaluators for an extended retreat where they could learn about the Department's evaluation requirements and share with each other insights about how best to meet them. More than a forum for sharing, she saw the importance of providing opportunities for evaluators to enhance their methodological skills so they could gather the evidence needed by the Division. With these goals in mind, Garnette authorized the first Technology Evaluation Institute in 1999. Held at the University of Michigan, it lasted three days and included 26 sessions on topics ranging from causal mapping to improving the wording of survey measures. Each session was led by an expert in the evaluation community assigned to review the topic, identify best practices, and lead a discussion around the key points.

The first Institute was a success and provided valuable lessons for all who attended, but there was no documentation of the lessons learned. For the second Institute, held the following summer, a book was planned well in advance. Authors were identified ahead of time who would take responsibility for reviewing a key topic area (including reviewing measures used in the evaluations of many of OERI's funded projects), present their review at the Institute, and then revise their review to reflect the comments and insights of those who attended their session. This Sourcebook is the result.

It is designed to be a resource for individuals who are conducting evaluations of technology projects. It includes concepts, strategies, and ideas to stimulate the reader's thinking about how rigorous evaluation activities, applied early in the process of a project's development, can lead to useful results about student outcomes (including achievement), teacher practices and behaviors, and school climate when technology is implemented effectively. In spite of the wealth of knowledge that the Sourcebook provides and the experience upon which it is based, it does not serve as the single authority on evaluation strategies to assess technology. It does, however, contribute to the growing knowledge base that serves to inform the education community and helps to link what is learned from evaluation with educational practice. The book has broad implications beyond the review of federally funded projects, although much of the information reported herein was gleaned from efforts supported in whole or in part by the Department of Education. It is intended specifically for use by the community of evaluators and educators who are concerned with assessing the role of technology in American education.

Introduction

Linda Toms Barker and Jerome Johnston
Berkeley Policy Associates • ISR, University of Michigan

Since 1989 the U.S. Department of Education has invested close to a billion dollars in experiments to find compelling uses of technology in public education. The rationale has varied from simply preparing students to function in a technology-rich society to improving instruction in traditional school subjects. If the Department's experiments are going to provide lessons for educators, careful evaluation of each experiment is required.
This sourcebook is a resource for the community of evaluators involved in evaluating the more than 100 projects funded by the Star Schools and Technology Innovation Challenge Grants (TICG) programs. The sourcebook provides an overview of measurement issues in seven areas as well as examples of measures used in current projects. Although designed to address the needs of evaluators of Star Schools and Technology Innovation Challenge Grant projects, it will be of value to the broader community of evaluators concerned with assessing the role of technology in American education.

Background

Given that these technology projects represent a substantial financial investment, it is imperative that OERI be able to report on their success. In the earliest years of the projects, most data being reported were limited to the extent of implementation efforts, with little focus on outcomes or the design features associated with success. In 1998 OERI began an effort to enhance the quality of evaluative information coming out of Star Schools and Technology Innovation Challenge Grant projects. A Technology Evaluation Institute was conducted in the summer of 1999 to bring evaluators together from around the country in a formal forum for sharing ideas and experiences in evaluating the success of the grant projects. The institute was so successful that OERI decided to hold another one in the summer of 2000.

At the first Technology Evaluation Institute, one of the best-received sessions was the Instrument Exchange, in which evaluators were encouraged to bring copies of their evaluation tools to share with other evaluators. Participants went home from the institute with large packets of instruments designed to measure anything from students' technology skills to changes in pedagogy. While this exchange activity was extremely popular and many evaluators reported finding valuable ideas in these examples, the collection of instruments lacked specific context, description, and information about how the instruments were being used. So for the following Institute it was decided to replace the exchange activity with more careful reviews of current practice in evaluating various aspects of these educational technology projects, and the curriculum for the Institute evolved into a more in-depth examination of current practices in the evaluation of technology use in schools. Thus for the second Technology Evaluation Institute, evaluators were asked to submit example instruments ahead of time so that, for each of a number of different measurement areas, a presenter could provide background information and describe the