
EVALUATION LESSONS LEARNED: OK TEACHER ED COLLABORATIVE

Stacey M. Clettenberg The University of Tulsa [email protected]

The experiences of the Oklahoma Teacher Education Collaborative (O-TEC) evaluation team provide lessons that may help guide future evaluation designs. First, it is important to be flexible and responsive to changes required within an evaluation plan, particularly changes in evaluation staffing and methodology. Second, it is essential to propose evaluation strategies that take the organization's culture into account; without that understanding, it is difficult to make informed decisions about how to improve a project, and cultural barriers that are overcome can indicate project success as much as the identified outcome indicators do. Third, it is imperative to settle on the operational definitions that guide the evaluation questions. The danger lies in becoming too rigid too early in the process, leaving out important elements that would add to the evaluation findings; on the other hand, if definitional issues are not resolved during implementation, findings can be weakened. Another foundational element is a systematic and comprehensive participant tracking plan. For O-TEC, in hindsight, it might have been better to select, at the beginning of the grant, a cadre of teacher preparation students from each O-TEC institution to monitor and follow over the course of the project rather than trying to track participants after the fact. Finally, cluster evaluation, in which similar projects are grouped together to share and strengthen evidence of change, allows for collaborative efforts among evaluators and helped to strengthen the O-TEC project.

Oklahoma Teacher Education Collaborative (O-TEC): A Brief Overview

In Pursuit of Quality Teaching: Five Key Strategies for Policymakers, a publication of the Education Commission of the States (2000), highlights several strategies to support the improvement of teaching in the nation's public schools [1]. Strategy 1, "Ensure a diverse and high-quality approach to teacher preparation that involves solid K-12/postsecondary partnerships, strong field experience and good support for new teachers," shares its purpose with that of the Oklahoma Teacher Education Collaborative (O-TEC). To address documented national and Oklahoma needs in mathematics and science education, O-TEC was created and charged with developing effective strategies for the development of future teachers. The five-year initiative, directed by Robert Howard, involved a statewide network of 12 institutions of higher education, 2 community colleges, and myriad professional organizations and public schools. The guiding philosophy of O-TEC was that inquiry-based, reform-oriented teaching of science and mathematics should be field based; exploratory; rich in technology; seamless among mathematics, science, and technology; and network forming. To accomplish these goals, several activities and initiatives were developed, including: (a) summer academies, designed to increase pre-service and in-service teachers' experience with and ability to use hands-on, inquiry-based teaching methods and to improve statewide efforts to recruit a highly qualified and diverse array of individuals to careers in mathematics and science education; (b) curriculum and pedagogical reform, led by Master Teachers in Residence (MTIRs) to emphasize hands-on and inquiry-based pedagogical methods at the higher education institutions (MTIRs were a group of K-12 science and mathematics teachers with inquiry-based instructional experience, each assigned to a particular institution to coordinate the pedagogical reform and collaboration efforts of the institutions); (c) collaboration, by individuals and groups within the same institution and across different educational institutions, to transform teacher preparation in Oklahoma, including support for pre-service, new, and in-service teachers; and (d) changes in the classroom, documented through classroom observations of higher education faculty and in-service common education teachers.

The evaluation efforts for this initiative focused on three research questions:

1. As a result of O-TEC, are university faculty members incorporating reform, inquiry-based teaching methods into their classrooms?
2. As a result of O-TEC, are teacher preparation students learning the content they need to meet discipline standards?
3. As a result of O-TEC, are teacher preparation students using reform, inquiry-based teaching methods when they become teachers?

Implementation issues were addressed during the first three years of the grant, while outcome measures were initiated, at the recommendation of the NSF visiting committee assigned to O-TEC, in the final two years. The evaluation team underwent changes over the five years of the grant, losing one of the two original evaluators during Year 3 and the other mid-Year 4, adding two new evaluators at the beginning of Year 4, losing the original graduate students at the end of Year 4, and adding two new graduate students in Year 5. The University of Minnesota CETP Core Evaluation group became a guiding force for O-TEC during Year 4 of the grant. Within this framework, several evaluation issues are discussed below, including lessons learned that can inform future evaluation efforts.

Responsive Evaluation

The W.K. Kellogg Foundation, in its Evaluation Handbook (1997), discusses flexibility regarding the way evaluation projects are designed, implemented, and modified:

Therefore, evaluation approaches must not be rigid and prescriptive, or it will be difficult to document the incremental, complex, and often subtle changes that occur over the life of an initiative. Instead, evaluation plans should take an emergent approach, adapting and adjusting to the needs of an evolving and complex project [2].

Within the O-TEC project, this flexibility and responsiveness to change was necessitated by the change in the evaluation team over time, the critical shift in the focus of the evaluation from formative to summative, and the blending of O-TEC "home-grown" measures of effectiveness with the resources being developed by the CETP Core Evaluation group. Throughout this project, responsiveness to changes in the project, both administrative and programmatic, required receptivity toward new ideas, new directions, and new people, which, in turn, led to a stronger, more complete evaluation. Again, as stated in the Kellogg Foundation Evaluation Handbook, "The evaluation effort should leave an organization stronger and more able to use such an evaluation when outside support ends" (p. 2). Therefore, consistency within an evaluation plan is beneficial and desirable, but positive responsiveness to changes within a project should be paramount.

Organizational Culture and Evaluation

When using the scientific method in evaluation, the idea is to strip away the context within which variables operate, thus leading to generalizable results. Conversely, in the O-TEC project, organizational culture and context were a rich part of the project and directly affected implementation and outcomes. According to Steven Hodas (1993), "Organizations are defined by their lines of flow of power, information, and authority. Schools as workplaces are hierarchical in the extreme, with a pyramidal structure of power, privilege, and access to information" [3]. In the O-TEC project, the hierarchy within the university faculty led some faculty members to resist working with the MTIRs (K-12 experienced teachers). MTIR efforts to guide the inquiry-based pedagogical reforms were, at times, difficult and sometimes refused outright by university professors. Hodas writes regarding the culture of refusal:

I believe further that the depth of the resistance is generally and in broad strokes proportionate to the seriousness of the actual threat…The more innovative the approach, the greater its critique, and hence its threat to existing principles and order. When confronted with this challenge, workers have two responses from which to choose. They can ignore or subvert implementation of the change, or they can coopt or repurpose it to support their existing practices [3].

Several of the higher education teacher preparation faculty members either were not interested in changing the way they taught teacher preparation classes, did not consider the MTIRs to hold the status or credentials appropriate for aiding the reform effort, or felt threatened by the changes required. All of the responses described by Hodas were articulated over the years of the grant by various members of the teacher preparation faculty. In practice, faculty members who granted access to their classes and were amenable to learning and changing received resources and opportunities from the grant that other faculty members did not take advantage of. Consequently, instead of a unified, systematic change within and among departments, reform-based pedagogical changes were concentrated in pockets of opportunity and access. Thus, understanding and assessing an organization's culture in the planning/proposal stage of the grant can help grantees project the probability of success for specific elements of the program or, at the least, propose strategies that take organizational culture into account. "Without such information, it will be difficult to make informed decisions about how to improve your project. Furthermore, if environmental barriers to project implementation are understood, seemingly troubled projects might be deemed successful based on the barriers they overcame" [2].


Operational Definitions

Operational definitions explain, for a given concept in a specific research situation, how that concept is to be measured. Good operational definitions can make the research process more efficient and can improve the accuracy of communication about the research to others [4]. An unexpected finding of the O-TEC project was that most science and mathematics education professionals maintain their own personal definition of the term "inquiry-based teaching." The Master Teachers in Residence met as a group during the first year of the project to operationalize the concept, but remained frustrated by the lack of consensus among others, who used (or did not use) their definition. Even the NSF visiting committee assigned to O-TEC failed to reach agreement on the definition of "inquiry-based teaching." Additionally, defining the difference between the treatment and control groups was an issue to be resolved during the project; in other words, what made a person O-TEC (reform) trained or not reform trained? This issue was exacerbated by the inconsistent use of inquiry-based methodology within departments of teacher preparation and complicated by similar in-service opportunities offered in non-O-TEC schools. Eventually, a final decision was made concerning the operational definition of inquiry-based instruction and how many of what kind of classes were required for a student to count as O-TEC trained. It sounds like common sense to operationally define terms, but in a responsive, flexible project evaluation, the danger lies in becoming too rigid too early in the process, leaving out important elements that would add to the evaluation findings. On the other hand, if these issues are not resolved during implementation, findings can be weakened.
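
Purely as an illustration of what such an operational rule can look like (the field names, course labels, and two-course threshold below are hypothetical, not the criteria O-TEC actually adopted), a classification of "O-TEC trained" might be encoded as simply as this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PreparationRecord:
    """One teacher preparation student's coursework (hypothetical fields)."""
    student_id: str
    reform_courses: List[str] = field(default_factory=list)       # classes taught with inquiry-based methods
    traditional_courses: List[str] = field(default_factory=list)  # conventionally taught classes

def is_otec_trained(record: PreparationRecord, min_reform_courses: int = 2) -> bool:
    """Classify a student as O-TEC (reform) trained under an assumed threshold.

    The threshold and course categories are illustrative only; the project's
    actual definition specified how many of what kind of classes were required.
    """
    return len(record.reform_courses) >= min_reform_courses

# Example: a student who completed two reform-oriented methods courses
student = PreparationRecord("S001", reform_courses=["MATH-METHODS-R", "SCI-INQUIRY-R"])
print(is_otec_trained(student))  # True under the assumed threshold
```

Writing the rule down this explicitly, whatever threshold is chosen, is what makes later treatment-versus-comparison assignments reproducible.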

Tracking Participants

Judy Howard, Professor of Clinical Pediatrics at the University of California, Los Angeles, School of Medicine [5], writes about the recruitment and retention issues she encountered during a study of chemically dependent women and their children. In her report, she cites a conversation she had with S. Cohen (1972) concerning a UCLA study in which subjects were tracked, through a variety of means, over a 12-year period:

Maintaining contact with the subjects involved a tremendous amount of effort on the part of research staff members. However, it is the staff's impression that the intensive early intervention provided the foundation for the ongoing willingness of subject families to continue with a 10-year schedule of testing procedures [6].

In the O-TEC project, due to the number of institutions involved and the nature of the comparison desired (O-TEC trained teacher versus non-O-TEC trained teacher), retention for research purposes became quite a challenge. Part of the problem related back to the lack of a clear definition of a reformed teacher preparation class versus a non-reformed class. In the early days of the grant, making a clear delineation between "traditional" classes and "reform" classes, and then linking teacher preparation students to those particular classes, was a multifaceted task that required time to develop. As a result, later attempts to identify teacher preparation students who had taken particular classes, and then to find those students, proved difficult. In hindsight, it might have been better, at the beginning of the grant, to have selected a cadre of teacher preparation students from each O-TEC institution and from each comparison non-O-TEC institution to monitor and follow over the course of the project. Choosing a cadre of students at the beginning of the project might have constrained the size of the sample to some extent, but it would have made follow-up contact easier. As Cohen points out, had we made specific and frequent contact with our selected cadre members, tracking them from teacher preparation programs into the teaching field might have been more systematic and comprehensive than trying to track teacher preparation students after the fact.
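
As a sketch of the kind of cadre roster implied above (the file name, fields, and status values are all hypothetical, not O-TEC's actual tracking system), each selected student could be logged at entry and at every follow-up contact, so attrition and transitions into teaching remain visible throughout the project:

```python
import csv
import os
from datetime import date

# Hypothetical roster layout: one row per contact with a cadre member.
FIELDNAMES = ["student_id", "institution", "group", "contact_date", "status"]

def log_contact(path, student_id, institution, group, status):
    """Append one contact to the cadre roster file, creating it if needed."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "student_id": student_id,
            "institution": institution,
            "group": group,                          # "O-TEC" or "comparison"
            "contact_date": date.today().isoformat(),
            "status": status,                        # e.g. "in program", "student teaching", "in-service"
        })

# Example: initial enrollment contact for one cadre member
log_contact("cadre_roster.csv", "S001", "Institution A", "O-TEC", "in program")
```

However it is stored, the point is the same one Cohen makes: the roster is defined at the start and every contact is recorded, rather than reconstructing participation after the fact.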

Cluster Evaluation

Cluster evaluation involves grouping similar projects in clusters to build stronger evidence of change than one project by itself can generate. Cluster project personnel meet periodically to discuss issues pertinent to grant implementation and outcomes. The Kellogg Foundation Evaluation Handbook (1997) addresses cluster evaluation in this way:

In general, we use the information collected through cluster evaluation to enhance the effectiveness of grantmaking, clarify the strategies of major programming initiatives, and inform public policy debates…Cluster evaluation focuses on progress made toward achieving the broad goals of the programming initiative. In short, cluster evaluation looks across a group of projects to identify common threads and themes that, having cross-confirmation, take on greater significance [2].

In the O-TEC project, beginning in the third year of the grant, we were able to collaborate with fellow CETP evaluators through the University of Minnesota CETP Core Evaluation group. Representatives from O-TEC attended yearly core evaluation meetings and yearly CETP conferences, during which the core evaluation group provided evaluation support for the projects, validation of survey and observation instruments, training, and data compilation. Although several evaluation instruments had been generated and validated for use in the O-TEC grant before participation in the cluster evaluation group, O-TEC was responsive and flexible in blending evaluation elements from the core group with its own. The cluster evaluation was particularly useful to the new evaluation personnel and helped to focus the summative portion of the O-TEC evaluation. The interaction of professionals from the various projects helped to strengthen not only the projects as a group, but individual projects, such as O-TEC, as well. New ways to evaluate programs were shared, along with resourceful ways to compile and share information. Collaboration with fellow evaluators is highly recommended, even if only on an informal basis. Beyond the support provided to individual projects, evaluation ideas and data generated by cluster evaluation can help policymakers and stakeholders make effective and informed decisions regarding future programming and policy directions.

Acknowledgements

O-TEC is funded by a grant from the National Science Foundation, DUE-0119918; Project Director: Dr. Robert Howard, The University of Tulsa. O-TEC Evaluation Team Members: Stacey Clettenberg, Dale Johnson, Curtis Miller, Mary Stewart, Joyce Townsend, Andrea Diedrich.

Bio

Stacey M. Clettenberg is director of Clettenberg Educational Enterprises, an independent evaluation firm based in Houston, Texas. She has evaluated numerous NSF science, technology, and mathematics projects, as well as projects for the Department of Justice (COPS) and several Eisenhower grants.

References

[1] Education Commission of the States (2000). In pursuit of quality teaching: Five key strategies for policymakers. Denver, CO: ECS Distribution Center.
[2] Millett, R. A., Curnan, S., LaCava, L., Sharpsteen, D., Lelle, M., & Reece, M. (1997). W.K. Kellogg Evaluation Handbook. Battle Creek, MI: W.K. Kellogg Foundation.
[3] Hodas, S. (1993). Technology refusal and the organizational culture of schools. Education Policy Analysis Archives.
[4] City College of NY Psychology Department (1998). Research methods module: Operational definitions. Department of Education's Minority Science Improvement Program #P120A70044-97 and 98.
[5] Howard, J. (1992). Subject recruitment and retention issues in longitudinal research involving substance-abusing families: A clinical services context. NIDA Research Monograph, 117:155-165.
[6] Cohen, S., Parmelee, A. H., Beckwith, L., & Sigman, M. (1992). Biological and social precursors of 12-year competence in children born preterm. In Greenbaum, C., & Auerbach, J. (Eds.), Cross national perspectives in children born at risk. Norwood, NJ: Ablex Publishing Corporation.