Monitoring and Evaluation at Room to Read: Concepts and Practice in an International Education Organization
November 2010
Michael Wallace, Rebecca Dorman, Peter Cooper

Introduction

Room to Read is a nonprofit organization committed to transforming the lives of children in developing countries by focusing on literacy and gender equality in education. Working in collaboration with local communities, partner organizations, and governments, we develop literacy skills and a habit of reading among primary school children, and support girls to complete secondary school with the relevant life skills to succeed in school and beyond. In 2000, Room to Read began working with rural communities in Nepal to build schools and establish libraries, and we now work in nine countries: seven in Asia and two in Africa. We have four core programs:

Reading Room. We establish school libraries and stock them with local-language children's books, original Room to Read titles, donated English-language books, games, and furniture to create a child-friendly learning environment. (9,696 libraries established)

School Room. We partner with local communities to build schools so children can learn in a safe, child-friendly environment. (1,129 schools constructed)

Local Language Publishing. We source new content from local writers and illustrators and publish high-quality children's books in the local language to distribute throughout our networks. (553 titles, 4.2 million books published)

Girls' Education. We provide long-term, holistic support enabling girls to pursue and complete their secondary education. (10,042 girl scholars supported)

Our recently completed strategic plan is transforming our four core programs into two "pillars": literacy and girls' education. The theory of change for our new strategic direction is reflected in "the house" (see Appendix 1), which rests on the foundation of our activities, contains our literacy and girls' education outcomes, and reaches toward our long-term impact of independent readers, skilled secondary graduates, and educated children.

M&E: People and Systems

The M&E Team has staff members at the Global Office in San Francisco, at Regional Offices in Asia and Africa, and in individual Country Offices. The Global Office leads activities related to our program indicators (common measures across all projects) and to cross-national studies and evaluations. The Regional Offices lead in supporting country-level research, monitoring, and evaluation activities. The Country Offices lead the field-level implementation of both monitoring and evaluation activities.

Our monitoring system is based on program-specific conceptual frameworks that include goals, objectives, and indicators (see Appendix 2). Each program has an overall goal, such as promoting literacy and the habit of reading in children, that is elaborated in program objectives and measured through program indicators collected annually for all of our active projects.
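As a concrete illustration, the short sketch below shows how an annual percentage indicator of this kind might be aggregated from project-level records. The record structure, field names, and sample values are hypothetical assumptions for illustration; they are not drawn from Room to Read's actual project database.

```python
# Hypothetical sketch: aggregating one annual program indicator
# ("Percentage of libraries in which school personnel received training")
# from project-level records. Field names and sample data are invented
# for illustration only.

from typing import Dict, List


def pct_libraries_with_trained_staff(libraries: List[Dict]) -> float:
    """Return the percentage of libraries whose school personnel were trained."""
    if not libraries:
        return 0.0
    trained = sum(1 for lib in libraries if lib.get("staff_trained"))
    return 100.0 * trained / len(libraries)


if __name__ == "__main__":
    # Invented records for three active library projects.
    records = [
        {"library_id": "NP-001", "staff_trained": True},
        {"library_id": "NP-002", "staff_trained": False},
        {"library_id": "ZM-014", "staff_trained": True},
    ]
    print(f"Training coverage: {pct_libraries_with_trained_staff(records):.0f}%")
```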
Our evaluation efforts include both cross-national evaluations and studies that are initiated and managed by the Global Office and in-country evaluations and studies that are initiated and led by our individual Country Offices. An example of the former is our school library cross-national evaluation, which is designed around the following research questions:

1. What is the impact of our school library program on students' reading habits and attitudes towards reading?

2. How do different school and student background characteristics (teacher attitudes towards reading, the existence of a reading curriculum in schools, students' home languages, parents' education and attitudes towards reading) influence the effect of our program on students' reading habits and attitudes towards reading?

The cross-national evaluation began in 2009 in Laos, Nepal, and Zambia, and expanded this year to include India, Sri Lanka, and South Africa. An example of an in-country effort is our study of girl dropouts in Cambodia, which has the highest dropout rate of any of our countries. This study will examine the factors that lead girls to leave school early and assess changes in their lives after leaving school.

Lessons Learned

Key lessons learned in M&E thus far include the following:

Monitoring:

o "What gets measured gets done." This is perhaps best illustrated by one of our program indicators, "Percentage of libraries in which school personnel received training," which captures training in library services for librarians, teachers, and administrators. In 2008 this indicator was 67 percent. This result was communicated to our Country Offices, and training was identified as an area needing improvement. As a result of increased attention to training, the indicator rose to 92 percent in 2009.

o Push ownership, responsibility, and accountability down. As our use of monitoring data increases, our need for quality data at all levels of the organization also increases. Quality control must start at the Country Office level, where the data is collected and entered into our project database. We formally designated the most senior M&E Officer in each country as the person responsible for high-quality data and timely reporting, and the Global Office communicates directly with this person on all matters related to program indicators. This has streamlined communication, raised awareness of the importance of program indicators, and increased ownership of the quality of the data.

o Use past data to inform future program targets. Each country develops an annual plan that includes program implementation targets. For 2011, program indicators became part of this planning process, and our 2008 and 2009 indicator results provided a basis for 2011 targets (a simple illustration of this kind of target-setting appears after this section).

Evaluation:

o Collect only data that will be used. We had 56 program indicators in 2008 and reduced that to 46 in 2009. Going forward, we will eliminate indicators that are not useful for program planning and improvement, and add indicators to measure our evolving program initiatives. In 2011, for example, we are adding an indicator for book leveling (labeling and organizing books according to reading level) to give us more information than our book classification indicator currently provides.

o Link recommendations to data analysis. In 2007, we conducted in-country evaluations of our school library and local language publishing programs. These evaluations, designed and carried out by our Country Offices, varied in topic and rigor. We worked with our Country M&E and Program Teams to ensure that the recommendations were directly linked to the evaluation analysis. For example, one country found that the library training was too complicated, and recommendations were made to tailor the training to the audience.

o Ensure feedback to program implementation. To help ensure that the 2007 in-country evaluations led to program improvements, we followed up with the Country Teams to identify the specific program actions taken to implement the recommendations of these evaluations.
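The target-setting lesson above could be mechanized in many ways; the sketch below shows one simple, hypothetical rule (best recent result plus a modest improvement, capped at 100 percent). The rule and numbers are illustrative assumptions, not Room to Read's actual planning formula.

```python
# Hypothetical sketch: proposing a next-year target for a percentage
# indicator from prior annual results. The rule (best recent result plus
# five points, capped at 100) is an illustrative assumption only.

from typing import List


def propose_target(past_results: List[float], improvement: float = 5.0) -> float:
    """Propose a target as the best recent result plus a modest improvement."""
    if not past_results:
        raise ValueError("at least one prior result is required")
    return min(100.0, max(past_results) + improvement)


if __name__ == "__main__":
    # Example: 2008 and 2009 results for the training-coverage indicator.
    print(propose_target([67.0, 92.0]))  # prints 97.0
```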
Challenges Ahead

Challenges in monitoring are mostly related to our evolving program directions, which require ever-evolving metrics to measure progress toward program objectives. For example, our school library program has moved from establishing libraries to improving reading skills, and we have evolved from training librarians to becoming involved in the reading curriculum and classroom teaching. Our metrics, the indicators that we use to assess progress and results, are evolving from inputs and outputs to outcomes and impacts. The level of measurement is moving from schools to students, and from populations to samples. We need to measure progress and change at the final beneficiary level (students), not just at the intermediate output level (schools and libraries), and to do that we can no longer expect to collect all indicators for the entire population. We can check all of the schools where we have libraries, but not all of the students at those schools. Overall, we are moving from (easy) counting to (difficult) measuring.

Our greatest challenges in evaluation are related to balancing the concerns that define evaluation design and implementation. What should the main purpose of an evaluation be: to improve programs or to demonstrate results? Should the main focus of an evaluation be process or impact? We must demonstrate results to donors, who are asking increasingly sophisticated questions about outcomes and want to see demonstrated results in terms of beneficiary change. They want improved reading attitudes and habits and skills, not just more libraries established and more books printed and delivered and more librarians trained. Our first cross-national evaluation, of our school library program, focuses on results and impact.

Should the evaluator be internal or external? An external evaluator may be more objective than an internal one, and capacity constraints may prohibit internal staff from conducting a rigorous evaluation. However, an external evaluator can never know exactly what is most important to an organization, or why, and as a result the direction of an external evaluation can easily get at least a little off track. How should internal capacity building be valued and/or incorporated into evaluation design? It is almost impossible for an external evaluator to conduct an evaluation without logistical and/or