What We Know About Joint Evaluations of Humanitarian Action

Learning from NGO Experiences

Version 6: April 2011

Contents

ACKNOWLEDGEMENTS
ABOUT THIS BOOKLET
THE GUIDE
CHAPTER 1: WHY DO A JOINT EVALUATION?
  The Benefits of a Joint Evaluation
  The Downsides of a Joint Evaluation
  Conclusion
CHAPTER 2: JOINT EVALUATION – WHEN, WHO AND HOW?
  When will it take place?
  Who will take part in it?
  Is there enough time for a joint evaluation?
  How will it be paid for?
  How can the joint evaluation be most useful to various stakeholders?
CHAPTER 3: WHAT TO DO BEFORE THE EVALUATION
  Choose a lead agency and agree on roles
  Set up a management structure
  Estimate costs and duration
  Communicate what the evaluation is about
  Find a competent administrator/manager
  Carefully pick evaluation team members
  Choose a few objectives to cover
  Agree on evaluation standards and methods
  Write an inception report
  Manage communications within the collaboration
  Prepare, prepare, prepare!
CHAPTER 4: WHAT TO DO DURING THE EVALUATION
  Brief the team upon arrival
  Share findings as you go
  Ensure findings are reported with sensitivity
  Finalizing the Evaluation Report
CHAPTER 5: WHAT TO DO AFTER THE EVALUATION
  Develop both collective and individual rollout plans
  Emphasize peer accountability
CHAPTER 6: JOINT EVALUATIONS IN REAL TIME
  Prepare for the evaluation before the emergency starts
  Take a "good enough" approach to the evaluation
  Call on additional resources
  Consider some other joint reflection process
THE STORIES
  Niger
  Guatemala
THE TOOLS
  Sample Terms of Reference for a Joint Evaluation
  Sample Terms of Reference for Evaluation Team Members
  Sample Agreements Document
  Joint Evaluation Readiness Checklist
  Suggested Topics for Discussion with Prospective Partner Agencies
References and Further Reading

ACKNOWLEDGEMENTS

Many people have shared their valuable experience and time in the creation of this booklet. Special thanks go to all of them: in Guatemala, Carla Aguilar of Save the Children US, Hugh Aprile of Catholic Relief Services, Borys Chinchilla, and Juan Manuel Giron Durini of the ECB Project; in Niger, Jasmine Bates and Marianna Hensley of Catholic Relief Services, and Julianna White of CARE; in Indonesia, Adhong Ramadhan and Josephine Wijiastuti of Catholic Relief Services, Agus Budiarto and Evi Esaly Kaban of Save the Children, Harining Mardjuki and Anwar Hadipriyanto of CARE, and Richardus Indra Gunawan and Yacobus Runtuwene of World Vision. Special thanks also go to John Wilding, Pauline Wilson, John Telford, Maurice Herson of ALNAP, Jock Baker of CARE and Guy Sharrock of Catholic Relief Services, who have given critical input into this work. Malaika Wright was the author of the first paper.

The April 2011 version of the booklet was updated by Katy Love of the ECB Project, Loretta Ishida of Catholic Relief Services, Jock Baker of CARE, Hana Crowe of Save the Children, and Kevin Savage of World Vision. The booklet was revised based on feedback and reports from those who participated in ECB-supported joint evaluations in 2010 in Indonesia, Haiti, the Horn of Africa, and Niger. These people served as evaluation managers and coordinators, team leaders, team members, Steering Committee members, ECB field facilitators, and ECB Accountability Advisers in joint evaluations, including: Paul O'Hagan, Greg Brady, Yves-Laurent Regis, Angela Rouse, Katy Love, and Jock Baker (Haiti); Yenni Suryani, Pauline Wilson, Loretta Ishida, and LeAnn Hager (Indonesia); Kevin Savage, Chele DeGruccio, Jim Ashman, and Wynn Flaten (Horn of Africa); and Kadida Mambo (Niger). The ECB Project thanks all who contributed to this work.


ABOUT THIS BOOKLET

This booklet was written to share knowledge gained from the experiences of people who have been involved in joint evaluations conducted by non-governmental organizations (NGOs). It mainly profiles the work of NGOs involved in the Emergency Capacity Building Project (ECB), whose goal is to improve the speed, quality and effectiveness with which the humanitarian community saves lives, improves the welfare of, and protects the rights of women, men and children affected by emergencies.

This booklet also draws on the lessons of multi‐agency evaluations that already exist within the humanitarian sector. Major contributions have come, in particular, from the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP).

We hope that the learning from previous experiences captured here will be useful for all those considering leading their agencies through a joint evaluation, particularly evaluation practitioners, managers, and NGOs contemplating one. Additionally, we hope that it will contribute to a growing body of knowledge on these processes and show that while there are many unanswered questions about joint evaluations, there is a lot we already know.

The Guide section of the booklet serves as a 'how-to' for those closely involved in joint evaluations, providing a framework for those approaching an interagency evaluation. The Stories section shares several case studies from the ECB Project's experiences. The Tools section includes templates and tools that can be adapted for evaluations, including sample terms of reference, agreements documents, and checklists.


THE GUIDE

CHAPTER 1: WHY DO A JOINT EVALUATION?

► Why should an agency consider taking part in a joint evaluation of an emergency response program?1 After all, joint evaluations require collaboration, collaboration means more work and time, and time is a scarce commodity in emergency programs.

In recent years, several NGOs have sought to answer this question while taking part in the joint evaluations profiled in this booklet. While the results have been mixed and the learning curves steep, joint evaluations confer many benefits. Beyond the instructive and useful findings the evaluations themselves have yielded, agencies have also benefited significantly from the quality of the interactions that took place among peer agencies. Joint evaluations often serve as forums for ongoing learning, dialogue, and even the beginnings of collaboration.

Agencies also inevitably learn that there are some pitfalls in the process of conducting joint evaluations. Though a joint evaluation has much in common with a single-agency evaluation, there are some major differences, some of which are highlighted below and addressed throughout this guide. Above all, like a single-agency evaluation, a joint evaluation provides an opportunity to learn from past action so as to improve future decision-making.

It should be noted that this guide sets out the ideal processes and structures for a joint evaluation. In an emergency setting, of course, constraints emerge that make the ideal process a challenge to achieve. Evaluators, therefore, must be flexible and willing to adapt to the realities on the ground in order to meet some, if not all, of the objectives they set.2

The Benefits of a Joint Evaluation

1. Seeing the Big Picture

One evaluator said, "You [may] think you've covered the world but you've only covered one village in ten." Emergency responses typically involve several humanitarian actors. When the responses of more than one actor are put side by side and examined, the overall picture becomes clearer, revealing how factors such as geographic coverage, sector-specific interventions, and community involvement all fit together. Joint evaluations also go further towards measuring impact by looking at the collective efforts of several actors to meet beneficiary needs and at the gaps that remain.

2. Building Coordination and Collaboration to Improve Response

Given the scale of disasters and the disproportionate amount of suffering they cause, agencies working alone are generally not able to have a large impact. In fact, agencies that coordinate responses and work together during emergencies are better able to meet the needs of disaster-affected populations. By comparing agencies' responses side by side, joint evaluations are better able to point out areas where NGOs could have acted in a complementary fashion and to make recommendations for how they could do so in anticipation of the next emergency.

1 The definition of a joint evaluation used in this booklet is any evaluation that looks at the work of more than one agency. This usually means that, in addition to more actors being involved, there is a greater breadth of programming being examined.

2 Readers looking for further guidance should review "Shoestring Evaluation: Designing Impact Evaluations Under Budget, Time, and Data Constraints" by Bamberger, Rugh, Church, and Fort.


Evaluation reports repeatedly show that better coordination would have led to a more effective response.

In some cases where agencies are already working together, a joint evaluation can be a ‘logical conclusion’ to a joint action or response. In Indonesia and in Niger in 2010, agencies agreed to conduct a joint evaluation to assess the impact of their joint activities.3

The process of collaborating on the evaluation itself can also be a powerful way of building relationships among partner agency staff that endure for the long term. In the ECB experience, some of these relationships have led to ongoing activities and even the formation of an NGO coordination forum (see Niger). In Haiti, the joint evaluation helped to build relationships among national staff and managers, serving as a starting point for longer term inter‐agency collaboration.

3. Wielding Weightier Conclusions, Improving Peer Accountability and Transparency

Joint evaluations can be more authoritative because of the combined 'weight' of those backing them. As such evaluations are available to a wider audience, there is likely to be greater pressure to act upon the recommendations. Additionally, they provide a larger body of evidence for purposes of joint advocacy. When agencies open up to one another by sharing weaknesses and strengths, they increase transparency and make it easier to hold one another accountable for acting upon the recommendations. Transparency is critical for agencies in humanitarian responses, and sharing the findings of evaluations across agencies helps them become more transparent. In fact, agency peers may pressure an agency to act on recommendations from an evaluation.

4. Learning from and Building Relationships with Peers

Partners in a joint evaluation have a rare opportunity to learn about each other's programming and operations, and may share technical knowledge not only through the evaluation process itself but also through the ongoing relationships that are often established. One practitioner noted that working with staff from other agencies sometimes brings new perspectives or even changes her thinking about a particular issue.

The relationship building that occurs through a joint evaluation allows agency staff to identify other agencies' strengths and capacities. These relationships, founded on trust, may lead to cooperation between agencies in the future.

The Downsides of a Joint Evaluation

1. More Complexity

It takes time, skill and patience to get agencies to agree to do a joint evaluation, agree on a manageable list of objectives, defuse any tensions that may arise, and ensure that group decision-making processes are clear and respected, all while hiring and supervising an evaluation team, setting up interviews, putting logistics in place, and so on. This becomes even harder during an emergency.

3 See the 2010 Indonesia joint evaluation report for more information, available at www.ecbproject.org/resources


Without a lead agency to take on the primary responsibility for these tasks, and a committed steering committee that can jointly handle strategic decision‐making, a joint evaluation can be frustrating and unsuccessful.

2. Less Depth

Often it is not feasible or relevant to go into as much detail on any particular agency's programs as would happen in a single-agency evaluation. As a result, many of the evaluation questions of interest to each agency may not get answered.

3. More Expensive

Given the number of actors involved, joint evaluations can sometimes be more costly than single-agency evaluations. If agencies agree to share the costs of the evaluation, however, the additional cost per agency can be kept low.

Conclusion

Joint evaluations allow NGOs to learn from multiple perspectives and give them a more complete understanding of an emergency response. They help agencies work together now and in the future and lead to relationships that can be very productive. For these reasons they can be enriching experiences and have a profound impact on the way we do things as individual agencies and as a collective. It is important, though, to have a realistic understanding of what can and cannot be accomplished by a joint evaluation before conducting one.


CHAPTER 2: JOINT EVALUATION – WHEN, WHO AND HOW?

► You may want to do a joint evaluation and have good reasons to do so. But first, make sure there will be enough time for the evaluation, willing partners, and human, financial and other resources to get it done. The following questions are meant to help you determine whether a joint evaluation is feasible.

When will it take place?

Evaluations can take place at different points of a response (during, immediately after, or several months after). The timing depends on what the agencies want to get out of the evaluation. Real-time evaluations during a response provide results that can improve the response going forward (see Chapter 6). Evaluations conducted near the end of an emergency capture experiences and learning while they are still fresh. Evaluations conducted well after an emergency ends can still be useful and can capture the longer-term impact of a response.

When creating a timeline for a joint evaluation, remember that working with multiple actors can slow you down. There is rarely a perfect time to conduct a joint evaluation, as all agencies are busy. Therefore, especially for real-time evaluations, it is important to start planning as early as possible during the emergency response.

Who will take part in it?

Approach other agencies that may already be considering an evaluation for the same humanitarian response. Consider agencies that have the same overall goal (e.g. ensuring affected populations are able to recover quickly from the disaster), and that have similar types of programs in geographic areas that are close enough together. Identify the appropriate person to contact, ideally someone who provides strategic direction for the country office. Explain what will be gained from doing this evaluation jointly (see Chapter 1). Listen to their views and note them down. Don't be discouraged if they are not interested. Keep talking to other agencies.

When talking to other agencies, find out how they approach evaluations. Do they conduct them because donors require them? How do they use the findings? What resources do they designate for evaluations? Their answers will give you a sense of how each agency will approach the evaluation and use the findings, and will help prepare you for potential areas of conflict, such as willingness to contribute staff time. See The Tools: Suggested Topics for Discussion with Prospective Partner Agencies. Be sure the agencies are willing to commit staff time and resources to support the evaluation.

For a joint effort, and because evaluations may reveal sensitive issues, it’s also important to build trust among the agencies. To do so, agree on the focus of the evaluation together, rather than approaching others with your vision and asking if they are interested in joining in. Continue collaborating by communicating clearly, being transparent with information and intentions, and following through with commitments.

Is there enough time for a joint evaluation?

Be sure to allocate enough time for the evaluation team to get the job done. Unless the logistics of getting to and from field sites are unusually time-consuming, a thirty- to forty-day contract for the lead evaluator is reasonable. Ensure the evaluator has at least two days before officially starting the evaluation for preparatory work such as reviewing documents, proposing a methodology, and planning logistics with the agencies. Not allowing enough time for this additional work will compromise the quality of the evaluation. Also build in time to absorb missed deadlines, which are common when many actors are involved.

How will it be paid for?

Joint evaluations usually take more time to conduct and may require a relatively large team. Costs, therefore, may be higher than for single-agency evaluations. The costs can be spread among agencies, and this should be discussed as part of the early negotiations.

Have a rough idea of what the evaluation may cost. The main costs are for hiring consultants and support staff. Compare this with what funds may be available and what other agencies may be willing to contribute, including staff time, lodging, and vehicles. If insufficient funds are available, consider instead a joint peer review, in which agencies examine one another's programs and come together to discuss findings. Proper and realistic budgeting is critical.

Donors are likely to be receptive to joint evaluations if they bring about a better understanding of the context and the overall humanitarian response; some donors commission joint evaluations themselves. Therefore, if a budget funded by a given donor already accounts for an evaluation, the donor may be open to redirecting those funds from a single-agency evaluation to a joint one.

How can the joint evaluation be most useful to various stakeholders?

Evaluations take a lot of resources and effort and everyone wants them to be useful. Joint evaluations can be useful to different stakeholders in different ways. In a large emergency, agency staff at regional and global levels will likely be interested in the findings. Talk to people at the head office level in the country where the emergency happened, at the regional level, and at headquarters level. Even if the findings refer to programs that have ended, can they be used to inform other programs, systems and policies within the organization?

If the proposal for the evaluation came from headquarters, do those in the field, particularly country office leadership, believe that this will be a useful exercise for them? If not, they may not want to engage, and the evaluation will prove hard to carry out. How will they use the findings? How committed will they be to the evaluation? Their interest and engagement need to be high to make this a successful experience.

There should be a reasonable level of confidence that the findings will be used before proceeding with the evaluation. If not, the evaluation team will struggle to achieve the objectives.

The idea for the ECB-supported Guatemala evaluation came from headquarters. The team in Guatemala felt that this was another HQ-driven initiative, so their participation in steering committee meetings was limited. The agencies on the ground tried to customize the objectives, but in retrospect felt they should have started from scratch. This negatively impacted the evaluation process and thus the usage of the findings.

In contrast, the idea for the ECB-supported joint evaluation in Jogyakarta also came from headquarters. However, the participating agencies on the ground took the lead on defining their objectives, with advice from headquarters. This helped ensure the partners were more in control of the evaluation process.


CHAPTER 3: WHAT TO DO BEFORE THE EVALUATION

► If you have decided to pursue a joint evaluation, here are some things to consider.

Choose a lead agency and agree on roles

ECB has had the best results when one agency leads the joint evaluation process. Though some sharing of responsibility is desirable, agencies should designate the majority of the day-to-day management responsibilities to the lead agency. Any of the agencies being evaluated could serve as the lead; what matters is that the agency is capable of carrying out the responsibilities.

The lead agency hires and supervises the evaluation team, coordinates travel logistics, provides team members with workspaces, organizes meetings, and gives leadership regarding the definition of the objectives. Ultimately, it is this agency that is accountable for ensuring that the evaluation takes place.

A steering group made up of representatives from each agency can come together to agree on the roles of the lead agency and the roles assigned to other participating agencies, and share them with all involved persons, staff and evaluators. (For more, see 'Set up a management structure' below.)

A lead agency that plays its role well can make a major difference in the process. The head evaluator in ECB's Niger evaluation found the lead agency's organization of the evaluation process and logistics to be the most helpful thing to him in carrying out his work. "It was one issue we didn't have to think about; it was so well organized," he noted.

Set up a management structure

When setting up a management structure for the evaluation, it's important to recognize that you are managing not just an evaluation but a collaboration. Don't succumb to pressure to make choices favorable to the lead agency, steering committee members or high-level sponsors of the evaluation (e.g. "we must have x, y, and z represented, and any individuals will do"). Seek out individuals for the steering committee and evaluation team who are committed to a successful outcome, even if they are not conventional choices. Where there is a need for agency representation, create space for these individuals in a high-profile but less critical function. A joint evaluation management structure could look something like this:

• A steering committee. This group will be responsible for strategic decision-making very early on regarding objectives, timing, and resource allocation, including staff and funding. The steering committee will also be active in reviewing and debating the findings and acting upon their implications within their agencies and beyond. It is normally chaired by the lead agency and has representation from each of the participating agencies. The committee would ideally be kept to a maximum of five members, making oversight and decision-making more focused and achievable in a reasonable amount of time. This, however, supposes that the agencies involved are willing to delegate staff to a committee.

The ideal steering committee member is senior enough to speak on behalf of his or her agency and has the authority to make decisions. This individual must have a good knowledge of his or her organization's emergency programs and ongoing development work. In addition, he or she should be able to think strategically, and know enough about evaluations to advise on the evaluation methods to be used and on the field locations to be covered. These individuals will also be those most likely to follow up on relevant recommendations.


Steering committee members should agree on and document clear processes and standards of efficiency, transparency and accountability regarding roles and responsibilities. They should agree on how decision-making will work, how to resolve disagreements within the steering committee, and how to share information. Committee membership should remain the same throughout the evaluation: when members rotate on and off the committee, decisions and guidance may change, which will complicate matters for the evaluation team.

Other agreements concern the report format, the use of agency logos, the ownership of the products of the evaluation (i.e. copyright), how agencies will use the findings, and whether they will hold one another accountable.

• A chairperson. This person is based at the lead agency for the evaluation and is a member and chair of the steering committee. He or she holds most of the strategic decision-making, operational and collaboration responsibilities for the evaluation. This individual usually assumes the role of evaluation manager and is the direct reporting line for the team leader. The chairperson should also manage the budget and track expenses.

• A manager or administrator. This person should ideally be based in the lead agency, with a certain percentage of his or her time dedicated to the evaluation. See the following section for more details on the manager's responsibilities. He or she could also sit on the steering committee, but without voting rights.

• The evaluation team. The team is typically composed of one or two independent consultants and a representative from each of the partner agencies. This team is accountable to the steering committee, particularly the committee chair.

There are variations on this structure, of course. Most evaluations also have higher-level sponsors that may form a superstructure, and sector experts may be needed on the evaluation team.

See The Tools for a Sample Agreements Document.

Estimate costs and duration

Based on the draft itinerary, the steering committee should agree on a draft budget and cost-sharing arrangements. Typically, agencies share consultant costs equally and provide funding for the staff member they appoint to join the evaluation team (a hypothetical worked example follows the task list below). Think through the funding implications of all aspects of the process and how long each activity will take. For example, ensure funds for good quality editing, formatting, and presentation, as these can make a significant difference in how widely the report is read. Be realistic about the time it will take the evaluation team to get the job done: at least 30 to 40 days is recommended for the team leader. The consultant will likely be the largest cost, but one essential to budget for, as his or her tasks will include:

• Review documents, prepare the methodology, and correspond with the steering committee prior to the evaluation.
• Conduct field visits to at least three sites for each of the agencies.
• Interview agency staff.
• Interview other stakeholders.
• Present the findings to stakeholders in country.
• Prepare a draft of the report.
• Incorporate edits and comments on the report from multiple actors.
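To make the cost-sharing arithmetic concrete, here is a purely hypothetical illustration; the daily rate, contract length, and number of agencies are assumptions for the example, not figures from any ECB evaluation. A team leader contracted for 40 days at US$500 per day would cost US$20,000. If four agencies agreed to share consultant costs equally, each would contribute US$5,000, on top of funding the staff member it appoints to the evaluation team and any in-kind contributions such as staff time, lodging, and vehicles.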


Communicate what the evaluation is about

A joint evaluation is a newsworthy event, but not everyone will understand its purpose. Make sure people inside and outside the participating agencies, including beneficiaries, are aware of the evaluation so that they will be more likely to review and make use of the findings. Draft a one-page informational sheet about the evaluation for widespread sharing, especially with country office staff, who need to be aware of the evaluation (and would ideally be engaged throughout the process).

It is particularly important to have preparatory discussions with beneficiary communities to ensure they understand the purpose of the evaluation and agree to participate. They should understand that evaluators do not have any assistance to give.4 In Haiti, the evaluation team trained 30 Haitian Creole-speaking national staff from the participating agencies to engage beneficiaries. They did so by conducting focus group discussions, asking open-ended questions to understand people's experience of the emergency response.5

Find a competent administrator/manager

Consider hiring someone who can spend a significant amount of time (50-100%) focused on the evaluation, especially in the month or two leading up to it. This person may be an administrator, but should also be supervised by a senior person who can advise on strategic issues.

A superb administrator can make a major difference in the success of any evaluation but particularly a joint evaluation. Ideally, a national staff person should be hired or seconded from one of the agencies. He or she will be responsible for meeting the logistical and administrative needs of both the steering committee and the evaluation team.

A sample task list for this person could look like this:

• Organize the recruitment of the independent consultant(s).
• Draft and process contracts with the consultant(s).
• Arrange schedules and manage the calendar.
• Make logistical arrangements for travel in the field.
• Coordinate information exchange between the agencies and the evaluation team, such as collecting the relevant background documents for the evaluation team.
• Arrange meetings both with the participating agencies and with outside actors.
• Help document who is responsible for what and share this with all parties.
• Agree on norms for per diems and other policies. Typically, each agency follows its own policies, and the coordinating agency hires the external consultants and applies its own per diem rates.
• Meet with the evaluation team.

The steering committee or chairperson could appoint the evaluation administrator or manager. Ideally, the steering committee will define whom the administrator reports to and what authority he or she has to make decisions. The amount of time the administrator will provide to support the evaluation team should also be made clear.

Carefully pick evaluation team members

Select the right team. In addition to the technical skills they need to conduct the evaluation, team members will also have to be good at balancing the needs of multiple clients with sensitivity. While their roles should be made clear before the evaluation, experience has shown that they will need to be flexible once the evaluation begins.

4 See Tool 9 in the Good Enough Guide to Impact Measurement and Accountability in Emergencies.
5 See the CARE-Save the Children joint evaluation report at www.ecbproject.org


The team should not be so large that it is difficult to manage; three to four team members are usually sufficient. A typical team may be composed of:

• An independent consultant/team leader. This person knows a lot about evaluation and also has strong management and leadership skills, the ability to stay calm under pressure, and the ability to adapt in the face of the unexpected. A joint evaluation team leader also needs to deal with multiple layers of management and balance various expectations, and thus must have strong diplomacy and communication skills, both written and verbal. The team leader should also have previous experience as a team leader, since this is itself a special skill. Team leaders with these skill sets are sometimes hard to find, and it is critical that there is a budget line to pay for them.

Though it’s not always possible to recruit a team leader who has previously led joint evaluations, confirm that he or she has experience in impact analysis in emergencies, as he or she will need to understand how the various sets of data come together to form a bigger picture. Note that consultants often come with their own ideas and methodologies, and they will need guidance and parameters from the steering committee.

• A national consultant. The national consultant provides critical guidance on the political, social, and cultural context of the emergency, especially to the team leader, who is often an expatriate. Having such a person on hand can help with networking with national stakeholders, ensures that knowledge about key actors and events is quickly transferred to the evaluation team, and can reduce some of the complexity of the data and factors to be analyzed.

• A sector specialist. A joint evaluation will challenge the team of evaluators to address the wide range of program areas being covered while also focusing on selected key and priority aspects, especially as each agency may have unique interests. If agencies need a more in-depth examination of a particular type of program, they should consider bringing a sector specialist onto the team, freeing up other members to focus on the overall picture.

• Agency team members. Each agency typically appoints one representative to the evaluation team. These individuals are not acting on behalf of their agency but rather must be impartial evaluators. The skill sets of these people, for example their expertise in certain sectoral areas, their language and facilitation skills, and their evaluation experience, are very important to the overall success of the team. Ensuring that agency staff, and even country office staff, are represented on the evaluation team will increase ownership of the evaluation findings.

It may be hard for agency team members to be available for the full length of the evaluation, but experience has shown that continuity is important to evaluation quality and greatly enhances the learning experience. Agency managers should therefore make every effort to ensure full participation of agency staff on the evaluation team.

Given the importance of getting competent team members, it’s important to start the recruitment process early. Good independent consultants—national and international alike—are often booked for weeks or even months in advance.

Once the steering committee has finalized the Terms of Reference for team members and the skills they want, agencies should consider requesting help from their head or regional offices in recruiting the team, such as doing the initial advertising and screening and then sending a shortlist of candidates to the lead agency.


By the time hiring begins, the objectives for the evaluation should be well enough defined that the steering committee is clear on what profiles are needed on the team and thus whom to hire.

See The Tools for Sample Terms of Reference for Evaluation Team Members.

Choose a few objectives to cover

The participating agencies may have different areas of interest they'd like to cover in a joint evaluation. But it is not practical to address too many objectives, as a joint evaluation already has more content to cover. Ideally, there should be no more than three or four objectives within the terms of reference, and the scope should be as narrow as possible. For example:

• How well did the various agencies coordinate their responses?
• How appropriate was the intervention?
• How timely was the intervention?
• How well did the response assist people in recovering from the disaster?

Objectives that concern the overall impact of the response are usually best for a joint evaluation. Objectives of unique concern to one or two participating agencies, such as issues of operational efficiency, are not generally appropriate. In areas where more depth is needed, hire an additional team member to focus specifically on a particular type of programming or issue.

Do ask for input on the scope from staff at different levels of each agency whom you expect to use the evaluation findings. At the same time, it is wise not to consult too widely, as you run the risk of adding too many objectives and creating an unrealistic scope for the evaluation.

Objectives should be agreed upon before the evaluation team is hired. In fact, consider bringing in an external facilitator to negotiate the scope of the evaluation ahead of time. Once the lead evaluator joins, he or she should have the chance to tell the participating agencies what is feasible and realistic. It is critical to find a balance between what agencies want and what the lead evaluator believes is possible.

See The Tools for a Terms of Reference Template.

Agree on evaluation standards and methods

Joint evaluations should include a document review, key informant interviews, and focus group discussions with staff and beneficiary groups.

The team leader will build an approach that examines each agency's work with enough rigor to inspire confidence in the findings without detracting too much from the focus on the overall impact of the agencies' response. The steering committee is expected to advise on this process and to communicate the criteria to be used for selecting villages and beneficiaries for interviews.

In addition to covering more field visit locations, joint evaluations may also require more interviews with other actors, such as UN agencies, representatives of civil society, national and local partners, and government officials.

Certain indicators will be non-negotiable in order to remain in line with accepted international standards, such as the OECD/DAC criteria for evaluation.6 Sphere standards are another key point of reference that should be assessed during an evaluation, covering not only the technical sectors but also core standards such as participation. One set of standards should be used for consistency in measuring performance. Be clear on what your organizational minimum standards are. Reference the Key Elements of Accountability on the ECB Project website.

6 http://www.alnap.org/resources/guides/evaluation/ehadac.aspx

Write an inception report

The evaluation team should develop an inception report based on the terms of reference, including a workplan. This report, written by the team leader, ensures that expectations are agreed between the steering committee and the team itself. It also allows the evaluation team leader to discuss with the steering committee what is realistic and feasible, given the availability of staff, the budget, and the deadlines.

Manage communications within the collaboration

Agencies conducting a joint evaluation need clear agreements around communication. Face-to-face meetings are critical to make sure understandings are clear and to build cohesion. Deciding how to store key documents is also very important. One solution is to set up a simple web page to upload documents, contact lists, schedules and other essential information.

It is also important to have regular opportunities along the way for the evaluation team to discuss any concerns with steering committee members. For example, early in the process, the team can give feedback as to how well the evaluation methods are working and check with the steering committee whether these should be modified. If the steering committee is engaged, the evaluation will be much more likely to succeed.

It is also important to agree in advance on principles of transparency around evaluation results, including communicating results in a transparent way to beneficiary communities (which could take the form of a discussion or roundtable). Trying to cover up or hide evaluation results not only goes against principles of accountability but also undermines organizational learning and can often backfire.

Prepare, prepare, prepare!

Our experience has shown that a good amount of work can be undertaken even before the evaluation team arrives. Once the Terms of Reference for the evaluation have been established, a list of key informants can be drawn up, meetings arranged, focal points identified, and preparatory documents emailed to the evaluator.


CHAPTER 4: WHAT TO DO DURING THE EVALUATION

► If realistic objectives have been set, a management structure put in place, and a competent team chosen, the evaluation should be easier to manage. The team will still need good logistical support and direction. Here is some additional guidance on conducting the evaluation.

Brief the team upon arrival

Ensure that the team has a chance to discuss the Terms of Reference with each of the steering committee members. The lead evaluator should also go over the Terms of Reference with the steering committee as a group. The administrator/manager or steering committee chair can brief team members on roles and responsibilities within the evaluation structure. The team will need to be briefed not only on the logistics and the process of the evaluation, but also on the response programs being evaluated. The team will need to be clear on how the evaluation is run, the role of the lead agency and the other agencies, to whom the team reports, where they will get logistical support, and how they will maintain independence. When these are not clear, confusion abounds, and the team will struggle to achieve its objectives. Anticipate this extra consultation time when estimating how much time the team will need for the evaluation.

Share findings as you go

The team leader should ensure that the steering committee and the stakeholders (as mentioned below) receive regular updates throughout the process. If the steering committee and stakeholders are well briefed about the progress and initial findings of the evaluation, there will be no surprises at the end. Daily debriefs among the evaluation team draw out preliminary findings, which the team leader can use to provide regular updates.

Ensure findings are reported with sensitivity

Receiving and reviewing the findings of a joint evaluation can be an exciting time for the agencies, but also a time of apprehension. The lead evaluator should present findings in a way that will not make any agency feel inferior or unfairly compared with others. Agencies will inevitably look for mentions of themselves and judge whether they think the findings are fair. Findings that are critical in nature should be phrased in a constructive way, supported by reasonable evidence and balanced with positive feedback. In addition to the main report, the evaluator could also create short individual reports for each of the agencies. In practice, however, this may not be worth the additional effort, since joint evaluations tend to be better at looking at the overall response and coordination between agencies (i.e. from a beneficiary perspective) than at individual agency operations in detail.

Another approach is for the evaluation team to do a preliminary analysis that compiles and groups findings. Through a workshop or meeting, the team can then help staff from the participating agencies (especially those who will use the findings) to collectively draw conclusions and recommendations. With this type of participation, agencies are more likely to accept the conclusions and feel responsible for acting on the recommendations.

Ultimately, when joint evaluations are well planned and agencies and the team communicate throughout the process, agencies are less likely to take issue with the results. A focus on learning makes even the least flattering findings more palatable because they can be instructive.


Once the agencies have had a chance to discuss and debate the findings, they should discuss them with a broader group of stakeholders, especially those who were consulted during the evaluation process, such as UN agencies, beneficiaries, local NGOs, and local government. One way to do this is to hold an inter-agency validation workshop where stakeholders are given an opportunity to confirm or dispute the major findings and recommendations.

Finalizing the Evaluation Report

The evaluation report should be easy to read and relatively short: no more than 30 pages. It is important to focus on what has gone well, and good practice should be highlighted in the report.

Assuming stakeholders have been briefed throughout the evaluation, the findings and recommendations should not come as a surprise. Do anticipate, however, that stakeholders will not agree with all findings, and the steering committee should be prepared to address this.

It is critical to set out a period for receiving feedback on the draft report. Be clear and realistic about the timeline for this period: it needs to be long enough to allow the right people to provide feedback, but not so long that the findings are no longer relevant by the time the report is completed. Provided that all of the people who need to give feedback are informed of the schedule in advance, two to four weeks is a reasonable window in which to submit it.

The evaluation team leader is ultimately responsible for deciding which feedback is incorporated and which is not. If there is significant disagreement about certain findings or conclusions, these can be addressed in a management response annexed to the final report.


CHAPTER 5: WHAT TO DO AFTER THE EVALUATION

► With the evaluation work done and the findings detailed, one part of the process comes to a close. But in other ways, the real work is just beginning. Here is some guidance for making the most of the completed evaluation.

Develop both collective and individual rollout plans

Because joint evaluations have relevance to a wide range of actors, agencies should share the report with humanitarian bodies and networks such as the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), in addition to headquarters. Sharing the report from a joint evaluation widely demonstrates transparency and a commitment to contributing to learning within the broader humanitarian sector.

The agencies may want to develop simple collective and individual communications plans including distribution lists for the report and small action planning meetings to discuss and present the implications of the findings.

Emphasize peer accountability

With joint evaluations, agencies have the opportunity to hold one another accountable for progress on recommendations. They may choose to work on some recommendations together. They may also agree beforehand to hold a follow-up workshop in six months' or a year's time, at which they could discuss how the findings were shared, what progress was made, and what the outcomes of any actions taken were.


CHAPTER 6: JOINT EVALUATIONS IN REAL TIME

► Conducted while the emergency response is still ongoing, real-time evaluations (RTEs) are valuable tools for rectifying problems and making improvements to programmes. However, a joint RTE can be especially challenging. Here are some things to consider when deciding whether to conduct a joint evaluation in real time.

Prepare for the evaluation before the emergency starts

In cases of slow-onset emergencies, there may be time to plan months in advance. Even with rapid-onset emergencies, preparedness is possible. You can jointly outline generic plans for the evaluation which can be turned into actual plans in the face of a disaster. These plans should include as many as possible of the aspects outlined in this booklet on how to organise a joint evaluation, including, crucially, the designation of focal points. These focal points are on standby, will have the responsibility of getting an RTE process started and convening the various actors, and will ideally remain as point persons throughout the process.

Take a "good enough" approach to the evaluation

You may have to take some shortcuts and use a "good enough" approach. "Good enough" does not mean second best: it means recognizing that in an emergency response, adopting quick and simple solutions may be the only practical possibility. When the situation changes, you should aim to review your chosen solution and amend your approach accordingly.

For example, you can simplify managerial structures. Within days of the parties agreeing to do a joint evaluation, you may agree to establish a small, rapidly organized managerial structure (which can transition to a more robust one at a later stage). During the first week, for instance, that group could look at what is minimally necessary and create a practical "quick-and-dirty" terms of reference. The management committee could delegate much of the day-to-day management to one or two key actors, and thus spend less time on group decision-making and consensus building.

Each participating agency would be trusted to carry out the tasks assigned to them in accordance with pre‐determined plans and standards. Once the process has been started and the evaluation is in motion, agencies can then gradually build in tighter quality control mechanisms, more focused terms of reference, and a more inclusive process (e.g. a larger management group).

Such a 'good enough' approach is not an ideal evaluative process, but it is likely to be relevant for RTEs because agencies are particularly busy with the implementation of a response. However, certain aspects of the joint evaluation should not be subjected to shortcuts. These include ethical standards, such as the confidentiality and independence of the evaluative process.

Call on additional resources

Participating agencies could consider calling on additional internal support. A staff member could be seconded to a country office for some weeks to help with the joint RTE. Unlike country office staff, who would presumably be preoccupied with the emergency response, this person would have time to focus on the evaluation. He or she could do an initial scoping and a stakeholder analysis, and meet with partners collectively or individually to get their views. He or she could also assist with practical preparations for the team, including setting up field visits.


Consider some other joint reflection process

If a joint RTE is not realistic, consider other learning processes such as a joint after-action review or a peer review. Agencies can do quick assessments of their work (see Impact Measurement and Accountability in Emergencies: The Good Enough Guide) and get together for a short meeting or workshop, or invite an experienced expert to provide advice on how the operation might be adapted.


THE STORIES

Niger

About the Joint Independent Evaluation of the Humanitarian Response of CARE, Catholic Relief Services, Save the Children and World Vision to the 2005 Food Crisis in the Republic of Niger7

7 This report can be viewed at www..org

A crisis

When reports of Nigerien hunger, poor harvests, sick and frail children, and a slow humanitarian response finally caught the world's attention, it was too late for many. By the time the UN received its first pledge in July 2005, and grain began its slow voyage across the ocean from India, the window for pre-emptive action in Niger had closed. NGOs on the ground tried to help as many people as they could. CARE, Save the Children, Catholic Relief Services, and World Vision mounted responses that reached nearly 1.5 million people in the first round of food distributions, or 51 percent of the vulnerable.i

However, it seemed that the humanitarian community as a whole had missed the mark. Controversy over the emergency in Niger heated up in July and August of 2005. Why had the response been so slow? Had early warning systems failed? Did the media sensationalize the emergency? And was the Nigerien president’s defensive posturing an attempt to cover up the extent of the crisis?

Why a joint evaluation?

While conducting an emergency assessment for CARE Niger in August, Jock Baker, a program quality coordinator from CARE, felt that Niger was a good candidate for a joint evaluation. Having been involved in a multi-agency evaluation of the Tsunami response conducted by CARE, World Vision and Oxfam, Baker felt that such an approach would work in Niger. "It was a big area and there seemed to be a systemic failure of the humanitarian community as a whole," he said, reasoning that a multi-agency evaluation would help fit the different pieces of the puzzle together.

He approached his colleague, Amadou Sayo, the assistant country director of CARE Niger, with the idea. Amadou was enthusiastic and promised to contact his colleagues at other agencies to check their interest. The idea took off, and Save the Children, Catholic Relief Services and World Vision agreed to partner with CARE on the evaluation and assign steering committee members.

In October, with food distributions still ongoing, the four agencies commissioned a joint evaluation of their responses. The evaluation yielded rich lessons not only about the crisis and its response, but also about the process by which such joint exercises are conducted.

The best and the worst of timing

One such lesson concerns the timing of the evaluation. Marianna Hensley, a program quality director for Catholic Relief Services, took over from a colleague as a steering committee member for the evaluation. When she joined the steering committee, about three weeks after the inception of the joint evaluation, it was reviewing the Terms of Reference for the evaluation team members to be hired. At the time of the evaluation, she was grateful for 12-hour days: upwards of 14 hours per day was the norm for Marianna and her colleagues during the final phase of CRS' food distribution in Niger. She was helping a partner organization set up its therapeutic feeding program, managing nine budgets and their related activities, and communicating with three field offices. Things were even more intense for field staff, who were in the field for weeks on end without a break. All were exhausted. Yet as the end of food distributions drew ever closer, it merely promised the start of recovery activities.

“We were in the middle of emergency operations, particularly food distribution. Had [the evaluation] not been pushed from the outside, I don’t think it would have been done,” she said.

Still, Marianna concedes, there were some merits to the timing of the evaluation. “A lot of the focus of the Niger multi‐agency evaluation was the extent of the food distribution and [to some extent] on the supplemental and therapeutic feeding. So theoretically, timing was right to come in at the end of the distribution to get the freshest perspectives on what happened.”

Whose idea was this?

Another lesson was that prior experience with or guidance about joint evaluations is necessary. The four country offices had never done a joint evaluation before and no one had time to give it much thought. "There wasn't a whole lot of reflection that went into the process," said Marianna. "We were told, 'Here's a Terms of Reference. Go participate in it.' Steering committee members went to meetings, looked around at each other, and no one seemed to know what the original vision for deciding on a joint evaluation had been."

A Surprising Discovery

In July 2005, before he became involved in the evaluation, people were coming to John Wilding and asking for his interpretation of what was going on in Niger. But far away from Niger, the food security expert was as baffled as they were. All he could say was that he didn't know. Like them, he was simply watching it unfold on television.

Even after he accepted the assignment as a team leader for the CARE, CRS, Save and World Vision multi-agency evaluation and arrived in Niger with the rest of the team, the answer was not any more apparent. "When we got there, we had team meetings at 6 o'clock every night for a week, asking 'Is this a crop failure? A famine?' We didn't know." One of the team members had a background in marketing. Wilding gave her a car and asked her to go and find out what had happened from a market point of view. "She was a street fighter," he said. "She went to the small merchants and the big merchants and she asked 'What has happened? [You know,] we're all in this for money.' And they told her."

The conclusion that the team arrived at was not a popular one. It was not one that the humanitarian community and even universities were talking about. Certainly not the media, which was portraying it as a famine. The word from the street was that it wasn't a famine. It was a case of localized crop failures. "I think the government was correct to keep it under wraps," said John. "I think the government understood what the problem was and I believe outsiders created a panic." During the course of the evaluation, the people they talked to further confirmed this conclusion.

Breaking the news

Yet another lesson concerned the presentation of the rather surprising results. When the team presented the findings for the first time, the lead evaluator had the unenviable task of explaining to four agencies that in responding to a famine, they had not quite understood the true nature of the crisis and that their response, though effective, had been too late (through no fault of theirs) to help avert the crisis. However, one observer notes that it was very helpful for the same findings to be shared with several agencies represented in one room. "[The multi-agency evaluation] created a globally accepted version of events in Niger, particularly about how things came about. Initially there were many versions/reasons as to why the long term crisis developed as it did."


Hindsight

Those who were involved in the evaluation believe that despite the difficulties of getting everybody involved in the process, the evaluation proceeded more or less smoothly, and the team found plenty of people with much to say.

Having gained a common understanding of the Niger crisis and established working relationships, the partner agencies continued to meet long after the evaluation team had left Niger. They formed an NGO coordination forum, called the GDCI, and invited other agencies to join. The UN Office for the Coordination of Humanitarian Affairs began to attend their meetings. They submitted a joint paper to the government of Niger describing the status of the GDCI and other coordination bodies in Niger, and successfully lobbied the World Food Programme for blanket feeding in vulnerable areas in early 2006.

In June of 2006, with some support from the ECB Project, the GDCI convened a workshop to chart a work plan and review progress against the findings of the evaluation.


Guatemala

About the Multi-Agency Evaluation of the Response to the Emergency Created By Tropical Storm Stan in Guatemala – CARE, Catholic Relief Services, Oxfam8

An unusual opportunity

In late 2005, an unusual opportunity presented itself. The place was Guatemala, where half a million people were still reeling from the effects of Tropical Storm Stan. Six of the seven ECB agencies were present in Guatemala and had been active in the response. They had shared information and coordinated their efforts thanks to their ongoing work in ECB's Disaster Risk Reduction initiative. Given the sudden nature of the emergency, it had been an unanticipated coordination exercise, and the agencies were keen to glean some lessons from the event. Members of ECB's Accountability and Impact Measurement initiative also saw this as an opportunity to further their objectives and urged the agencies to consider doing a joint evaluation of their response with some ECB support.

In December, emails flew back and forth between staff from the various agencies at headquarters and in the field to see if the agencies would make a commitment to do a multi‐agency evaluation. ECB was pressing for a decision as time was marching on and good evaluators are hard to find at short notice.

Advocates for the evaluation among the six agencies in Guatemala felt that the evaluation would give them a good baseline for measuring the future effectiveness of their work. Some of the agencies were considering evaluations anyway. They also felt doing the evaluation together would help them optimize resources. However, some of the agencies were skittish, not wanting to look bad.

Finally, all the talk led to an agreement that the joint evaluation and the case study on coordination would be combined under the leadership of one external consultant, and that work would start in late January. The evaluation steering committee was formed, met, and amended the joint evaluation terms of reference (ToR) to include objectives on reviewing coordination. Unfortunately, the agencies could not agree on whether to study impact or process, and with these changes parts of the ToR became unclear. Some of this was due to translation: the English version of the ToR was translated into Spanish, the revised ToR translated back from Spanish into English, and so on. In addition, local ownership was low, as the ToR had come from headquarters. Agencies in Guatemala said it would have been better to have started with a clean sheet of paper.

A non‐Spanish speaker probes unanswered questions

By the time January rolled around, there were many unanswered questions: Who was recruiting the other evaluation team members (three more were needed)? How were comments to be integrated into the ToR? What was the budget, and how were costs to be shared? Given these unknowns, Pauline Wilson, the ECB accountability and impact measurement project manager, decided to go to Guatemala in early January even though her Spanish-speaking skills were nil.

The visit was instructive. Not all agencies were clear on what the evaluation would really offer them, and they had not reached consensus on what to accomplish through it. The steering committee agreed that Wilson should meet with each agency director individually to see whether they wanted their agency to participate. It was agreed that if three agencies were interested in evaluating their emergency response activities, the evaluation would go ahead.

8 This report can be viewed at www.alnap.org


A series of individual meetings with each agency director ensued. Three agency directors agreed to evaluate their emergency response. All six said they would be interested in learning more about the coordination effort and about ECB's effectiveness in communicating and gathering information on behalf of the agencies. Views from the individual discussions with directors were shared at an ECB steering committee meeting, along with each agency's priority objectives, budget and cost-sharing information, and a schedule for the team.

A confusing Terms of Reference

The above discussions resulted in yet more changes to the ToR to describe the different ways agencies wanted to participate. The document became even more complicated as it was translated back and forth between Spanish and English and new changes were not clearly reconciled. The steering committee assumed that the team leader would meet with the agency directors or their representatives soon after arriving in Guatemala to negotiate what was doable and clarify what was meant by the many terms used in the ToR. This is common practice for most evaluations, but it never happened for this one.

Who does what?

A major challenge in doing joint evaluations is getting a commitment from one agency office in a country to coordinate the activity on behalf of the others. The coordinating office is expected to oversee the recruitment of the evaluation team, ensure it has support (e.g. a workspace, transport, per diem), and help organize meetings between the agencies. In this case, however, these responsibilities were dispersed across a number of offices: a long list of responsibilities was put on a flipchart, and people put their names next to specific responsibilities.

One agency did commit to recruiting the evaluation team and overseeing much of the evaluation process, though the person doing the work left just as the evaluation team began its work in February. Her departure at such a crucial moment broke the continuity of the process and left the evaluation team without guidance on where to go for support.

While the absence of a clear lead presented many problems, agencies in Guatemala considered this sharing of tasks an interesting coordination effort, one which pressured them to share responsibility for a common goal.

In the field

With no briefing upon arrival and no clear lead agency to turn to, the evaluation team lacked clarity and guidance from the agencies in Guatemala on what the evaluation was to achieve. The team therefore defined its own field visit plan and, in effect, its own evaluation purposes. It chose a very wide field sample, a very tight schedule, and a strongly quantitative approach that in the end made it difficult to gain a comprehensive understanding of the field reality. In some cases too much data was collected, yet some data essential to understanding the agencies' impact in the field, especially from a beneficiary perspective, was missing. The agencies also never conveyed to the evaluation team a clear understanding of what they meant by impact.

A “critical” presentation

The evaluation started in February and ended in late April. None of the agencies were happy with the results when they were presented. It wasn't only that the quality of the information presented did not help them understand their impact. At least one of the agencies also felt that the presentation was excessively critical and made unfair comparisons, e.g. comparing one agency against another in areas where they had very different types of interventions.

But most of the factors leading to the evaluation's undoing had arisen long before the findings were presented. Lack of clarity about roles and responsibilities, a poor recruitment process for the evaluation team, unclear objectives, and a limited understanding of joint evaluations are only some of the reasons why the evaluation process and report did not adequately meet the overall objectives.

After the dust settled

Disappointed, the agencies in Guatemala nonetheless took the findings seriously, and each carried out an internal review process. ECB observers chalked up the Guatemala evaluation as a case of "what not to do" in joint evaluations. However, when another member of the ECB team visited Guatemala in October 2006, she found some surprising perspectives on the evaluation.

For one thing, the agencies were planning a lessons-learned workshop about the evaluation. At a meeting of the agencies' country directors that same month, they had referenced the evaluation and how useful it had been. Even the agency that had been least happy with the results had made some improvements based on the recommendations, such as integrating risk reduction into its regular programming and adopting emergency protocols on where it should and shouldn't respond. It had also begun operations in a new area of work, thanks in part to what it learned from its partners about work in that area. Another agency realized the importance of staff training, had its staff participate in Sphere training, and made plans for future training on needs assessment.

One country director believes his agency learned a lot about following the Sphere standards, working with partners, and working with the local government, in part because of the opportunity to learn from the programs and experiences of the partner agencies. He also believes the agencies learned the importance of sharing not only information but also tools, so as to avoid duplication. After the evaluation, the agencies took steps to standardize things like assessment tools.

The assistant country director of one agency concedes that the evaluation was useful for his agency in that it validated some of their work and provided constructive criticism. However, “it wasn’t in‐depth enough to help us in our programmatic response,” he concluded.

“Initially, this process had a lot of inconveniences, but at the end we have a lot of richness,” said one of the staff who had been involved in the evaluation.


Indonesia

About the CARE, Catholic Relief Services, Save the Children and World Vision Indonesia Joint Evaluation of their Responses to the Yogyakarta Earthquake

In many communities you could still see the rubble of houses that used to be, and telltale cracks in structures still standing. But recovery was well underway in affected areas of Yogyakarta, Indonesia, when an evaluation team for Catholic Relief Services (CRS), Save the Children UK, World Vision and CARE visited communities a year after the earthquake that killed over 5,000 people in May 2006. Just how effective the agencies had been in assisting those communities with recovery was what the team was there to discover. For the ECB Project, which had supported previous evaluations in Niger and Guatemala, there was also much interest in seeing how lessons learned about the process of conducting joint evaluations could be applied to this experience.

CRS had planned a joint evaluation in the Yogya emergency strategy it wrote in July 2006. This decision was influenced by the ECB Project's initiative on accountability and impact measurement, which had supported joint evaluations in Niger, Guatemala, and the tsunami-affected countries during 2005 and 2006. It was also prompted by the desire of Save the Children UK to conduct a joint evaluation.

Planning for the Yogya joint evaluation got underway in January 2007 when CRS and Save the Children jointly developed a terms of reference for the evaluation. A few months later, CARE and World Vision confirmed their interest in participating and a steering committee was formed with representatives from each of the four agencies. The agencies were driven by a motivation to learn from their response to the earthquake. All were new to Yogya, had many similarities in the types of programs they had delivered, and were affected by the same factors, such as the need to comply with conditions set forth by the government. Clearly, there was a lot to learn about how each had responded to these factors and from this, to draw conclusions about their collective response.

As the lead agency, CRS hired the evaluation team; gathered key documents on the emergency response from each agency, the UN, and the Government of Indonesia and sent them to the team; negotiated the schedule of activities and the budget; organized logistics; and led discussions on methods with the lead evaluator. The steering committee jointly agreed on all major decisions. The agencies shared the costs of carrying out the evaluation, with support from the ECB Project.

What went well

In general, the joint evaluation process went well. There was effective communication among the agency staff involved, with significant trust in place. Communication infrastructure was good: there was reliable access to telephones, e-mail and instant messaging, and all the participating agencies were located within an hour's drive of one another, making face-to-face meetings relatively easy.

The lead agency played its vital management role well. The steering committee chair was successful in securing the commitment and trust of his colleagues, and CRS staff did a good job of organizing all evaluation logistics, hosting the evaluation team, and providing overall guidance on the context and humanitarian response to the earthquake and on the applicability of methods and questions to explore in the field.

Each of the participating agencies had monitoring and evaluation capacity, with three of the four having monitoring and evaluation officers within their Yogya emergency response teams. These staff helped create openness within their organizations to the evaluation, ensured rapid sharing of relevant documents, and provided good advice to the evaluation team on methods.

The agencies also benefited from an organizational superstructure supportive of joint evaluations and of collaboration in general. In particular, agency staff in CRS at the Jakarta and headquarters levels strongly encouraged and supported staff in Yogya to lead the joint evaluation process. The culture of collaboration between these agencies, promoted by ECB in Indonesia through its disaster risk reduction work, encouraged them to try a joint evaluation.

There was a significant amount of learning and relationship building among the agencies involved in the process. The sharing of documentation and discussion when preparing for the evaluation provided an opportunity for steering committee members and monitoring and evaluation staff to learn about each other’s programs and approaches as did communications during the evaluation.

Finally, the agencies made commendable and successful efforts to share the results, hosting an inter-agency event to discuss the findings of the report with local and international NGOs, local government, UN agencies, the media and universities. The report was translated into Bahasa Indonesia to ensure more widespread use within Indonesia. The steering committee made plans to add graphic design elements to the report and to put the main findings into a more scan-friendly PowerPoint presentation. With the intent that the report benefit the broader humanitarian community, the agencies made plans to share it with the humanitarian learning network ALNAP, as well as with an international distribution list.

What could have been improved

But the process was not without its shortcomings. With an overall evaluation period of 20 days, the evaluation team did not have enough time to visit a sufficient number of locations for each of the four agencies. The scarcity of time was further exacerbated by the small size of the evaluation team. Because the agencies wanted an independent team that would be seen as objective, they chose not to appoint representatives from each agency to the team, as is usually done. In addition to the independent lead evaluator, only one staff member was assigned to the team; local facilitators, note takers and translators were hired instead. While excellent in their roles, the local team members were relatively new to NGO work, and the limited amount of emergency program experience on the team meant that specific sectors of work were not assessed in depth.

As in-country agency staff were not assigned to the evaluation team, interagency learning between the participating agencies and the depth of sectoral analysis were also limited. In addition, no national consultant was hired, which meant the evaluation team had to depend heavily on the lead agency for advice on methods and the larger context.

Another shortcoming related to the timing of the evaluation. Coming a year after the start of the response, it meant that affected people had begun to forget what was done and by whom, what went well and what didn't.9

Other INGOs said they wanted to be part of such joint evaluations, and the four agencies agreed that other INGOs should have been involved, though some were initially invited and declined. As most were doing similar types of emergency response programs, there could have been great benefit in evaluating the work of all INGOs together, though it would have taken more time to plan and the objectives would, perhaps, have been different.

9 At the same time, evaluations conducted a good while after the initial response, as this one was, can provide the opportunity to look at longer‐term outcomes.


Conclusion

The agencies expect that the findings from this evaluation, being more holistic than an individual evaluation, will make a useful contribution to the humanitarian community’s understanding of emergency work in Indonesia and beyond, and demonstrate their accountability to one another and the communities with whom they work.


THE TOOLS

Sample Terms of Reference for a Joint Evaluation

1. Background and context for the evaluation

2. Purpose of the evaluation

3. Specific scope of the evaluation (boundaries, including what will not be part of the evaluation):
- Geographic coverage
- Time frame
- Evaluation framework and standards used (OECD/DAC, Sphere)
- Evaluation deliverables/outputs
- Technical support to be provided by each participating organization
- Administrative support to be provided by each participating organization
- Logistic support to be provided by each participating organization

4. Administration/Finance:
- Resources required: budget and staffing (how many people will be required, for how long, and at what cost)
- Determination of 'ownership' of the products of the joint evaluation, i.e. intellectual copyright, etc.
- Identification of a process/procedure for dealing with inter-agency disagreements concerning any aspect of the joint evaluation
- Agreement on report format, the use of agency logos, etc.

5. Methodology:
- Standards against which performance will be assessed
- How adherence to international standards will be assessed
- Identification of the methodology and, in general terms, the questions to be posed. Questions of a potentially sensitive nature should be discussed as early in the process as possible so that problems do not surface later; some questions may be more sensitive to some organizations than to others.
- Identification of stakeholders and their level of involvement in the process
- Data sources
- Consideration of gender issues and vulnerable groups

6. Team Composition:
- Team Leader, with a description of the role and responsibilities
- Additional team members, with descriptions of roles and responsibilities
- Team coordination configuration

7. Management of the Evaluation Process:
- Intended use of evaluation results and parties responsible for follow-up
- Reporting structure within the joint evaluation configuration
- Periodic reporting schedule throughout the process
- Reporting requirements
- Agreement on how to deal with fraud, misconduct or wrongdoing uncovered as part of the evaluation process

Source: CRS Guidelines for Participation in Joint Evaluations: Final Draft
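The outline above can double as a quick completeness check before a draft ToR is circulated for sign-off. The following is a minimal sketch of one way to encode that check; the section keys mirror the outline, but the function, field names and sample entries are hypothetical, not part of any ECB or CRS tool.

```python
# Illustrative only: a completeness check for a joint-evaluation ToR draft,
# based on the seven-section outline above. All names are hypothetical.

REQUIRED_SECTIONS = [
    "background_and_context",
    "purpose",
    "scope",                       # geography, time frame, standards, outputs, support
    "administration_and_finance",  # budget, staffing, ownership, disputes, report format
    "methodology",                 # standards, questions, stakeholders, data, gender
    "team_composition",            # team leader, members, coordination
    "management_of_process",       # use of results, reporting, misconduct handling
]

def missing_sections(tor: dict) -> list[str]:
    """Return outline sections that are absent or left empty in a draft ToR."""
    return [s for s in REQUIRED_SECTIONS if not tor.get(s)]

# Example: a draft that has not yet settled finance or methodology.
draft_tor = {
    "background_and_context": "2005 food crisis response ...",
    "purpose": "Assess timeliness and appropriateness of the joint response.",
    "scope": "Three districts; June to December; OECD/DAC criteria and Sphere.",
    "team_composition": "External TL, one member per agency, national consultant.",
    "management_of_process": "Steering committee; weekly progress reports.",
}

print(missing_sections(draft_tor))
# -> ['administration_and_finance', 'methodology']
```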


Sample Terms of Reference for Evaluation Team Members

Background & Objectives

See the TOR for the Evaluation.

Team Composition

The team will consist of:

- A Team Leader, who will be an external consultant
- A staff member from each of the participating agencies, based in a country other than the host country and not directly involved in the emergency response
- A national consultant

1 a.) External International Consultant (Team Leader) ‐ Responsibilities

The external consultant will act as Team Leader (TL) and will retain overall responsibility for meeting the objectives detailed in the attached evaluation TOR (including editing authority for the final report), for ensuring an effective process and team management, and for ensuring that outputs are of good quality.

Description of Tasks

i. Reporting

a) Development of an inception report of up to three pages, with a chapter plan and key areas of enquiry, to be submitted within four days of the start of the consultancy
b) Submission of a combined draft report within one week of completion of fieldwork
c) Submission of the final report as per the specified schedule. The report should contain an executive summary (no more than five pages), key findings, conclusions and recommendations, as well as annexes.

(A short date-arithmetic sketch after this task list illustrates how these reporting deadlines follow from the start and fieldwork dates.)

ii. Lead an orientation briefing for stakeholders to ensure a common understanding and expectations regarding the scope and objectives of the evaluation.

iii. Organize and facilitate relevant meetings, focus group discussions, and key informant interviews. Undertake associated data collection activities (e.g. document review/research) as foreseen within the scope of this evaluation. Some of these activities may take place outside of the field mission, e.g. telephone or face‐to‐face interviews with HQ‐based staff.

iv. Collate, analyze and synthesize data and other information collected during the course of the evaluation.

v. Prepare a daily written summary of interviews and “Main Points” in conjunction with the Team Members to assist with ongoing analysis and synthesis of information relevant to objectives.

vi. Delegate specific areas of responsibility to the other team members to maximize use of relevant skills and facilitate triangulation during analysis.


vii. Design, organize and lead (if appropriate) the debriefing for participating agencies (and key partners, if appropriate) on the preliminary findings, conclusions and recommendations within each region and at the end of the field mission.

viii. Ensure compliance of the team with international humanitarian assistance standards and take reasonable steps for ensuring that the security and dignity of the affected population is not compromised and that disruption to ongoing programmes is minimized.
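Because the reporting deadlines in task i are defined relative to the start of the consultancy and the end of fieldwork, they can be computed mechanically once those two dates are fixed. A minimal sketch, assuming purely hypothetical dates:

```python
# Illustrative only: deriving the reporting deadlines in task i above
# from hypothetical start and fieldwork-completion dates.
from datetime import date, timedelta

start_of_consultancy = date(2011, 3, 1)   # hypothetical start date
end_of_fieldwork = date(2011, 3, 28)      # hypothetical (28 days of fieldwork)

inception_report_due = start_of_consultancy + timedelta(days=4)
draft_report_due = end_of_fieldwork + timedelta(weeks=1)

print("Inception report due:", inception_report_due)  # 2011-03-05
print("Draft report due:", draft_report_due)          # 2011-04-04
# The final report deadline follows the agreed schedule, so it is not computed here.
```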

1 b.) External International Consultant – Team Leader Profile

Required:

i. Fluent English and excellent communication skills; good working knowledge of the local language.

ii. Track record of substantive Team Leader and evaluation management experience.

iii. Extensive humanitarian programming management experience.

iv. Knowledge of mandates and modus operandi of principal humanitarian actors (host government, NGOs, UN agencies, Red Cross Agencies, etc.).

v. Proven writing skills.

Preferred:

vi. Experience/expertise in the area of rapid onset emergencies.

vii. Has skills and experience in one or more of the following areas: food economy/livelihood security, water and sanitation, public health, programming, management, finance/admin, logistics, human resources, post‐crisis rehabilitation, disaster risk reduction, or participatory approaches in emergencies.

2 a.) Internal Agency Team Members – Responsibilities

Given that this is an external evaluation, the agency team member does not represent his or her agency as such; rather, working objectively under the TL's leadership, she or he provides guidance to the team on relevant aspects of the agency's mandate, policies, strategies, and "realities." The agency team member's inputs should ensure that findings and recommendations targeted at that agency are well-informed and practical. Specific tasks are as follows:

i. Together with the other members of the evaluation team, provide an orientation briefing to stakeholders to ensure a common understanding and expectations regarding the scope and objectives of the evaluation.

ii. Participate in and, as necessary, facilitate relevant meetings, focus group discussions, key informant interviews and undertake associated data collection activities (e.g. document review/research) as foreseen within the scope of this evaluation. Some of these activities may take place outside of the field mission, e.g. telephone or face-to-face interviews with HQ-based staff.


iii. Provide agency-specific information and contextual background to assist the TL in his or her analysis, ensuring that the findings take account of agency context and that recommendations are realistic.

iv. Collate, analyze and synthesize data and other information collected during the course of the evaluation.

v. Prepare a daily written summary of interviews and “Main Points” to share with the Team Members to assist with ongoing analysis and synthesis of information relevant to objectives.

vi. Draft inputs on specific areas of responsibility (as designated by the TL) for integration into the draft and final reports.

vii. Participate in the debriefing of the agencies (and key partners, if appropriate) on the preliminary findings, conclusions and recommendations at the end of the field mission.

viii. Assist the TL in drafting the report and completing the final version of the report.

2 b.) Internal Agency Team Members – Profile

Required:

1. Fluent English and excellent communication skills; good working knowledge of the local language.

2. Extensive humanitarian programming/project management experience.

3. Knowledge of mandates and modus operandi of principal humanitarian actors (host government, NGOs, UN agencies, Red Cross Agencies, etc.).

Preferred:

4. Experience/expertise in the area of rapid onset emergencies or natural disasters.

5. Has skills and experience in one or more of the following areas: risk reduction, food economy/livelihood security, programming, management, finance/admin, logistics, human resources, post‐crisis rehabilitation, disaster risk reduction, participatory approaches in emergencies, monitoring and evaluation methodologies, or analysis.

3 a.) External National Consultant ‐ Responsibilities

A national consultant will be selected by the agencies to provide the evaluation team with the necessary analysis and background on the country, allowing findings and recommendations to be placed in an appropriate context.

1. Together with the other members of the evaluation team, provide an orientation briefing to stakeholders to ensure a common understanding and expectations regarding the scope and objectives of the evaluation.

2. Participate in and, as necessary, facilitate relevant meetings, focus group discussions, key informant interviews and undertake associated data collection activities (e.g. document review/research) as foreseen within the scope of this evaluation. Some of these activities may take place outside of the field mission, e.g. telephone or face‐to‐face interviews with HQ‐based staff.


3. Collate, analyze and synthesize data and other information collected during the course of the evaluation.

4. Prepare a daily written summary of interviews and “Main Points” to share with the Team Members to assist with ongoing analysis and synthesis of information relevant to objectives.

5. Draft inputs on specific areas of responsibility (as designated by the TL) for integration into the draft and final reports.

6. Participate in the debriefing of the agencies (and key partners, if appropriate) on the preliminary findings, conclusions and recommendations at the end of the field mission.

7. Assist the TL in drafting the country report and completing its final version.

3 b.) External National Consultant – Team Member Profile

Required:

1. Fluent English and excellent communication skills; good working knowledge of the local language.

2. Expertise in the context and cultural issues of the country, including all its diverse ethnic groups and past experiences with international aid activities.

3. Humanitarian programming/project management experience.

4. Knowledge of mandates and modus operandi of principal humanitarian actors (host government, NGOs, UN agencies, Red Cross Agencies, etc.).

Preferred:

5. Has skills and experience in one or more of the following areas: risk reduction, food economy/livelihood security, programming, management, finance/admin, logistics, human resources, post-crisis rehabilitation, disaster risk reduction, participatory approaches in emergencies, monitoring and evaluation methodologies or analysis.

6. Experience in monitoring and evaluation methodologies and analysis.

Duration

For the Team Leader, an estimated total of 28 days of fieldwork is foreseen from the start date, with an additional seven days for report writing at the end of the assignment. For the national consultant and internal agency staff, an estimated total of 28 days of fieldwork and three days of report writing is expected.
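For budgeting, these durations translate directly into person-days and consultancy fees. The sketch below is illustrative arithmetic only: the daily rates are invented, and it assumes four participating agencies contributing one team member each, which is not a figure from the ToR itself.

```python
# Illustrative only: person-days and fees implied by the durations above.
# Daily rates and the four-agency assumption are hypothetical.

team = [
    # (role, fieldwork_days, report_days, daily_rate_usd, headcount)
    ("team_leader",         28, 7, 500, 1),
    ("national_consultant", 28, 3, 250, 1),
    ("agency_team_member",  28, 3, 300, 4),  # assuming one member per agency
]

total_days = sum((field + report) * n for _, field, report, _, n in team)
total_fees = sum((field + report) * rate * n for _, field, report, rate, n in team)

print(f"{total_days} person-days, approx. USD {total_fees:,} in fees")
# 35 + 31 + 4*31 = 190 person-days; 35*500 + 31*250 + 4*31*300 = USD 62,450
```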


Sample Agreements Document

Ownership of the Products

An explicit agreement or 'protocol' should designate 'ownership' of the process and its products (e.g. the report and any other evaluation products such as articles, brochures, etc.). Ownership entails a series of aspects, from legal rights and obligations to decision-making authority, especially in the event of disputes. Such a protocol would also include agreement and guidance on decision-making processes (quorums, voting versus observer rights, etc.) and how these relate or are invoked at different stages or aspects of the process; e.g. day-to-day management decisions will be delegated to the evaluation administrator/manager, and perhaps the team, while more sensitive or important decisions will be made at higher levels.

Establishment of the Management Structure

Agreement and formalization would need to cover: procedures for establishing the management structure; dismantling of the structure (including consideration of future ownership of the products of the joint evaluation); a disputes-management mechanism (as in labour law, where potential disagreements are addressed in an agreed manner); and broad expectations for financial, material and human resource provision and management (including contracting the evaluation team and other resource provision, especially for day-to-day secretariat functions).

Incorporation of Feedback

Of particular importance is explicit agreement on how comments on the report will be incorporated, especially if they come from within the overall management structure: certain types of comments are always to be acted upon (such as errors of fact or inadequate verification), while others are advisory only (e.g. comments on interpretation or analysis); one way to apply this distinction is sketched after the checklist below. It may be useful to refer to the following checklist when reviewing the draft report:

- Consistency with the TOR: is the report consistent with the objectives described in the original TOR?

- Style and clarity: is the report of an acceptable professional standard? Are the language and format reader-friendly for the targeted stakeholders? Are the findings clearly stated, with adequate supporting evidence? Does the report strike an appropriate balance between conciseness and enough content so that it is not too general?

- Potential usefulness of the report to stakeholders: is it clear what is being recommended and to whom? Do the recommendations provide adequate guidance for follow-up, or are they too general? Are they realistic? Does the executive summary fulfill its purpose, i.e. provide a concise summary of the report, particularly for senior-level decision-makers who may not read the entire report?

- Prioritization: how successful has the evaluation been in prioritizing key issues? Are there too many recommendations? Are there any critical omissions?

- Accuracy: are there any important corrections and/or omissions to the text?
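As an illustration of the binding-versus-advisory distinction described above, the sketch below sorts review comments into those the team must act on and those it may weigh as advice. The comment categories and sample comments are hypothetical.

```python
# Illustrative only: triaging comments on a draft report. Factual corrections
# are always acted upon; comments on interpretation or analysis are advisory.

BINDING = {"error_of_fact", "inadequate_verification"}
ADVISORY = {"interpretation", "analysis", "style"}

comments = [
    {"id": 1, "type": "error_of_fact",
     "text": "Distribution ended in October, not September."},
    {"id": 2, "type": "interpretation",
     "text": "We read the delay as a donor issue, not an agency one."},
    {"id": 3, "type": "inadequate_verification",
     "text": "Only one source supports this finding."},
]

must_act = [c for c in comments if c["type"] in BINDING]
advisory = [c for c in comments if c["type"] in ADVISORY]

print(f"{len(must_act)} comments to act on, {len(advisory)} advisory")
# -> 2 comments to act on, 1 advisory
```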


Management Response & Action Planning

Once the report has been finalized, the next important step is for the evaluation steering committee to discuss the recommendations and agree on whether to "accept," "partially accept," or "reject" each one. If a recommendation is rejected, it is important to explain why. If a recommendation is accepted, a clear follow-up action plan with individual responsibilities should be entered in the management matrix. A partially accepted recommendation contains both elements: an action plan for follow-up and an explanation of why the recommendation was not fully accepted.
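A management response matrix built on this rule lends itself to a simple consistency check: every accepted or partially accepted recommendation needs an action plan with a named owner, and every rejected or partially accepted one needs an explanation. The sketch below is a minimal illustration with invented field names, not a prescribed ECB format.

```python
# Illustrative only: validating entries in a management response matrix.
# Field names ("status", "action_plan", "owner", "explanation") are hypothetical.

def response_problems(entry: dict) -> list[str]:
    """Flag matrix entries missing the follow-up fields their status requires."""
    problems = []
    status = entry["status"]  # one of: "accept", "partial", "reject"
    if status in ("accept", "partial") and not (entry.get("action_plan") and entry.get("owner")):
        problems.append("needs an action plan with a named owner")
    if status in ("reject", "partial") and not entry.get("explanation"):
        problems.append("needs an explanation of what was not accepted")
    return problems

entry = {
    "recommendation": "Pre-agree a joint-evaluation ToR before the next emergency.",
    "status": "partial",
    "action_plan": "Draft a standing ToR template by Q3.",
    "owner": "Lead agency M&E officer",
    # "explanation" is missing, so the check flags it:
}
print(response_problems(entry))
# -> ['needs an explanation of what was not accepted']
```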


Joint Evaluation Readiness Checklist

Assessment and scoping of the value of conducting a joint evaluation

- What is the added value of a joint exercise as opposed to individual exercises?
- Is there sufficient buy-in for a joint evaluation, including understanding of the complexities, costs and benefits?
- What agencies and other possible stakeholders might be interested and involved?
- Can they be consulted?
- Who will conduct the consultation, how, when and where (e.g. in an agency headquarters, the field, or a neutral venue)?
- How much time is available for the joint evaluation?
- How large and broad can the evaluation be, given the time and resources available?

Establishing a multi-agency management structure

- Can the main actors be identified and committed to the process, e.g. through their unambiguous commitment to provide time and resources?
- Can a lead or host agency be identified?
- Among those actors, what is the most effective and efficient management structure? (See organigram below.)
- Can an explicit agreement or protocol be signed on the roles, responsibilities, rights and obligations of all concerned? (See example below under sample tools.)

Designing the Joint Evaluation and Terms of Reference

- What are the possible uses of the proposed evaluation, and who will use the findings? This includes prioritization of those uses by possible target users and audiences for the products of the evaluation.
- What are the priority uses, and therefore the main evaluation questions that need to be addressed within the time and resource limits available?
- What methods are likely to be needed, in light of the purposes and coverage of the evaluation?
- What other aspects are needed for the ToR? Possible stakeholders to be involved, locations to be visited, duration, report style and length, etc. might be considered.

Team selection, preparation and planning

- What evaluator profiles are needed, and what will the size of the team be? The decision should be made by the steering committee.
- How and by whom will team members be selected and contracted? For example, selection may be delegated to the management sub-committee, based on accepted standards of professionalism, independence and transparency.
- How will ongoing coordination and information flows be managed among the main actors (management sub-committee, lead agency, JE manager, and the evaluation team)?


Conducting the Joint Evaluation, including analysis and reporting

- What methods and expertise are required to cover the range of issues, locations and aspects in the evaluation? This would mainly be the responsibility of the team and manager.
- What joint activities, including workshops and meetings, are required to facilitate quality understanding and analysis?
- When, how and by whom will draft reports be reviewed? (See sample protocol below.)

Dissemination and use

- How many and what type of products will result from the joint evaluation, in order to meet the needs of the diverse target groups/audiences, including individual and joint agency initiatives?
- How, by whom and by when will the products be disseminated and communicated? Agreement on a communication plan is required at the outset, during the planning stage.
- What resources are available, and from whom, for the implementation of the plan and for unforeseen costs?
- As a complement to the communication plan, can a joint follow-up action plan be developed to address issues raised in the evaluation?
- Is a new structure required to implement and monitor relevant recommendations? Agencies may wish to take the results forward into a new 'review-and-action' process.
- Will there be a review of the joint evaluation itself, to identify lessons from the exercise? This would probably require at least one workshop or meeting of all main actors and stakeholders.

A quick go/no-go screen distilled from this checklist is sketched below.
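A minimal sketch of that go/no-go screen follows; the condensed questions and the rule that every core condition must hold are our own illustration, not ECB guidance.

```python
# Illustrative only: a quick go/no-go screen condensed from the checklist above.
# The choice of "core" conditions and the all-must-hold rule are hypothetical.

CORE_CONDITIONS = [
    "added value over individual evaluations is clear",
    "sufficient buy-in from participating agencies",
    "lead or host agency identified",
    "explicit agreement/protocol on roles can be signed",
    "main evaluation questions fit the time and resources available",
]

def ready(answers: dict[str, bool]) -> bool:
    """A joint evaluation is 'ready' only if every core condition holds."""
    return all(answers.get(q, False) for q in CORE_CONDITIONS)

answers = {q: True for q in CORE_CONDITIONS}
answers["lead or host agency identified"] = False  # the Guatemala pitfall
print(ready(answers))  # -> False
```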


Suggested Topics for Discussion with Prospective Partner Agencies

The following questions were designed to be asked of each prospective partner, with answers recorded side by side (Partner 1, Partner 2, Partner 3).

1. Is your organization planning to carry out an evaluation?

2. If so, what will it be used for?

3. What resources have been set aside for it (funds, staff time, etc.)?

4. What is the current evaluation capacity within your organization (e.g. previous experience with evaluation, staff with monitoring and evaluation skills)?

5. When is it planned for and how much time has been set aside for it?

6. If an evaluation has not been planned, what types of information could you usefully gain from an evaluation?

7. How are evaluations viewed within your organizational culture?

8. What is your understanding of joint evaluations?

9. What do you believe you could gain from a joint evaluation?

References and Further Reading

Joint Evaluations

Guidance for Managing Joint Evaluations. DAC Evaluation Series, OECD 2006. http://www.oecd.org/dataoecd/29/28/37512030.pdf

Joint Evaluations: Recent Experiences, Lessons Learned, and Options for the Future. DAC Evaluation Network Working Paper, OECD, 2005.

Lessons About Multi-Agency Evaluations: Asian Tsunami Evaluation Coalition. http://www.tsunami-evaluation.org/NR/rdonlyres/9DBB5423-E2EF-43AB-B6D2-2F5237342949/0/tec_lessonslearned_ver2_march06_final.pdf

General Evaluations

USAID Center for Development Information and Evaluation, Performance Monitoring and Evaluation TIPS series. http://evalweb.usaid.gov/resources/tipsseries.cfm

Western Michigan University, Evaluation Center. Evaluation Checklists http://www.wmich.edu/evalctr/checklists/checklistmenu.htm#models

Shoestring Evaluation: Designing Impact Evaluations Under Budget, Time, and Data Constraints. M. Bamberger, J. Rugh, M. Church, and L. Fort, The American Journal of Evaluation, 2004.

Utilization‐Focused Evaluation Checklist. Michael Quinn Patton http://www.wmich.edu/evalctr/checklists/ufe.pdf

i Joint Independent Evaluation of the Humanitarian Response of CARE, CRS, Save the Children and World Vision to the 2005 Food Crisis in the Republic of Niger, November 2005.