Stockholm University School of Business, Fall 1998. Undergraduate study. Supervisor: Sten Köpniwsky
The Evaluation Report
A tool for improving development assistance or an account to please the decision-makers?
Tabitha H. Bakker Lena C. Gorringe
Contents
Introduction
Goal and objectives
Delimitation
Methods used
Structure of the thesis
1. Historical overview of development assistance
Summary
2. An introduction to evaluation of development assistance
2.1. Definitions
2.2. Ideas on evaluations since the 1960's
2.3. The function of evaluations
Summary
3. The evaluation report
Introduction
3.1 General goals
3.1.1 General guidelines
3.1.2 Examination of implementation
3.2 Methods used for the evaluation of development assistance projects
3.2.1 Terms of reference
3.2.1.1 General guidelines
3.2.1.2 Examination of implementation
3.2.2 The Logical Framework Approach
3.2.2.1 General guidelines
3.2.2.2 Examination of implementation
3.2.3 Economic Analyses
3.2.3.1 General guidelines
3.2.3.2 Examination of implementation
3.2.4 Fourth Generation Evaluation
3.2.4.1 General guidelines
3.2.4.2 Examination of implementation
3.3 Data Collection
3.3.1 General guidelines
3.3.2 Examination of implementation
3.4 Evaluation results
3.4.1 General guidelines
3.4.2 Examination of implementation
3.5 Participatory evaluation
3.5.1 General guidelines
3.5.2 Examination of implementation
Summary
4. Cost-benefit analysis
4.1 General principles of CBA
4.2 Use of CBA in evaluating development assistance
4.3 Summary and conclusions
5. Sustainable Development Records
5.1 Introduction to Sustainable Development Records
5.2 Description of Sustainable Development Records' methodology
5.3 Usefulness of SDR in evaluating development assistance
Summary
Conclusions
References
Introduction
Our thesis aims to be a critical examination of how evaluations of development assistance projects are conducted. General opinion seems to be that money put into development assistance (DA) is wasted on large, ineffective and even useless projects. The general public is increasingly unwilling to have its tax money used for DA. It is still assumed that DA consists mainly of large infrastructure projects, although this was actually only the case during the fifties and sixties. A significant part of the public has a negative attitude towards both DA projects and the people who are the recipients of this assistance. It is believed that recipients lack both the capability and the capacity to handle what is being offered by the donors. All in all, public opinion on DA has been colored by increasingly negative undertones.
At the same time, donor countries have cut their budgets for DA considerably. Economic hardship, and the realization that few DA projects have given desirable results over the years, has led to a decrease in DA from many countries. It is hard to defend a DA budget while cutbacks on public expenditure are taking place within the donor country itself. The end of the Cold War resulted in a decline in DA from both the USA and the Soviet Union. Countries that received DA before 1991 because of the political agendas of these two superpowers have run into considerable financial problems since then. Vietnam is one example; Cuba is another.
The necessity of DA is beyond question. As long as the world's collective wealth is divided as unequally as it is today, we find that it is the obligation of the privileged countries to contribute to the development of the rest of the world. In order to manage downsized DA budgets, and at the same time guarantee continued or even increased quality of the projects, relevant evaluation is a prerequisite. It is a necessary tool for feedback and learning lessons that can lead to improvements in the efficiency and effectiveness of DA.
Our interest in evaluations lies in critically exploring how they are conducted and what they bring about. We are especially interested in the economic analysis of DA. When measuring the yield of social projects, the most relevant method seems to be cost-benefit analysis (CBA). The fact that recipient countries in general have heavily distorted economies demands that specific attention be given to this method. Social CBA strives to diminish this problem by giving additional importance to exchange rates, investments and distribution. The main difficulty with CBA seems to lie in determining the rate of return and in estimating the monetary value of costs and benefits. Sustainable Development Records (SDR) is a method used to measure non-monetary values without translating them into monetary terms and without discounting them. We will therefore also explore the possibilities of using SDR for the evaluation of DA.
Goal and objectives
This thesis addresses those interested in evaluation methods for DA projects: officials at donor organizations and at intermediary organizations in the recipient countries; consultants concerned with evaluating DA projects; and students and academics in the departments of business administration, economics, human geography and cultural anthropology. Our goal is to create an awareness of the possibilities that evaluation offers when developed to its full potential. We also hope that the material presented in this study can serve as a basis upon which others can make recommendations.
The objective of our study is to critically examine evaluations of DA projects. We intend to discuss the following topics:
- General trends in DA
- Characteristics of evaluation of DA
- Goals to be achieved with evaluation of DA
- Methodology of evaluation of DA
- The evaluation process
- Evaluation practice
- Application of CBA in evaluation of DA
- Application of SDR in evaluation of DA
Delimitation
This thesis is a critical examination only and does not intend to give recommendations. Without more elaborate knowledge than we possess, our recommendations would be little more than platitudes. The evaluation methodology is examined only in general terms. Apart from CBA and SDR, we have chosen not to analyze further the different methods that are used in evaluations. Only the policies and experiences of Swedish and Dutch DA have been scrutinized in greater detail, since they have similar policies and goals for their DA projects.
Methods used
The literature study carried out in this thesis discusses both academically conducted studies and evaluations and reports by government agencies, intergovernmental agencies and non-governmental organizations that describe evaluations of DA projects. Furthermore, two interviews have been carried out: with Anders Berlin, economist at SIDA's Department for Evaluation and Internal Audit (UTV), and with Bo Olin, economist and consultant at the Stockholm House of Sustainable Economy.
We have started with a general inventory and comparison of the data presented. Next, we have extracted generally applicable principles concerning the evaluation of DA projects and their practical implementation, while maintaining a critical attitude.
Structure of the thesis
Chapter one explores how, during the last 50 or so years, ideas on DA have gone from development aid through development assistance to development cooperation, with a strong emphasis on sustainability.
In chapter two the changes that have occurred within the DA concept are described. They have had a significant impact on ideas on how evaluations of DA projects should take place. An evaluation is an assessment of a project's design, implementation and results; it has both a learning function and forms a basis for accountability. Participatory development has led to the realization that participation of all stakeholders is a necessity also in evaluating DA.
The third chapter is an elaboration on how the evaluation report comes about. The report is an assessment of the project's achievements concerning relevance, impact, effectiveness, efficiency and sustainability. For this, a succession of methods is used: terms of reference, the logical framework approach, cost-benefit analysis and cost-effectiveness analysis; lately, the fourth generation evaluation has been developed. Participatory evaluation is slowly becoming the norm, as it is described in many policies.
Chapter four takes a closer look at CBA as it is used for the assessment of DA projects. The recognized method when conducting an economic analysis of a social project is cost-benefit analysis, or CBA. The fact that DA projects are often carried out in heavily distorted economies demands special modifications of the CBA. This is addressed in the social CBA, which strives to diminish this problem by giving additional importance to exchange rates, investments and distribution.
Sustainable development records are discussed in chapter five. The emphasis that has been put on sustainable development within DA projects during the last decade has drawn our attention to the sustainable development records, or SDR. This method is an operational tool for directing an activity towards increased sustainability. Although the method, as far as we know, has never been used for evaluations, the first models to aid the monitoring of DA projects have been developed.
We finish our paper with an overview of the conclusions we have drawn from the material presented in this study.
1. Historical overview of development assistance
This chapter explores how, during the last 50 or so years, ideas on DA have gone from development aid through development assistance to development cooperation, with a strong emphasis on sustainability.
Approaches to development have changed considerably over the last 50 years. Since the first DA projects started in the 1950s, planners and financiers have come to the conclusion that the strategy of imposing their own (Western) models of development upon developing countries does not give the desired results. In the worst case, it can even be harmful to the environment in which the project takes place. Instead, sustainability has become an increasingly important factor when planning, implementing and evaluating development assistance. The approach has shifted from a centralist to a participatory principle. The tendency in DA projects is that the emphasis lies more and more on recipient responsibility [Croll&Parkin 1992]. The recipient bears the responsibility in all phases of the project or program: planning, implementation and evaluation. The role of the donor is changing from being an "aid provider" to being a "money provider", a financier.
At first the objectives and the modes of implementation were largely determined by a small group, usually consisting of representatives of donor organizations, local government and the project staff. The local population might have been used as informants and sometimes consulted for their opinions. In participatory projects the objectives, planning and implementation are the outcome of choices and decisions made by the local community itself.
There are three main phases to be recognized in the evolution of attitudes:
- 1950-1970: urban-based industrial development
- 1970-1980: dependency theory
- 1980-present: participatory development
The Marshall Plan, with massive capital investment in new industries, successfully stimulated the resurgence of European industry after the devastation of World War II. This encouraged development theorists to follow a similar reasoning when they turned their attention to the poor nations of Latin America, Africa and Asia; nations that were seen as socially, economically and politically backward. There was a strong preference for urban-based industrial developments to provide jobs for the urban poor; the wealth created by these industries would, when invested in subsidiary enterprises, create more jobs and generate further wealth. From this wealth, taxes should be raised to invest in schemes of public health and education.
The realization that this "modernization" mainly benefited the elites of the developing countries changed attitudes to development towards the end of the 1960s [Dickenson 1996:15]. The world-system theory became dominant: all parts of the world are integrated, politically as well as economically, into a world-system dominated by the "core" (North America and Western Europe), with a "semi-periphery" consisting of richer Third World states
and a "periphery" made up of the poorest countries. It was the dependency on the core that constrained development in other parts of the world. The third phase, starting in the 1980s, shows a recognition of the possibilities for indigenous development models and strategies: DA using models, assumptions and objectives formulated within the developing countries themselves is becoming the tendency. Also, "environmental projects" have become increasingly important in the development field [Croll&Parkin 1992:131]: their principal objective is the sustainable management of nature and/or natural resources.
Participatory development is a method which involves all stakeholders in the project [MFAF 1997:73]. It is a partnership based upon dialogue among the various stakeholders, where (a) the agenda is set jointly; (b) local views and indigenous knowledge are deliberately sought and respected; (c) the project agenda is set through negotiation rather than external prerogatives; and (d) beneficiaries become actors.
We find that donor organizations must realize that indigenous knowledge of the environment is crucial in order to guarantee a project's sustainability. Sustainability has two important aspects: environmental consequences on the one hand and project life span on the other. In order to guarantee sustainability it is imperative that the local population is part of the DA project from beginning to end. They should have an important role in the planning as well as the implementation and evaluation phases of the project. It is the participation of the local population that is decisive for the project's life span. If they do not recognize the goals and objectives of the project, it will cease to exist after the donor organization has withdrawn and handed over its responsibilities.
Gender analysis is linked directly to efficiency and sustainability [MFAF 1997:20]. Involving both women and men means that the resources will be used more efficiently. When both men and women take part in the planning and implementation of a project, the objectives are more widely accepted.
Summary
From the late 1980s and onward, participatory development has resulted in the recipient, in many cases the local population, becoming the main focus of attention. More and more of the responsibility for the planning, implementation and evaluation of DA has shifted towards the recipient.
2. An introduction to evaluation of development assistance
The changes that have occurred within the DA concept have had a significant impact on ideas on how evaluations of DA projects should take place. An evaluation is an assessment of a project's design, implementation and results, and has both a learning function and forms a basis for accountability. Participatory development has led to the realization that participation of all stakeholders is a necessity also in evaluating DA.
2.1. Definitions
Evaluation of development assistance, as defined by the OECD, Organization for Economic Cooperation and Development [NDC 1995:15-16], is: "an assessment, as systematic and objective as possible, of an ongoing or completed project or program or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives: developmental efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors."
The DAC, the OECD's Development Assistance Committee, uses the following definition of evaluation [Carlsson 1994:9]: "An examination as systematic and objective as possible of an on-going or completed project or program, its design, implementation and results, with the aim of determining its efficiency, effectiveness, impact, sustainability and the relevance of the objectives. The purpose of an evaluation is to guide decision makers."
SIDA, the Swedish International Development Cooperation Agency, defines its evaluation policy in the following manner [SIDA 1997:5]: "An evaluation is a careful and systematic ex-post assessment of the design, implementation and results of an activity in relation to its objectives. Within the field of development assistance, subjects of evaluation may be one or several aspects of ongoing or completed projects, programs, action plans, or policies."
All three institutions define the evaluation as an assessment of the project's design, implementation and results. The OECD and the DAC also describe the function of the evaluation: to determine efficiency, effectiveness, impact and sustainability and to guide the decision-making process.
2.2. Ideas on evaluations since the 1960's
The evaluation of development aid has undergone a number of changes over the years [NDC 1995: 14]. In the 1960s and 1970s project-oriented evaluation was dominant. Most studies were relatively simple in design, particularly those considering unisectoral activities:
developing physical infrastructure and improving agricultural and industrial productivity. Evaluations of such interventions naturally tended to concentrate on analyses of economic costs and benefits. The advent of multisectoral projects aimed at target groups, the increased role of institutional support in aid activities, and the move towards a more process-oriented approach to implementation brought a change in the nature of evaluation. Determining the immediate and wider effects of such interventions calls for complex research methods and multidisciplinary evaluation teams. The 1980s brought new forms of development aid, among them program aid and macroeconomic support, whose evaluation also demands a special approach.
Donor organizations nowadays are sponsoring complex activities throughout the world, in hugely differing social, political, cultural and economic environments [NDC 1995:11]. This, together with a shift towards a more pragmatic approach to development assistance, as opposed to the largely ideologically inspired approaches of the past, has led to a sharper focus on the effectiveness and efficiency of aid activities, which is closely linked to a traditional, rationalistic view of the organization. When donors assess the performance of aid they usually do so in four dimensions, all of which are compared with alternative uses of the resources:
- achievement of objectives
- overall impact
- cost-effectiveness
- general development results
The task of the evaluation is to assess performance along these dimensions and weigh them in order to arrive at a balanced judgement of the performance.
Policy proposals [Stokke&Cass 1991:74] that have "value for money" implications, especially those relating to large public expenditure decisions, must now state what is to be achieved, by whom, at what cost, and how it will be measured. Also, there should be a relationship between the cost of the evaluation and the cost of the project being evaluated [SIDA 1997:40].
Responsibility for the execution of evaluations lies increasingly with the beneficiaries. This requires distinct capabilities. It is our opinion that it is within the responsibilities of the donors to provide the competencies needed to perform these evaluations in a satisfactory manner. The DAC report from 1988 [OECD/DAC 1997:53] already observes a significant strengthening of the evaluation process in many developing countries. A sincere and long-term commitment to strengthening evaluation capacity in developing countries means leaving more of the initiative and design of evaluations to the recipients [OECD/DAC 1997:54]. While much of the assessment of effects and impacts should be left to the recipient country, donors will continue to have needs for evaluation, both for accountability purposes and to provide lessons on the adequacy of their aid delivery systems.
One example is the Evaluation Capacity Building (ECB) programs that are being realized by the World Bank, among others. Donor support strategies for ECB [OECD/DAC 1997:55], as given by the OECD, include the guideline that the duration and scope of support should be flexible and balanced between needs for long-term relations and ownership by host institutions. It is also stated that consideration of support to either a national-level evaluation or a performance auditing system should include policy demand for
its use and legislative backing of the system. An important element of the recent concern about evaluation capacity in developing countries is the acknowledgement that donors and host countries each have their legitimate, but sometimes different, interests in evaluation.
It has, in particular, been the World Bank and the regional development banks which have been most active in supporting national evaluation systems and stimulating the demand for evaluation in connection with public sector reforms and good governance initiatives. Multilateral donors have tended to focus their interventions at the national level, supporting overall evaluation systems, often as part of broader public sector initiatives. Bilateral donors have, on the other hand, been more inclined to support evaluation functions at the department or project/program level. Many donors also see their efforts to make joint evaluations as a means to support capacity. The DAC Expert Group on Aid Evaluation has already produced a compendium of Donor Practices, with the intention of helping developing countries when setting up their own evaluation units. Additionally, the first international training seminar on evaluation methodology has been held.
The latest development is the so-called Fourth Generation Evaluation [OECD/DAC 1997:98], where the evaluation becomes a forum for debate in which all stakeholders should be involved. It is a learning process involving different sets of understandings that need to be negotiated. The outcome of the evaluation is not a list of conclusions based on "objective" value judgements, but an agenda for negotiation based on the claims, concerns and issues that were not resolved in the evaluation dialogue. This demands a wider framework in which "truth" and "fact" are recognized for their subjectivity.
2.3. The function of evaluations
Evaluation is, above all, a tool for decision making. It can be used by donor organizations and by politicians both to justify aid allocations and to measure to what extent aid actually works [NDC 1995:14]. An evaluation is an activity for finding out the value, or result, of something. It is an activity that answers the information needs of various actors in the organization. The basic function of an evaluation is to answer questions regarding planning, monitoring and implementation, impact and efficiency. It is not only a learning tool to improve future aid policy and interventions, but also provides a basis for accountability.
The function of evaluation within the aid agency can be described not only from what it is supposed to measure, but also in light of the function it may have in the agency [Carlsson 1994:9-10].
1. It is an instrument for monitoring and should therefore contribute to an improvement of the activities. Within this context we can identify four major functions:
- Evaluations can be warning signals. They call attention to immediate problems, or point out risks for developments within a project which may run contrary to the initial intentions.
- Evaluations can serve as guides and point out the direction towards improved programs and projects.
- Evaluations can be concept innovators, proposing new models and departures for aid projects. At a higher level they can offer new frames of reference.
- Good evaluations can, finally, serve as both sparks and fuel for debates on programs, policy and strategy.
2. The feedback of the knowledge generated by evaluations on program design and strategy is a crucial question when it comes to ensuring effective aid. Analysis and feedback from previous development assistance projects should be instrumental in planning and preparing for new projects [NDC 1995:14]. It is not enough that the feedback works at the level of the project officer; it also has to work at higher levels in the decision hierarchy. Evaluations should also provide information about the results of aid to those providing the resources: the general public.
3. Evaluation can also have a control function [NDC 1995:14]. The actual aid activities take place in developing countries and are, as a rule, undertaken by institutions which differ from donor organizations in their administrative structures and decision-making processes. The need for control is considerable, and evaluation is seen as instrumental in this respect.
These functions may not always be compatible. The need to improve future aid policy, programs and projects through feedback of lessons learned can seem to conflict with the task of providing information to the public and a basis for accountability. Critical evaluations may seem damaging in the short run, giving the public an idea of the ineffectiveness of the organization. In the long run, however, these evaluations can lead to a stronger, more efficient organization. It is therefore imperative to clarify how evaluation leads to better results in the future.
The evaluation in itself is controversial. A critical evaluation gives the organization opportunities to learn from the past, but the same critical evaluation could result in diminished allocation of funds for the organization. The risk exists that evaluations become opportunistic: reports with the objective of pleasing the reader rather than critically examining the project itself. Summing up, there is no doubt that evaluation, as an activity, is especially responsive to outside pressures [Stokke&Cass 1991:78]. This will always be the case as long as evaluation is required to meet not only the objective of improving the quality of aid administration and implementation, but also that of satisfying the need for others to be assured that "aid works".
Recent studies by the OECD have shown that evaluations have little impact on aid, and that the learning effect in the aid administration has been small [OECD/DAC 1997:73]. All evaluation efforts are wasted if there is no change in organizational behavior as a result. The most common reasons at SIDA to initiate an evaluation [SIDA 1997:46] are: (a) it is decided in the project document; (b) it is part of the agreement with other stakeholders; (c) it is interesting to have in the evaluation plan; and (d) it is needed as input to country strategies. It is striking that the need for feedback and learning lessons for future projects is not among these reasons.
Summary
The focus of the evaluation of DA projects is shifting from a strictly donor-oriented affair to the realization that all stakeholders should have the possibility to influence the goals set for the evaluation, the way data is collected, and the conclusions drawn.
3. The evaluation report
The evaluation report is an assessment of the project's achievements concerning relevance, impact, effectiveness, efficiency and sustainability. For this, a succession of methods is used: terms of reference, the logical framework approach, cost-benefit analysis and cost-effectiveness analysis; lately, the fourth generation evaluation has been developed. Participatory evaluation is slowly becoming the norm, as it is described in many policies. When discussing the different aspects of the evaluation report, we start with a description of the general guidelines, after which we examine the implementation.
Introduction
The OECD [OECD/DAC 1997:74] distinguishes four different types of evaluations:
1. Expert evaluations are traditional ex-post evaluations.
2. Process evaluations include reviews and information feedback during project implementation, important for pilot projects.
3. "Problem oriented" evaluations deal with selected issues (for example institutional issues, gender, poverty, etc.).
4. Participatory evaluations use facilitators. Project staff can be used when the monitoring and evaluation is part of the project itself.
3.1 General goals
3.1.1 General guidelines
It is essential that an evaluation of DA projects addresses the following five issues [MFAF 1997:63-64]:
1. Relevance: does the project make sense within the context of its environment? Are the results, purpose and overall objectives of the project in line with the needs and expectations of the beneficiaries?
2. Impact: has there been a change towards the achievement of the overall objectives of DA as a consequence of the project?
3. Effectiveness: how well has the project goal been achieved?
4. Efficiency: what is the relation between the results and the means, i.e. has the process been cost-effective?
5. Sustainability: to what extent do the benefits produced by the project continue after the external assistance has come to an end?
3.1.2 Examination of implementation
A Dutch study [NDC 1995:38 f.] analyzing 180 reports from the period 1987-90 indicates that the amount of attention devoted to the efficiency and effectiveness of projects varied. In most cases evaluations lacked the data, indicators and standards needed for quantitative analysis and therefore limited themselves to simply giving impressions. In general, evaluations showed little concern with efficiency. Some reports included remarks on more efficient alternatives, but these tended not to be underpinned by figures or analysis. The following factors made it difficult to determine effectiveness and impact: (a) inadequate project preparation; (b) flawed analysis of the problems addressed; (c) a lack of standards against which results could be measured; and (d) inadequate descriptions of starting positions.
Only about fifty percent of SIDA's evaluations include an adequate analysis of the achievement of project objectives [SIDA 1997:48]. There is a clear gap between the evaluation reports and the official policy that the evaluation should cover "relevance, goal achievement, effects, efficiency, sustainability" [SIDA 1997:44-45]. The Dutch study [NDC 1995:38] shows that while virtually every evaluation looked at whether projects had achieved their objectives, in most cases the analyses of effectiveness and impact were unsystematic and incomplete. Most reports considered whether planned outputs had been achieved, but these assessments were generally presented without quantitative data and were hence inadequate. In its Tenth Annual Review of Project Performance Audit Results (1984), the World Bank observed that only 42 percent of its projects had been implemented more or less as planned [Stokke&Cass 1991:89]. The rest had been changed in some significant way. The question is whether success is to be judged by the degree to which the original objectives were achieved.
The Dutch study ascertained that little attention was focused on long-term effects. For the most part, evaluations dealt with short-term goals and whether or not they had been, or were likely to be, achieved. Judgements about sustainability tended not to be backed up by solid arguments.
One of the problems concerning evaluations is the fact that many development projects do not have precisely defined moments when the project begins or ends [Carlsson 1994:197]. They often have the character of a continuous process rather than a fairly limited input in time. This complicates the decision of when to start evaluating in order to make a reasonable assessment. It is often impossible to decide, a year or two after the aid has ceased, whether a project is successful or not. It is often simply too early to form such a judgement [Stokke&Cass 1991: 89-90].
3.2 Methods used for the evaluation of development assistance projects
3.2.1 Terms of reference
3.2.1.1 General guidelines
The terms of reference contain the most important aspects of an evaluation. They form the basic structure of the evaluation report, which must, in principle, answer the questions presented in the terms of reference.
The general format for the terms of reference for evaluations [MFAF 1997:71] is as follows:
1. Subject of the evaluation
2. Background of the evaluation
3. Evaluation issues
4. Compatibility and sustainability
5. Methodology
6. Evaluation team
7. Reporting
3.2.1.2 Examination of implementation
Using the terms of reference as a format for evaluations seems to be a generally applied and helpful method. Program officers at SIDA [SIDA 1997:34] found that the final report is true to the intentions of the terms of reference.
3.2.2 The Logical Framework Approach
3.2.2.1 General guidelines
In the Logical Framework Approach (LFA) the project is broken down into a hierarchical structure of objectives [SIDA 1996b:8]. The elements of the project are listed in logical order from activity level to development objectives. Each level is linked to the next, as shown below.
At each level, what was planned is compared with what was accomplished:
- Development objectives: the long-term objectives to which the project contributes.
- Project objective: fulfillment of the project objective contributes in the long term to the achievement of the development objectives.
- Results (goods and services received by the target group): achievement of the planned results leads to achievement of the project objective.
- Activities: implementation of the activities according to the plan of operation permits achievement of the planned results.
- Inputs: if sufficient relevant resources are available, the activities can be implemented.
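To make the logic of the hierarchy concrete, it can be sketched as a simple data structure in which the planned/accomplished comparison is read off level by level. This is our own illustrative sketch, not part of the LFA literature; the level names follow the figure above, and all project entries are invented.

```python
# Minimal sketch of the LFA hierarchy: each level is linked to the next,
# so a planned vs. accomplished comparison can be read off per level.
# Level names follow the figure above; the sample entries are hypothetical.

LFA_LEVELS = [
    "inputs",                  # resources made available
    "activities",              # work carried out with those resources
    "results",                 # goods and services received by the target group
    "project objective",       # what the results should lead to
    "development objectives",  # long-term goals the project contributes to
]

# A hypothetical project, stated as planned vs. accomplished per level.
project = {
    "inputs":      {"planned": "2 engineers, 1 vehicle",  "accomplished": "2 engineers, 1 vehicle"},
    "activities":  {"planned": "drill 30 wells",          "accomplished": "drilled 24 wells"},
    "results":     {"planned": "safe water, 15 villages", "accomplished": "safe water, 12 villages"},
    "project objective":      {"planned": "reduced waterborne disease", "accomplished": "not yet measured"},
    "development objectives": {"planned": "improved rural health",      "accomplished": "not assessable"},
}

for level in LFA_LEVELS:
    entry = project[level]
    print(f"{level:>22}: planned={entry['planned']!r}, accomplished={entry['accomplished']!r}")
```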
The LFA [NDC 1995:38] is a way of structuring the main elements in a project, highlighting logical linkages between intended inputs, planned activities and expected results. LFA is therefore one of several tools to be used during project preparation, implementation and evaluation, and can lend continuity to the process. LFA can be used for evaluations provided that it has been part of the project's process from the beginning.
LFA is a logical, analytical instrument which facilitates the task of identifying problems and formulating objectives [SIDA 1996b:2]. The method demonstrates the connection between objectives, means and results and is a well tried and tested instrument for structuring planning and for reporting results and costs. LFA is used to identify problems and needs in a certain sector of society; it facilitates selecting and setting priorities between development projects; it aids the effective planning and implementation of development projects; and it supports the follow-up and evaluation of development projects.
The systematic application of LFA, with good judgement and sound common sense, will help to improve the quality, efficiency and sustainability of development assistance [OECD/DAC 1997:1]. Both economic and financial analyses require that a project's objectives and success indicators are clearly defined [MFAF 1997:16]. Systematic use of the logical framework technique makes this task easier. Analyses conducted during the planning and appraisal phases of the project must be followed up systematically. LFA has the aim of improving the quality of project operations and can only achieve this if the user has a good grasp of the method and uses it throughout the entire project.
One of the dangers of the LFA approach is that it may encourage project managers to take too static a view of projects [Stokke&Cass 1991:92]. It could be seen as a “blueprint” rather than a starting-off point for projects that may well have to change shape in response to changing circumstances. One of the key lessons of evaluations of people-centered projects is that management has to be highly flexible and quickly responsive to the experience as the project is implemented. This implies that the LFA matrix has to be revised frequently if it is not to become a brake on flexible management.
SIDA is one of the donor organizations that applies LFA. When using LFA it is presumed that the owner of the project and the donor of development assistance are clear about their respective roles. The owner of the project can be the recipient organization which receives support from SIDA, or the target group which the project is intended to benefit. This is where the responsibility lies for the planning, implementation and follow-up of the project [SIDA 1996b:1].
3.2.2.2 Examination of implementation
Bo Olin at the Stockholm House of Sustainable Economy [Olin 27-11-1998] has examined a large number of SIDA’s evaluations and has come to the conclusion that, although LFA is extensively used in the planning phase of DA projects, there is no continuity in its use. Evaluations do not refer to the objectives stated in the LFA, when assessing effectiveness.
3.2.3 Economic Analyses
3.2.3.1 General guidelines
The financial sustainability of a DA project is assessed by applying economic analyses, which measure the impact of a development project on the partner country's overall economy and/or on a specific sector [MFAF 1997:15]. The following steps can be recognized:
- identification and consistent valuation of all costs and benefits
- phasing of costs and benefits over time
- evaluation of project acceptability
The two methods most frequently utilized are cost-benefit analysis (CBA) and cost-effectiveness analysis:
- CBA compares the benefits of a project with its costs. If the net return is greater than that of the alternative use of the money, the project is successful [Carlsson 1994:14]. CBA is mainly applicable to "hard" projects (for example infrastructure projects) in which benefits can be measured in monetary terms [MFAF 1997:15]. CBA is a normative exercise that should aid, but not dictate, decision-making [Israelsson&Modée 1997:14].
- Cost-effectiveness analysis determines the price of a given benefit per identifiable unit. It allows comparison of projects with similar goals and assists in identifying the most cost-effective way to achieve a specific objective. It can also be applied to "soft" projects such as health or educational programs [MFAF 1997:15].
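The difference between the two criteria can be illustrated with a small numerical sketch. All figures, including the discount rate, are invented; the point is only that CBA asks whether the discounted net benefits of the project exceed those of the alternative use of the money, while cost-effectiveness analysis prices one identifiable unit of benefit.

```python
# Illustrative sketch of the two criteria; all figures are hypothetical.

def npv(cash_flows, rate):
    """Net present value of yearly net benefits (benefits - costs), year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Cost-benefit analysis: discounted net benefits of the project vs. the
# alternative use of the same money.
project_flows     = [-100, 30, 40, 45, 35]   # net benefits per year
alternative_flows = [-100, 35, 35, 35, 35]
rate = 0.08                                  # assumed discount rate

print("project NPV    :", round(npv(project_flows, rate), 1))
print("alternative NPV:", round(npv(alternative_flows, rate), 1))

# Cost-effectiveness analysis: cost per identifiable unit of benefit,
# e.g. cost per person reached, for comparing projects with similar goals.
total_cost, units = 120_000, 8_000           # hypothetical "soft" project
print("cost per unit  :", total_cost / units)
```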
Financial assessments can be used both to decide whether or not a project is viable and should be carried out, and to determine afterwards whether or not a project has been successful or cost-effective.
3.2.3.2 Examination of implementation
Cost-effectiveness is seldom assessed in SIDA's evaluations [SIDA 1997:48]. Less than 20% of the evaluations analyze the aid agency's performance. Program officers [SIDA 1997:34] consider the analyses of impact and cost-effectiveness to be disappointing.
3.2.4 Fourth Generation Evaluation
3.2.4.1 General guidelines
The fourth generation evaluation is a form of negotiation that includes all beneficiaries [OECD/DAC 1997:98]. It transforms the evaluation into a process of empowerment that offers opportunities for furthering our understanding of the possibilities of a participatory agenda. It includes the following steps:
1. Identify the full range of interested parties.
2. Find out how the evaluation is perceived – stakeholder claims and concerns.
3. Provide context and a methodology through which these claims and concerns can be understood, taken into account and constructively criticized.
4. Generate as much consensus about different interpretations as possible.
5. Prepare an agenda for negotiation.
6. Collect and provide the information requested in the agenda.
7. Establish a forum of stakeholders in which negotiation can take place.
8. Develop a text available to all.
9. Recycle the evaluation to take up unresolved issues.
This method of evaluating DA projects is focused mainly on learning lessons for future cooperation and less on accountability.
3.2.4.2 Examination of implementation
We have not been able to find any information on whether and how this method is being applied in evaluations.
3.3 Data Collection
3.3.1 General guidelines
Two broad families of data collection methods can be distinguished [OECD/DAC 1997:25]:
1. Traditional quantitative methods, consisting of sample surveys with standardized questionnaires and interviews. Important data can be: (a) the number of project beneficiaries; (b) the frequency of project meetings; and (c) beneficiary contributions in terms of labor, money or materials, distribution of benefits, etc.
2. Qualitative methods, carried out through direct observation by evaluators. Data is collected through: (a) participant observation; (b) group discussions; (c) key informant or resource person interviews, where choosing the right person to interview and interpreting their views correctly becomes crucial; and (d) field workshops.
Data collection for an evaluation can seldom cover the whole range of stakeholders who are affected by the intervention [MFAF 1997:72]. Defining how to choose a sample therefore becomes essential; its size and nature determine whether it is representative or not. Using a control group which has not been affected by the project, and comparing it with a group of project beneficiaries before and/or after the project takes place, makes comparison possible. However, since the use of a control group is costly and difficult, it is common to study only the situation before and after the intervention. Results can then be compared with other, similar projects. Without any comparison with a control group, the result is quite predictable: the project had some impact on various aspects but could have done better in others.
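One common way of making such a before/after comparison with a control group explicit is a so-called difference-in-differences estimate. The sketch below is our own illustration with invented survey means; it is not a method prescribed by the guidelines cited above.

```python
# Hypothetical survey means (e.g. household income) before and after the project.
beneficiaries = {"before": 410.0, "after": 480.0}  # group affected by the project
control       = {"before": 405.0, "after": 430.0}  # comparable unaffected group

change_beneficiaries = beneficiaries["after"] - beneficiaries["before"]  # 70.0
change_control       = control["after"] - control["before"]             # 25.0

# The control group's change approximates what would have happened anyway;
# the difference between the two changes is attributed to the intervention.
estimated_impact = change_beneficiaries - change_control
print("estimated project impact:", estimated_impact)  # 45.0
```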
3.3.2 Examination of implementation
SIDA’s evaluators concentrate on stakeholders closest to the project [SIDA 1997:37] (the project staff); the range of consultations seems to be very narrow. In general, the OECD has observed that receiving relevant information during project implementation seems to be a major concern for the donors.
3.4 Evaluation results
3.4.1 General guidelines
An evaluation activity can take the shape of working documents, weekly letters, brief studies, scientifically based investigations, etc. It should meet the following quality standards [SIDA 1997:33-41]:
- Utility: the evaluation is informative, timely and influential. It defines its audience in order to be able to respond to their needs. The information provided is relevant and useful to stakeholders in the project. Information is timely with regard to the decision-making process.
- Feasibility: the evaluation's design is operable in field settings. The evaluation does not consume more resources, materials, personnel or time than is needed to address the evaluation questions. The evaluation is properly designed for field conditions and makes the most of available human and financial resources.
- Propriety: the rights of the individuals affected by the evaluation are protected. The evaluation is sensitive to, and warns against, unlawful, unethical and unsuitable actions by those who conduct the evaluation. Aid is a relationship between two parties with unequal access to resources; this in itself introduces potential conflict between the parties.
- Accuracy: the evaluation is comprehensive. It considers as many of the program's identifiable features as is practical. Information is technically adequate. Judgements are linked logically to the data. The working hypothesis and any assumptions made regarding the research task are clearly stated.
Evaluation is a subjective activity [Carlsson 1994:11]. The evaluation process starts with somebody asking questions which need answers. One of the criteria for a good evaluation is therefore whether the user of the evaluation gets the information that will enable him or her to make a good decision. The credibility of an evaluation depends on the expertise and independence of the evaluators and on the degree of transparency of the evaluation process [MFAF 1997:71-72]. Evaluators should have the expertise required to make relevant and realistic conclusions and recommendations based on the findings. The reliability of the findings, in turn, depends on the empirical evidence the evaluators can show to support them, including sources of information. Evaluation reports must show the evidence and clearly distinguish between the findings and the conclusions/recommendations.
3.4.2 Examination of implementation
Utility
Concerning SIDA's evaluations, project officers are of the opinion that it is positive that the evaluations are produced in time and in English (a language understood by most stakeholders) [SIDA 1997:38]. They also found the reports to be well written and well organized, and the information easily accessible. It turns out, however, that not many program officers and evaluation coordinators have consulted SIDA's evaluation policy, and several reports do not meet the minimum standards for evaluation reports.
SIDA's program officers [SIDA 1997:36] appreciate that evaluations have a project-specific orientation, that they concentrate on project achievements, and that they have a perspective that places the project in a country context. At the same time, program officers observe that evaluations fail in assessing the impact of the project on the surrounding society. They also find that evaluations often add little if any new knowledge about the activity that is being evaluated. This may be due to the fact that only project officers are being interviewed.
Feasibility
The costs of SIDA's evaluations are not in any way related to the size of the projects [SIDA 1997:41]. We have not found any figures expressing which part of the total project budget is set aside for performing evaluations. It seems impossible to measure the cost-effectiveness of the evaluations, since this would require linking the cost of an activity to a clearly identifiable output, something that is hard to determine where evaluations are concerned.
Propriety
Powerful stakeholders can be expected to exercise an influence that may make the evaluation more useful to them [SIDA 1997:38]. In fact, beneficiaries are not very important to the evaluators of SIDA's DA projects; the fact that only project officers are consulted confirms this observation. Aid evaluations turn out to be mainly donor-oriented activities, primarily produced for internal consumption and use within the donor organization. Statistics show that 90% of the evaluations give recommendations for the donor organization, while only 43% give recommendations for the recipient organization. This makes evaluations considerably less useful for recipient organizations. As evaluations are to such a large extent a donor-driven affair, one could even suspect that the recipient party – rather than the donor agency – is felt to be under scrutiny [SIDA 1997:41].
Accuracy
Evaluations suffer from the fundamental difficulty that, at the end of the day, it is a matter of someone making a personal judgement [Stokke&Cass 1991:89-90]. A test was arranged in which several people independently scored a specific project, to see if the results were reasonably close. Unfortunately they differed quite significantly, and the author adds that "indeed it would be surprising if everyone were to form the same judgement about such aspects as sustainability, or the degree of help to the poorest".
The general absence of a coherent methodology [NDC 1995: 89], the lack of decent statistical material and of indicators and standards against which the results of a project can be measured, means that assessments rest predominantly on the impressions of the evaluators. As a result, the information contained in evaluation reports can be both shallow and narrow.
Experts in the relevant technical field carry out most evaluations [NDC 1995:19], but only a minority of them has specific expertise in evaluation. Hence evaluations are concerned with the achievement of technical objectives more than with social and economic goals. They are generally snapshots of projects in progress, which emphasize implementation and management rather than analyze their immediate and wider effects.
Concerning SIDA's evaluations, it has been determined that no descriptive information is used and that source references are missing [SIDA 1997:39]. Also, few consultants are explicit in accounting for the data sources used and how they arrived at their conclusions. The quality of the evaluations is low [SIDA 1997:44-45] and many times there are no good reasons to
believe the evaluators. They do not present any raw data and they do not inform the reader how they have arrived at their conclusions. The methodological awareness is low; whether we believe them or not is a matter of how well we know them, and of how well their conclusions coincide with what we believe ourselves to know. At the same time, the program officers seem to perceive the same evaluations differently [SIDA 1997:36]. They consider evaluations to give a balanced and objective view and the results to be very reliable. In their opinion, data and sources are used in such a way that it is clear to the reader how the results were obtained.
3.5 Participatory evaluation
3.5.1 General guidelines
The DAC Expert Group on Aid Evaluation [NDC 1995:14] promotes the idea that aid activities should be evaluated jointly by the donor and the recipient. Better ownership implies that decisions and control increasingly rest with the partner country [MFAF 1997:18]. Furthermore, they should be transferred from intermediary, implementing organizations in the partner country to the beneficiaries themselves. Development must be a participatory process, which means that the various stakeholders should influence and share control over the development initiatives, decisions and resources which affect their lives. Participatory techniques (group discussions, key informant interviews, field workshops) increase the influence of different stakeholders on the evaluation findings and improve their credibility and usefulness [MFAF 1997:73].
In participatory evaluation, a variety of beneficiaries and other stakeholders actively take part in the different phases of the evaluation process:
- determining the evaluation objectives;
- selecting procedures and data collection methods;
- making recommendations and deciding which actions are taken.
Participatory evaluation transforms the evaluation process into an opportunity to negotiate, to learn and to be empowered to take action. This is an obvious move towards fourth generation evaluation.
It is often believed that by assigning somebody not directly involved in the project or program under scrutiny – an independent evaluator – an objective evaluation of the project will be produced [Carlsson 1994:11]. What happens as the result of the external, independent evaluator's work, however, is simply that we get yet another subjective analysis. Objectivity exists only in the sense that evaluators truthfully describe how they have conducted their investigation. The quality of evaluations depends on the methodology used and on the expertise and impartiality of the evaluators chosen. Participation of different stakeholders increases the credibility of findings and the usefulness of evaluation results [MFAF 1997:70]. Participation in evaluation should not be seen only as a means to improve quality but also as a valuable end in itself [MFAF 1997:74]: by participating in evaluations of donor interventions, local organizations and individuals gain experience and skills which benefit national programs. Local evaluation capacity building serves both the donor's and the partner country's own efforts to increase transparency.
3.5.2 Examination of implementation
The OECD [OECD/DAC 1997:25] has observed an increasing emphasis on the importance of participation, especially in donor policy documents such as participation guidelines and procedural notes. Also, there is an increase in the use of some participatory techniques in evaluations, such as the use of focus groups or key informants.
Participation by evaluators from developing countries is usually limited [OECD/DAC 1997:25]. Despite participation rhetoric, there is very little evidence of genuine participation in evaluation. This may reflect unease on the part of donor agencies regarding these innovative approaches. It could also be that limitations are imposed by demands for accountability; the demands for objective results imply the use of external evaluators and traditional methods. The OECD [OECD/DAC 1997:25] recommends that participation be mainstreamed by donor agencies into their operations and that evaluation tools be further developed.
Evaluation is initiated mainly by donors and carried out with help from foreign experts. Developing countries and those implementing projects see this as a one-sided instrument of control [NDC 1995:14]. Participation in donor agencies' evaluations has generally been limited to a few rapid appraisal techniques [MFAF 1997:70]. Donor evaluations continue to be designed largely with the donor agency's management and accountability considerations in mind. In addition, evaluations are conducted primarily by outside evaluators appointed by the donor organization.
Experiences with joint evaluations have been mixed [OECD/DAC 1997:54], and it appears that careful design is needed if joint evaluations are to give a satisfactory outcome for both the donor and the host country and at the same time contribute to capacity building with a lasting effect. In 1994 the World Bank [OECD/DAC 1997:74] conducted a pilot study where, instead of an outside expert team doing the assessment, the various sectors of the public administration themselves took part in the assessment, with the aid of a project leader. The World Bank concluded that organizing assessments and evaluations with the participation of a greater number of people was not without problems. Participatory assessments were therefore recommended not as a substitute for expert evaluations and assessments, but as a supplement.
The experience of the British governmental DA organization (ODA) [Stokke&Cass 1991:88] concurs that the recipient countries prefer to remain at arm's length. They tend to see donor-initiated evaluations as akin to an audit, and they prefer to retain their maneuverability rather than becoming too committed through participation. They also have different criteria and objectives in evaluation, an example being a project in South Korea aimed at providing village halls for public meetings. The halls were never in fact used for this purpose, so the donor decided that the project had failed. The communities themselves found the halls very useful for other purposes and regarded the project as a success. The general rule for the ODA [Stokke&Cass 1991:85], in order to ensure an element of impartiality, is to choose mixed teams of evaluators: usually one or two in-house staff members with one or two outside experts. General guidelines [Stokke&Cass 1991:95] state that evaluators should not show copies of evaluation reports they are preparing to officials in the developing countries, or to anyone outside the ODA who does not have a direct involvement in the evaluation. If a developing country has supplied personnel to work closely with the evaluation team, the
22 The Evaluation Report by T.H. Bakker & L.C. Gorringe
content of an early draft may be agreed with them, but not the final draft or final report. The Guideline also declares that, as far as possible, evaluation reports will be provided to the governments of the developing countries concerned and others who have a direct interest in the project.
SIDA’s evaluation teams generally do not include members from the recipient countries, and the initiative to evaluate is often taken without prior consultation with the recipient [SIDA 1997:41]. The official policy [SIDA 1997:44-45] states that “partners in recipient countries should be engaged in the evaluation process: the evaluations should be undertaken in a spirit of cooperation”. In reality, the partners were involved at the inception of the evaluation in only 40% of the cases, and they were seldom engaged in the evaluation process itself. In 50% of the cases the partners received a draft report to comment on. The idea that an evaluation should be made by an external, independent party has the following consequences [SIDA 1997:41]: the evaluation teams tend to be dominated by persons with specific technical competence, normally men; the evaluators are consultants with long-term relationships to SIDA, which might influence their impartiality; and recipient countries are represented in the evaluation teams in only 20% of the cases, and then not by persons representing a recipient, counterpart organization or the target group, but by local consultants.
Summary
There seems to be a significant gap between the ideas on evaluating DA projects described in policies and reports on the one hand, and the implementation “in the field” on the other. In many cases project achievements concerning relevance, impact, effectiveness, efficiency and sustainability are assessed inadequately or not at all. The available methods are used randomly and with great irregularity, and the recipients are hardly ever involved in the evaluation process.
4. Cost-benefit analysis
The recognized method for conducting an economic analysis of a social project is the cost-benefit analysis, or CBA. The fact that DA projects are often carried out in heavily distorted economies demands special modifications of the CBA. These are provided by the social CBA, which seeks to address the problem by giving additional weight to exchange rates, investments and distribution.
Economic analysis is used to answer questions about the relationship between society as a whole and a specific project [Carlsson 1994:12]. Four different approaches deal with this relationship:
1. political economic analysis (PEA)
2. macro-economic analysis (MEA)
3. cost-benefit analysis (CBA)
4. cost-effectiveness analysis (CEA)
The first step in a public project analysis is a micro-economic examination, in which the project’s financial viability is tested [Carlsson 1994:14-16]. The next step is to incorporate welfare changes that are not accounted for in ordinary investment appraisal methods. In order to evaluate the losses and gains in economic welfare, a CBA can be conducted. Its aim is to identify the impact on society if the project is undertaken; it is a tool to assess and improve the effectiveness of aid. The essential difference between CBA and the ordinary investment appraisal methods used by firms is the use of economic prices, also called shadow prices or accounting prices, rather than market prices.
The general orientation of a CBA is clear [Carlsson 1994:16]: it attempts to set out and evaluate the costs and benefits of an investment project. The typical procedure for conducting a CBA is similar irrespective of the technique used:
1. Identify the relevant costs and benefits of a project.
2. Quantify and value the identified items.
3. Compute selected indicators.
4. Test the results through sensitivity analyses and other indicators to arrive at a conclusion regarding the viability of the project.
CBA can be applied both in the planning stage of the project, commonly called appraisal, and after the project has been completed, usually referred to as evaluation [Carlsson 1994:12]. It assesses the viability of a project, and it can also be used to examine the comparative efficiency of different projects even in different economic sectors.
4.1 General principles of CBA
A cost-benefit analysis is a way of comparing the benefits and costs of a project or investment in order to guide decision-makers regarding its feasibility [Israelsson&Modée 1997:7]. The major steps of a CBA are:
1. The project definition
2. Identification of the impact of the project
3. Physical quantification of relevant impacts
4. Monetary valuation of relevant effects
5. Discounting of cost and benefit flows
6. Evaluation
7. Sensitivity analysis
Step 1: The project definition
The project is described within the outline of the project objectives [Israelsson&Modée 1997:7]. The current status of the area of implementation should be described, as well as the project financing. When defining a project it is important to state the premises under which the analysis is done [Mattsson 1988:13]. Once these have been defined, one has to adhere to them throughout the analysis; if the premises need to be altered, a new analysis has to be conducted. Finally, in order to enable comparison, a null alternative must be defined. The null alternative describes the situation if no action were taken.
Step 2: Identification of the impact of the project
The impacts of the project are identified, on-site as well as off-site [Mattsson 1988:14]. The impact on different groups of the population is analyzed, and a decision has to be made as to which impacts are the most relevant. It is important to assess correctly whether the impacts are costs or benefits or merely a reallocation of available resources.
Step 3: Physical quantification of relevant impacts
The physical amounts of the cost and benefit flows of the project are determined, and the points in time at which costs and benefits will occur are identified [Israelsson&Modée 1997:8].
Step 4: Monetary valuation of relevant effects
The monetary valuation consists of predicting the prices for value flows extending into the future [Israelsson&Modée 1997:8]. This involves calculating costs and benefits in real terms and, where such prices do not exist, estimating economic values.
Step 5: Discounting of cost and benefit flows
Discounting is performed in order to compare costs and benefits in real prices over time; it is often misinterpreted as a way of including inflation in the analysis [Israelsson&Modée 1997:9]. Inflation does not have to be considered in a CBA as long as relative prices are unchanged. The actual justifications for discounting are time preference and the marginal productivity of capital.
If there are justifications for discounting costs and benefits, there is still a need to find the appropriate discount rate. The most common discount rates used in evaluations are:
- the opportunity cost of capital (the market interest rate)
- the cost of borrowing money
- the social rate of time preference
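For reference, the mechanics of discounting can be stated in a single textbook formula (our addition for clarity, not drawn from the sources cited above). A cost or benefit $X_t$ occurring in year $t$ has the present value

$$PV(X_t) = \frac{X_t}{(1+r)^t},$$

where $r$ is the chosen discount rate: the further into the future $X_t$ occurs, the smaller its present value. This is the property on which the environmental critique in section 4.3 turns.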
Step 6: Evaluation
There are three different methods [Israelsson&Modée 1997:10] for evaluating the discounted benefits and costs:
- the internal rate of return
- the net present value
- the benefit-cost ratio
The Net Present Value (NPV) and the Internal Rate of Return (IRR) both analyze the difference between the present values of a project’s flows of costs and benefits over time [Carlsson 1994:16]. When using the NPV, the discount rate must be determined; for the IRR, the difference between the discounted benefits and costs is instead set to zero, and the internal rate of return is calculated from this equation. While the NPV is the difference in present value between the streams of benefits and costs of a project, the Benefit-Cost Ratio (B/C) is the ratio between them.
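In standard textbook notation (our addition for clarity, with $B_t$ and $C_t$ the benefit and cost flows in year $t$, $r$ the discount rate and $T$ the project lifetime), the three indicators are

$$NPV = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t}, \qquad \sum_{t=0}^{T} \frac{B_t - C_t}{(1+IRR)^t} = 0, \qquad B/C = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t},$$

where the middle equation implicitly defines the IRR as the discount rate at which the project breaks even.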
The following criteria can be used in deciding whether the project shall be executed or not [Mattsson 1988:73]:
- The NPV of the benefits exceeds the NPV of the costs.
- The ratio between the NPV of the benefits and the NPV of the costs is larger than 1.
- The annuity of the benefits exceeds the annuity of the costs.
- The internal rate of return is larger than the chosen discount rate.
For the above-mentioned criteria to be applicable, the following conditions have to be fulfilled:
- The projects are neither interdependent nor mutually exclusive.
- It is defined at which point in time the projects begin.
- There are no restrictions, such as limited funding, tied to the project.
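Purely as an illustration, the following sketch applies the criteria above to a hypothetical four-year project. All figures are invented for the example and the code is our own; nothing here is taken from the methods or projects cited in the text.

```python
# Hypothetical cost and benefit flows (years 0..3) and discount rate.
benefits = [0, 400, 500, 600]
costs    = [1000, 100, 100, 100]
r = 0.08

def pv(flow, rate):
    """Present value of a yearly flow at the given discount rate."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(flow))

npv = pv(benefits, r) - pv(costs, r)       # criterion: NPV > 0
bc_ratio = pv(benefits, r) / pv(costs, r)  # criterion: B/C > 1

# Internal rate of return by bisection: for this conventional flow
# profile the NPV falls as the rate rises, so bisection converges.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if pv(benefits, mid) - pv(costs, mid) > 0:
        lo = mid
    else:
        hi = mid
irr = (lo + hi) / 2                        # criterion: IRR > r

print(f"NPV = {npv:.1f}, B/C = {bc_ratio:.3f}, IRR = {irr:.1%}")
# -> NPV = 17.6, B/C = 1.014, IRR = 8.9%: all criteria are met.
```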
Step 7: Sensitivity analysis
Risk and uncertainty often characterize decisions regarding the environment [Israelsson&Modée 1997:10]. The basic aim of the sensitivity analysis is therefore to discover to which parameter the NPV outcome is most sensitive. Risk differs from uncertainty in that risk refers to a situation where the probabilities of the outcomes are known, whereas under uncertainty the probabilities are unknown.
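A minimal sketch of such an analysis, reusing the invented flows from the example above: the NPV is recomputed while one parameter, here the discount rate, is varied and everything else is held fixed.

```python
# Hypothetical flows, as before (years 0..3).
benefits = [0, 400, 500, 600]
costs    = [1000, 100, 100, 100]

def npv(rate):
    """Net present value of the net flow at the given discount rate."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

for rate in (0.04, 0.06, 0.08, 0.10, 0.12):
    print(f"r = {rate:.0%}: NPV = {npv(rate):7.1f}")
# The NPV turns negative between 8% and 10%, so the conclusion of the
# appraisal is highly sensitive to the choice of discount rate.
```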
4.2 Use of CBA in evaluating development assistance
A project in a developing country is often designed to meet several objectives, such as economic growth, social equality and economic integration, as well as educational, health, cultural and environmental improvements [Carlsson 1994:24]. Such multiple objectives sometimes necessitate more flexible approaches than the conventional CBA can provide. Although the purpose, procedure and indicators of a CBA are straightforward, the methodological approach may vary. Two of the available approaches are described below: Traditional Cost-Benefit Analysis (TCBA) and Social Cost-Benefit Analysis (SCBA).
One particular characteristic of an economic CBA, as opposed to a financial CBA, is that a number of adjustments are undertaken in order to express the inputs and outputs of the project according to their actual values to society, their opportunity costs [Carlsson 1994:17-18]. To express these values one needs both a benchmark model of the economy, with which the economic environment affecting the project can be compared, and a numeraire in which the values are denominated. The benchmark model traditionally used is one based on the neo-classical assumptions of perfect competition. The TCBA that dominated up to the end of the 1960s expressed costs and benefits in terms of the domestic currency.
The most common adjustments needed to change market prices into shadow prices include [Carlsson 1994:17-18]:
- exclusion of tariffs, subsidies and other transfers that do not impose any opportunity cost on society;
- adjustment of over-valued or rationed foreign exchange;
- estimation of the opportunity cost of under- or unemployed labor;
- assessment of excess profits from monopolies.
Where applicable, the theoretical prices under the assumption of perfect competition are taken as the norm for such valuation purposes [Carlsson 1994:17-18]. By estimating shadow prices, the distortions in the economy become evident. The quality of the estimates and the difficulty in reaching them depend on many factors:
1. Estimates are more easily made if government intervention works primarily through taxes and subsidies rather than through direct controls.
2. Another important factor is the stability of the accounting ratios (shadow prices over market prices). With unchanging tax or subsidy rates these would be roughly constant; with direct controls, accounting ratios could be expected to fluctuate considerably, and any usable estimate would need to refer to an average situation.
3. The availability of official statistics and empirical studies determines the reliability of the outcome.
4. High inflation affects relative prices and increases the difficulty of producing reliable, up-to-date shadow prices.
5. The size and structure of the economy affect the estimates: the larger and more complex the economy, the harder it is to estimate the shadow prices.
In practice it has proven very difficult and time-consuming to derive a complete set of shadow prices for a country.
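As a purely hypothetical illustration of such adjustments, the sketch below converts market prices into shadow prices using invented accounting ratios; none of the goods, prices or ratios come from the sources cited.

```python
# Market prices per unit of three project inputs (invented figures).
market_prices = {"cement": 120.0, "unskilled_labor": 10.0, "diesel": 8.0}

# Assumed accounting ratios (shadow price / market price):
accounting_ratios = {
    "cement": 1 / 1.20,       # a 20% import tariff is a transfer: strip it out
    "unskilled_labor": 0.60,  # the wage exceeds the opportunity cost of surplus labor
    "diesel": 1.35,           # a subsidy keeps the market price below economic cost
}

shadow_prices = {g: round(p * accounting_ratios[g], 2)
                 for g, p in market_prices.items()}
print(shadow_prices)
# -> {'cement': 100.0, 'unskilled_labor': 6.0, 'diesel': 10.8}
```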
Although TCBA has come a long way in assessing projects, it is far from perfect in a Third World setting [Carlsson 1994:19-20]. The major differences between SCBA and TCBA concern the treatment of (a) the exchange rate, (b) investments and (c) distribution.
(a) In TCBA, domestic prices are the basis of valuation. Imports and exports are translated into the domestic currency through a shadow exchange rate, estimated at the level where supply and demand for foreign currency are in equilibrium. In SCBA this conversion of traded and non-traded goods into a single denominator is improved.
(b) In TCBA, investment is presumed to have the same value as consumption. SCBA offers the opportunity to attribute a higher weight to investments, which are expected to yield future growth and increased consumption.
(c) In SCBA, weights according to different income levels are introduced. The weights can be obtained by studying the consumption patterns of different income groups.
The choice of approach for a specific assessment depends on a number of factors [Carlsson 1994:23]. Most important are the type and size of the project, the complexity and distortions of the economy, the data available and the skills of the analyst. The choice of method cannot be standardized, because there are so many considerations to weigh. The TCBA, being logical and straightforward, can be used by analysts without too much training. It is less dependent on the estimation of country-specific weights and conversion factors than the SCBA, but it is also less reliable in heavily distorted economies, where market prices are generally inadequate for project assessment; project assessment also often requires an analysis of distributional effects. TCBA is therefore preferred for projects that are not trade-oriented, or where distortion is limited; its moderate demands on data availability and analytical skills increase the reliability of the analysis. SCBA demands a trained analyst with sufficient time and data, and it is best suited to larger donor agencies or planning departments. It is complicated to use in complex, distorted economies, but it is also the best tool for analyzing such settings, and it is especially applicable to projects with explicit distribution objectives.
4.3 Summary and conclusions
A CBA requires value judgements to start off the analysis [Carlsson 1994:192]. It is normative, resting on ethical and philosophical values. An old controversy in welfare economics is whether it is possible to find a set of value judgements which carry nearly universal approval, yet are specific enough to lead to useful applications in real life. If it were possible to work with value judgements representing truly general welfare principles, CBA would come very close to being value-free. A CBA cannot stand above issues of conflict, power and politics in society; its value judgements clearly reflect them, being usually the preferences or bias of the analyst or the decision-makers, whether the host government or the donor agency. The results should represent scientific and objective conclusions, although in reality they are largely based on hunches, estimates and judgements. One should also be aware that collected data can never be completely flawless. If relevant and reliable data are easily accessible, CBA has the potential to be applied more often and with greater accuracy.
However, reality is far from the perfect conditions that have to be fulfilled for the CBA to be a powerful and accurate tool. SIDA has used CBA in evaluations of only a small number of projects [Berlin 01-12-1998]. The latest DA project where this method has been applied is the much-criticized project in Bai Bang, Vietnam, for which SIDA carried out a full-scale CBA. Consultants appointed by SIDA perform the actual analysis, and it is their responsibility to collect the data needed for the CBA and judge whether it is sufficient; if not, the consultants are free to make estimations to be used in the analysis. According to Anders Berlin [Berlin 1998], it makes little difference who performs the CBA: every CBA has a core of parameters that are always included.
The ambition of economic analysis is to present an assessment of reality so that decision-makers are able to choose among the options presented to them [Carlsson 1994:192]. A CBA results in a prescription as to whether alternative A or alternative B should be chosen, and the decision-maker is expected to follow its advice. However, a CBA does not necessarily have to be comparative; a decision can be based on a single NPV figure.
A full-scale CBA is usually too voluminous to be read and digested easily [Carlsson 1994:197]. It is based on assumptions regarding certain parameters which require experience to interpret. Programs are so complex, involve so many variables, and operate in such a constantly changing environment that the development and testing of specific and rigorously defined hypotheses may be neither feasible nor useful. According to Anders Berlin of SIDA [Berlin 01-12-1998], CBA is a method that can encapsulate all the parameters that are of importance to the project; the problem is to delimit what is of more or less importance to the outcome. Thus the problem lies in the experience and knowledge of the evaluator, both regarding economic training and knowledge of the project, and in the data available.
In our opinion there are three major issues that concern the execution of a CBA. The first refers to the program officers performing the CBA: how well trained are they in economics, and what is their experience of both the method used and DA projects? The second set of problems concerns the demands CBA places on the quality of the input data and on knowledge of certain key variables. Finally, an elaborate CBA is both time-consuming and expensive. These three issues may affect the credibility of the CBA, and it is therefore questionable whether the CBA is a relevant method for evaluating DA projects.
The development of valuation techniques that would facilitate the inclusion of environmental values might bring more relevance to many projects where economic analysis is now being disqualified because of its focus on market transactions [Carlsson 1994:24]. Environmental problems raise particularly difficult issues in terms of economic appraisal. A high rate of return can discourage project planners from taking into account likely environmental consequences, which may not become obvious until some years later [Israelsson&Modée 1997:13]. The further into the future costs and benefits occur, the lower their present value. Environmentalists assert that discounting costs and benefits encourages the depletion of natural resources, since high discount rates tend to encourage early, rather than later, depletion of exhaustible natural resources.
Inadequate and dubious data are often used to calculate realized rates of return, and in any case the rate of return may well be high or low as a result of good or bad luck rather than good or bad project management [Stokke&Cass 1991:86]. The rate of return does not consider many important social or administrative aspects, for example beneficiary streams or institution-building, and the realized rate of return is a relative concept: two percent may be creditable in Ghana but not in Singapore. There is also the question of the ethical implications of estimating the realized rate of return. The contribution of aid to the relief of poverty needs to be considered separately; to the extent that aid is a contribution to fighting poverty, one should not expect to see it linked to higher rates of economic growth.
CBA includes the costs and benefits over the whole life of the project [Carlsson 1994:15]. It does not give explicit attention to what is often called project sustainability, the ability to maintain an adequate level of net benefits after the investment phase is completed. A CBA that shows an appropriate return does, however, also imply project sustainability: if environmental and social values have been properly included in the analysis, it can indicate whether the project is sustainable.
One of the main critiques of CBA is that everything in the analysis is reduced to a single unit, namely money. The CBA is a way of comparing what one must give up in order to achieve something else. This does not have to be measured in monetary terms; it might also be measured in terms of lives or leisure hours. CBA represents a useful method for comparing different alternatives. The fact that money is often used as the measure has a mostly practical purpose; it does not mean that money is more important than anything else. The heart of the matter, though, is whether or not the method is useful for the purpose of designing projects in a Third World context. Apart from the fact that these countries often have heavily distorted economies, the analysis rests on the assumption that a society’s development lends itself to planning. As mentioned in 3.1.2, the reality of DA projects shows that most projects deviate from their original objectives.
5. Sustainable Development Records
The emphasis that has been put on sustainable development within DA projects during the last decade has drawn our attention to sustainable development records, or SDR. The method is an operating tool for directing an activity towards increasing sustainability. Although, as far as we know, it has never been used for evaluations, the first models to aid the monitoring of DA projects have been developed.
5.1 Introduction to Sustainable Development Records
For the last 200 years economists have recognized that there is a gap between the “demand value” and the “supply value” of an activity. In order to fill this gap, Sustainable Development Records (SDR) has developed the notion of “responsibility value” [Bergström 1997:16]. Value can be determined from three different perspectives:
- Demand perspective: a business transaction between a seller and a buyer; the price becomes an expression of value.
- Supply perspective: value is the effort put into creating the product or service.
- Responsibility perspective, developed with SDR: “that which would be perfect if everyone would recognize what’s best for him/her”. It expresses the values that guide the analysis of business strategies, policies and the like.
The theory evolved during the late 1980s, when the need for companies to adjust to the limits set by the environment became ever stronger [Bergström 1997:1]. Whereas traditional economic analyses are limited to managing monetary measures, SDR is a theory of economizing, no matter which resources one needs to economize with. It claims to be a method and an economic logic that can be applied in any context where activities have to be economized, since information is used in the form in which it is available [Bergström 1997:11]. Since results are measured as ratios, there is no need to “translate” different data into monetary values before they can be measured.
The starting-point of SDR was sustainable development. The Brundtland report [Israelsson&Modée 1997:1] pointed out that today’s problems should be solved in such a manner that future generations have equal opportunities to solve their own problems and the possibility to be at least as well off as the present generation.
SDR’s main objective is to determine how to accomplish a business idea with the resources available. A central element in SDR [Bergström 1997:2] is the use of the balanced scorecard, which shows the complexity of a business in a standardized and accessible way. A scorecard consists of four perspectives:
1. the financial perspective
2. the customer perspective
3. the process perspective
4. the developmental perspective
SDR can be applied in the following two areas [Bergström 1997:2]:
1. Environmental issues, which are often addressed in isolation from the rest of the business; the SDR methodology is a way to bridge the gap between “business” and “environment”.
2. Development of advanced economic systems for internal accounting and reporting, in order to support management and control.
5.2 Description of Sustainable Development Records’ methodology
All activity takes place within a cycle; nothing happens in isolation [Bergström 1997:5-8]. Society exists and functions only as long as it has a connection to nature. There is a constant exchange (flow) of resources between the resource base and an activity: materials and energy from the resource base are converted into products and services, which in their turn generate waste products, customer satisfaction and changes to the environment. Economizing an activity focuses on fully utilizing the space between the use of resources (the influence an activity has on the environment) and the quality of production. In order to conduct an SDR analysis, it has to be made clear what the cycle includes, and it has to be determined how value indicators will be tied to information about the cycle. The methodology of SDR is based on two economizing principles:
- economize everything, not only money;
- economize during all stages, from the use of resources to the fulfilment of the goal.
SDR’s analysis of how well the activity economizes with its resources focuses on [Bergström 1997:12]:
1. Effectiveness: how well does the activity achieve its goals?
2. Efficiency: how well are the resources used being taken advantage of?
3. Sustainability: can this way of using resources continue for a long time?
The level of sustainability of an activity has many dimensions, and many different sorts of measurements are involved [Bergström 1997:10-13]. There is not one measure of effectiveness; there are many. Many concepts are composite: they can only be understood in relation to other concepts (“big” in relation to “little”). When the relationship is clear we speak of theoretical concepts; when the relationship is unclear we speak of diffuse concepts. Every “result follow-up system” depends on these composite concepts. Operational concepts (one measure for everything) will only lead to compromises and emergency solutions.
The principle of composite concepts calls for many different measures [Bergström 1997:14]. The need for an overview puts special demands on the accounting system. One has to realize that it is impossible to have “objective” measures that can survive every form of objection. It is also a demanding task to create a standard that can be used for comparison between different parts of a company or between companies.
SDR is a model for multidimensional performance measurement, based on key indicator accounting [Olin 1998:31]. Efficiency, effectiveness and husbandry are the three economic measures, or key indicator types, for the activity’s sustainability. The key indicators illustrate three important aspects of performance:
a) the effects or impacts on stakeholders;
b) the efficiency of the operations or the organized effort;
c) the quality and sustainability of the resource base.
These aspects can be applied over different target areas such as social, environmental, financial and democratic achievements.
Every key indicator is a ratio with a numerator and a denominator. The numerator represents the desirable, while the denominator indicates what one wishes to economize with. This means that a higher ratio is always more desirable than a lower one, and, when measuring over longer periods of time, an activity is becoming more sustainable as long as the ratios increase. An example of a key indicator could be the amount of paper being recycled: the numerator could be the amount of paper returned for recycling, and the denominator the total amount of paper used. The key indicator then shows how sustainable the activity is.
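A minimal sketch of the recycling indicator just described, tracked over three periods; the quantities are invented for the example.

```python
# Numerator: paper returned for recycling; denominator: total paper used.
recycled   = [30, 42, 55]
total_used = [100, 105, 108]

ratios = [n / d for n, d in zip(recycled, total_used)]
print([round(x, 2) for x in ratios])  # -> [0.3, 0.4, 0.51]

# Rising ratios mean the activity economizes better period by period.
print(all(a < b for a, b in zip(ratios, ratios[1:])))  # -> True
```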
There are three ground rules that make systematic use of SDR reports possible [Bergström 1997:35]:
1. Every key indicator should relate to a well thought-out model of the activity.
2. Every key indicator is an economizing ratio that shows how much of the desirable has been achieved per unit of a resource.
3. Every key indicator should be constructed in such a way that a higher ratio always represents a higher rate of economizing.
When deciding which key indicators to choose, SDR acts on the following basic principles [Bergström 1997:37-38]:
- Ratio principle: a key indicator consists of a numerator and a denominator.
- Non-violence principle: shows how few resources are necessary to achieve the desirable.
- Theory principle: every key indicator should be based on the SDR basic model.
- “Close-to-target hitting” principle: inexact and useful is better than exact and inapplicable. The system does not claim to be exact, but clusters of key indicators can give a fairly accurate picture of the direction in which the activity is developing.
Periodical reports give a detailed indication of the sustainability of the activity.
The SDR accounting system is created through four steps [Bergström 1997:39]:
1. Target areas: in every activity, different target areas are to be identified. They consist of different perspectives on reality and are different ways of distinguishing clusters. Examples of target areas are responsibility areas, objective areas and technological areas.
2. Clusters: clusters are constructions of key indicators that together shed light on a specific interest or target area. Examples of clusters are quality objectives, environmental objectives and equality objectives.
3. Key indicators: a key indicator consists of a ratio between two accounts. All the key indicators within a cluster together give an indication of how the activity is functioning within a target area. An example is the number of literate women divided by the total number of women involved in the activity.
4. Accounts: different accounts register different kinds of data that give an indication of a relevant characteristic of the activity. An example is the number of literate women involved in the activity.
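One possible data layout for these four steps (our own sketch, not taken from Bergström) is shown below, using the literate-women example from the text: target areas contain clusters, clusters contain key indicators, and each indicator is a ratio between two accounts.

```python
# Step 4: accounts register the raw data.
accounts = {
    "literate_women": 120,
    "women_in_activity": 200,
}

# Steps 1-3: target area -> cluster -> key indicators, where each
# indicator names its numerator and denominator accounts.
sdr_model = {
    "education": {
        "equality objectives": {
            "female literacy": ("literate_women", "women_in_activity"),
        },
    },
}

for area, clusters in sdr_model.items():
    for cluster, indicators in clusters.items():
        for name, (num, den) in indicators.items():
            ratio = accounts[num] / accounts[den]
            print(f"{area} / {cluster} / {name}: {ratio:.2f}")
# -> education / equality objectives / female literacy: 0.60
```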
5.3 Usefulness of SDR in evaluating development assistance
Sustainable development has become one of the key issues in DA and should be an important part of every project. As a measure of development, economic growth in its narrowest definition is both insufficient and out of date [SIDA 1996a:2]. Nor is it possible to discuss development first, let alone economic growth, and leave the environment until later. Development and environmental protection must be regarded as two sides of the same coin.
SIDA’s development cooperation through sustainable development [SIDA 1996a:3] aims to assist recipient countries in the identification and implementation of activities which protect and conserve the natural resources and environment of the country.
An attempt to define sustainable development is made by Pearce et al. [Israelsson&Modée 1997:2], who maintain that sustainable development is a vector of different social aims: increases in real income per capita; improvement in health and nutritional status; educational achievement; access to resources; a “fairer” distribution of income; and increases in basic freedoms. These aims can suitably be translated into target areas with clusters of key indicators that give an indication of how a project is developing.
So far, as far as we know, SDR has not been used in the evaluation of DA projects. Bo Olin [Olin 27-11-1998] has developed a number of SDR models for DA projects and finds the method very useful for the monitoring and follow-up of projects. The following example [Olin 1998:1-3] makes the use of SDR for DA projects more concrete. The LFA matrix that was developed for the project forms the foundation of the SDR model.
Target area: supportive groups functioning.
- Cluster: group forming. Example key indicators: groups forming; gender balance (female enrolled members/enrolled members); etc.
- Cluster: awareness training. Example key indicator: awareness training efficiency (members trained/cost of training).
- Cluster: skills training. Example key indicator: record keeping (groups maintaining records/groups formed).
If SDR is to be useful in evaluations, an accounting model needs to be developed for the project from the first implementation phases onwards. If evaluations are regarded as something that exists apart from the project, something that takes place afterwards, SDR cannot be of much use. If, on the other hand, evaluations become an obvious part of the monitoring process, SDR seems extremely useful: it gives a clear indication of the progress made over time. It is necessary that the evaluation objectives be defined as early as the planning phase of the project, in order to decide on the relevant key indicators and accounts. Also, since data is used in the form in which it is presented, reporting and accounting demand only basic skills, which certainly facilitates the participation of all stakeholders and makes the evaluation process more transparent. For target areas, clusters and key indicators to be relevant, it seems beyond question that they have to be decided upon through cooperation between all stakeholders.
The following sustainability factors, which have to be addressed in evaluations [MFAF 1997:66-68], can unquestionably be translated into target areas, clusters and key indicators:
- Policy environment: the project must be in line with the partner country’s political, economic, environmental, social and cultural environment.
- Economic and financial feasibility: the resources employed must be used efficiently and effectively, and the benefits should be sustained once external support has been withdrawn.
- Institutional capacity: a country should be enabled to utilize and manage its resources efficiently and effectively.
- Socio-cultural aspects: these affect the success and sustainability of a development project.
- Participation and ownership: various stakeholders influence and share control and risk over the development initiatives, decisions and resources which affect their lives.
- Gender: the needs and roles of both women and men are fully recognized in the planning and implementation of projects.
- Environment: planners, decision-makers, implementers and evaluators understand impacts and operate accordingly.
- Appropriate technology: must be socially and politically acceptable, affordable, “localized” (compatible with the available human resources), replicable, and ecologically sustainable.
Summary
SDR certainly seems to be a potentially useful evaluation tool for DA projects. The method is very suitable for measuring progress in different target areas such as social, environmental, financial and democratic achievements. Since DA projects largely aim to make improvements within these areas, it is worthwhile to explore the possibilities of applying SDR throughout the planning, implementation and evaluation phases of DA projects.
Conclusions
Ideas about development aid have shifted from donor implementation to people’s participation in all phases: planning, implementation and evaluation. Although most donor organizations acknowledge the importance of participation in evaluation and state this in their policies, only a few organizations practice what they preach. SIDA’s evaluations, for example, turn out to be purely donor-oriented affairs: they are initiated by the donor organization, usually implemented by a consultant with long-term relations to the donor organization, and mostly carry recommendations for the donor organization only. We consider it crucial that all participating organizations be involved in all stages of the evaluation process. If DA truly has become development cooperation, it is imperative that all concerned parties agree on all the objectives to be achieved, both those of the project and those of the evaluation, since we are of the opinion that the evaluation should be a natural and self-evident part of the project itself.
All parties should use the same methodology. Although it is important for the donor to have insight into the activities of the recipient, it is also vital that the recipient has insight into the goals and objectives of the donor, since goals and objectives have to be compatible with the activities. The diversity and randomness of evaluation approaches are also confusing for recipient countries.
In order to be able to rate projects (i.e. compare them with each other), we consider it indispensable to use the same evaluation methodology throughout all projects. We are of the opinion that a centralized evaluation function (e.g. SIDA’s Department for Evaluation and Internal Audit) creates possibilities for the development of an overall evaluation methodology, stated in concrete terms, to be used for similar or maybe even all projects. Currently evaluations have a clear ad hoc character, with evaluators using arbitrarily chosen methods.
Evaluations are seen as activities that stand apart from the project. We find this highly regrettable. Evaluations are an obvious part of the development assistance project and should be conducted throughout the whole process. Monitoring and evaluating should be interwoven, with preliminary evaluations taking place throughout the entire life cycle of the projects. Instead, goals and methods for evaluation are chosen in a haphazard fashion, depending on the preferences of the evaluators. In SIDA’s case, evaluators often completely ignore the organization’s official evaluation policy. Also, objectives and activities stated in the LFA do not reappear in the evaluation, thus missing the opportunity of a natural connection to the data collected in the planning and implementation phases of the project. Furthermore, this prevents evaluations from becoming a natural part of the development project. If evaluation goals and methods were decided upon as early as the planning phase of the project, it would be possible to gather the necessary information during the implementation phase. This would make evaluations easier to execute and increase their credibility. As a consequence, the methods used for monitoring and evaluating projects should be uniform. In fact, evaluations should be conducted throughout the whole implementation phase, thus aiding monitoring and lending flexibility to the project. When the goals and objectives are decided upon by all stakeholders, the need for an “objective” outside evaluator disappears. Instead, the influence and input of all the different participants will guarantee that all the different parts of “the truth” are represented: evaluation becomes, indeed, a negotiating process. In our opinion SDR would fit well into this framework. Target areas, clusters and key indicators can be decided upon through cooperation and negotiation between all stakeholders, which would considerably increase the chances of all parties going in the same direction.
The fact that evaluators often base their conclusions on their personal impressions, plus the fact that evaluations often lack clear reference to data on which conclusions are based, makes it hard to decide whether the report is valid or not. It might be that conclusions are based on facts, but, since data is missing, they might as well be mere personal reflections. In our opinion, this causes evaluations to lose their credibility. The fact that, for example, SIDA’s program officers perceive the evaluations as being reliable, balanced and objective doesn’t mean much, since, in most cases, they were the only ones interviewed by the evaluators. It is therefore quite understandable that the results of the evaluation reflect the ideas of the program officers, and most people consider their own ideas to be reliable, balanced and objective.
Evaluations have two functions: to learn from and to report results. Experience shows that the learning factor is minimal. We can identify the following possible explanations for this:
(a) Planners and implementers are not interested in the results of evaluations; evaluations are seen as an end in themselves, not as part of a project but as a way to satisfy the public and the decision-makers.
(b) The connection between the project and the evaluation is so obscure that the results of the evaluation lack relevance for the implementers.
(c) The organization is incapable of communicating the results of the evaluation to the planners and implementers.
(d) Planners and implementers dismiss evaluations as the personal opinions of the evaluators, lacking relevance for the organization as a whole.
(e) Or, as in SIDA’s case, since evaluations are based merely on the input of the program officers, they are not felt to contribute new knowledge to the organization.
The dual function of evaluations results in an inherent discrepancy. In order to learn from past experiences, a critical evaluation is needed. At the same time this critical evaluation might result in a diminishing willingness by, for example, the government, to allocate funds to the donor organization. There is a risk that the learning factor is sacrificed in favor of guaranteed future funding and, since the OECD has come to the conclusion that the learning factor of evaluations is minimal or even absent, this seems already to be the case. The two different goals do not have to be conflicting per se. If evaluations are used to improve the quality of aid administration and implementation, this should be enough to satisfy “others” and convince them of the fact that aid “works better and better all the time”.
We find that determining the rate of return and discounting costs and benefits present a major problem when dealing with DA projects. These projects do not focus on yield in the same sense as economic investments do in the welfare state. Instead, the aim is to help the recipient to develop and to distribute the resources among the total population. It seems cynical to put a price tag on how much the developed world is willing to pay in order to help the poor. A slight increase in income might have an immense impact when one is on the edge of starvation: in an economic estimate this increase might not amount to a significant benefit, while in real life it might be the difference between life and death.
The principle of the rate of return is based on an ideal economy, and the main problem is that the economies in question are often far from this ideal; one could even speak of distorted economies. Deciding which rate of return to apply therefore becomes tricky. Many factors make it very complicated, among them heavily subsidized sectors in the economy and an unrealistic rate of exchange, possibly even overvalued because of large cash inflows from donor countries. In addition, the principle of discounting tends to give misleading results, since it values costs and benefits that lie in the near future more highly than those further away in time. Costs and benefits concerning sustainability, regarding both the environment and the project life span, often lie further away in the future. If the time horizon is far enough away, future costs and benefits ultimately disappear completely, while these costs, when and if they occur, can be considerable, especially those concerning the environment, and they might even jeopardize the entire project. When looking only at short-term effects, sustainability can never be measured.
Another difficulty with CBA is determining which parameters are significant for the analysis and how to appraise them. In principle CBA intends to include all costs and benefits that can be derived from the project. In practice this is not possible, because of the limited resources available for performing the analysis, so delimitation is necessary. The parameters chosen to be part of the CBA dictate the final result of the assessment. In that respect, it becomes crucial who decides what should be included and what should be excluded. We anticipate that donors and recipients differ in perspective, which could lead to different results in the CBA. Both perspectives are important if we are to arrive at as objective a view of reality as possible. The fact that evaluations are performed mainly by donor-oriented evaluators leads us to believe that an important part of the costs and benefits derived from the projects are not included in the CBA. An additional problem is that many parameters are hard to appraise in monetary terms, which leaves room for arbitrariness.
CBA also presents the problem of data collection. Extensive statistical information is needed, which usually is not available in the countries in which development assistance takes place, while the information that is available is not always reliable. The fact that these records often lack credibility reduces the discount rate to a mere estimate. In SIDA’s case the evaluators make a personal assessment when the statistical information is lacking. This complicates the analysis to such an extent that we seriously question its suitability for the appraisal of social projects in developing countries.
Although evaluations have a high potential as learning tools, this potential is not being utilized. Evaluations have become an end in themselves and are basically costly reports to satisfy politicians and other decision-makers. This diminishes the function of the evaluation efforts and results in the report being little more than a marketing tool. If the improvement of evaluations is to make any sense at all, the learning capacity of donor organizations has to be developed. As long as there is no feedback to the planners and implementers, further investment in the improvement of evaluations seems pointless.
References
Bergström, S., 1997. Hur går det? Introduktion till naturekonomisk företagsanalys. Stockholm.
Berlin, A., 1998. Economist at SIDA’s Department for Evaluation and Internal Audit. Interview, 1 December 1998.
Carlsson, J. et al., 1994. The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid. London.
Croll, E. and D. Parkin, 1992. Bush Base: Forest Farm. Culture, Environment and Development. London.
Dickenson, J. et al., 1996. Geography of the Third World. London.
Israelsson, T. & K. Modée, 1997. Hidrovía Paraguay-Paraná – a micro perspective. Lund.
Mattsson, B., 1988. Cost-benefit kalkyler. Göteborg.
Ministry for Foreign Affairs Finland (MFAF), 1997. Department for International Development Cooperation. Guidelines for Programme Design, Monitoring and Evaluation. Helsinki.
Netherlands Development Cooperation (NDC), 1995. Evaluation and Monitoring - Summary Evaluation Report. Den Haag.
OECD/DAC, 1997. Evaluation of Programs Promoting Participatory Development and Good Governance - Synthesis Report. Paris.
Olin, B., 1998. SDR Model for PEP Activity, Output and Effect/Impact Monitoring. Stockholm.
Olin, B., 1998. Economist at the Stockholm House of Sustainable Economy. Interview, 27 November 1998.
SIDA, 1996a. Department for Natural Resources and the Environment. Sida’s Policy on Sustainable Development. Stockholm.
SIDA, 1996b. Methods and Institutional Development Unit. Guidelines for the Application of LFA in Project Cycle Management. Stockholm.
SIDA, 1997. Studies in Evaluation 97/1. Using the Evaluation Tool - a Survey of Conventional Wisdom and Common Practice at SIDA. Stockholm.
Stokke, O. & F. Cass (eds.), 1991. Evaluating Development Assistance: Policies and Performance. London.