Smart tools for evaluating the performance of information products and services

Proceedings of a KIT/CTA/IICD/LEAP-IMPACT Technical Consultation, Amsterdam, The Netherlands, November 2002

CTA Working Document Number 8029

CTA’s working document series consists of material that, in view of its immediate relevance and practical utility to specific readerships, the Centre wishes to make available without the delays inherent in the formal publication process. These working documents have not yet undergone technical editing by CTA and should be cited accordingly. Comments on matters of substance are welcome, and should be addressed directly to CTA.

Published and printed by: Technical Centre for Agricultural and Rural Cooperation (CTA)

Compiled by: Karen Batjes-Sinclair

Contents

Summary

Opening speeches
Modupe Akande, Research Professor, Obafemi Awolowo University
Hans van Hartevelt, Head, KIT-ILS
Sarah Cummings, KIT

Keynote addresses
Jan Donner, President, KIT
Carl B. Greenidge, Director, CTA

Part 1 Toolkit papers
1.1 An evaluation framework for evaluating the performance and impact of information projects, products and services (Ibrahim Khadar, CTA)
1.2 Indicators: between hard measures and soft judgements (Andreas Springer-Heinze, GTZ)
1.3 Production, validation, strategy and status of the toolkit (Karen Batjes, Technical editor, and Sarah Cummings, KIT)

Part 2 Working Group discussions
2.1 Discussions on the toolkit

Part 3 CTA Resource book
The proposed CTA book on: Assessing the impact of information on development (Kay Sayce, Publishing consultant)

Part 4 Future of LEAP IMPACT: A review of allied initiatives
4.1 LEAP: Lessons learned, recommendations for next time (Allison Hewlitt, Bellanet)
4.2 Support for online collaboration and communities at Bellanet (Katherine Morrow, Bellanet International Secretariat)
4.3 An introduction to LEAP IMPACT: A collaborative approach to developing evaluation practice in the information sector (Sarah Cummings, KIT)
4.4 The FAO resource kit (Byron Mook, ISNAR)

Part 5 Future direction of the toolkit, resource book and LEAP IMPACT
5.1 Future direction of the toolkit, resource book and LEAP IMPACT

Annexes
After action review (AAR)
Workshop programme
List of participants
Acronyms



Summary

Background

Information practitioners increasingly face the challenge of having to manage and evaluate the projects in which they are involved without having the necessary training and tools to do so. The complexity of the issues involved makes the task even more daunting. Recognising this, participants in the CTA/LEAP IMPACT technical consultation on Assessing the performance and impact of agricultural information products and services, held in Bonn (October 2001), recommended that further work should be done in this field to develop 'smart practices' which would facilitate the work of practitioners.

Although the technical consultation in Bonn provided excellent material for further developing practical and cost-effective methods for evaluating agricultural information services, the focus tended to be on measuring the 'benefits' or impacts that projects have brought to farmers, agricultural marketing agencies and scientists, and on defining the roles of and relationships between outputs, outcomes and impact. Further, a commitment was made to publish a resource book on evaluation and impact assessment, work on which is currently underway. Less attention, however, was paid to the management and performance aspects of products and services, while issues of relevance, efficiency, effectiveness and sustainability were only touched on.

Consequently, in November 2002, KIT, CTA and IICD organised a workshop on Smart tools for evaluating the performance of information products and services at KIT in Amsterdam, the Netherlands. Other LEAP IMPACT partners were also involved in the initiative, many of whom represented organisations in African, Caribbean and Pacific (ACP) states, Canada and Europe. The workshop aimed to facilitate the development of a toolkit for managers of information projects as well as to provide a complement to the publication 'Evaluating information: A letter to the project manager' (Mook, 2001 – CTA Working Document 8025). In addition, it aimed to encourage practitioners to share information and experiences, leading to improvements in the tools and methods used to assess information products and services. The expected outputs were to:

• Clarify key concepts and terminologies used in the management and performance of agricultural information products and services.
• Validate the draft tools that will support information managers in their efforts to manage and evaluate the performance of specific types of information activities.
• Develop a strategy for future development and refinement of the tools and publication of the toolkit after the workshop.
• Select teams to develop and refine the additional tools and to draw up a timetable for development of the drafts.
• Form a small Editorial Committee to further validate and refine the toolkit.


• Promote the LEAP IMPACT approach to a wider audience of information and development practitioners in the Public Seminar.
• Draw conclusions about the future approach of the LEAP IMPACT community of practice in the Open Meeting.
• Provide a forum for discussion of CTA’s impact resource book.

The four-day workshop had three main elements:

• The first two days were taken up discussing the toolkit, the direction it should take and amending/writing the tools.
• On the third day, there was an Open Session where specially invited guests spoke on issues concerning LEAP IMPACT. This allowed the participants to discuss the issues facing the community. The afternoon session culminated in an Open Meeting, which offered the three organisations (KIT, CTA, IICD) the opportunity to come together to discuss and debate general issues on the evaluation of information with NGOs and the public.
• On the fourth day, a group of experts (drawn mostly from the participants) came together to work on the CTA resource book, while the participants working on the toolkit developed a strategy for its further development. The two groups discussed future follow-up activities.

The first day of the workshop was opened by Professor Modupe Akande (Obafemi Awolowo University, Nigeria), Mr Hans van Hartevelt (Head, KIT-ILS) and Ms Sarah Cummings (KIT). Professor Akande thanked KIT, CTA and IICD on behalf of the participants for all the efforts made to ensure that the meeting was a successful one. Mr van Hartevelt gave an overview of KIT as a knowledge institute specialising in international and multicultural cooperation. He outlined the main aims of KIT and the role of KIT-ILS in particular, in terms of capacity building for information services. Mr van Hartevelt spoke of the new structure of KIT-ILS and the new way in which it is being funded by the Dutch Ministry of Foreign Affairs, which has placed increasing emphasis on institutional cooperation in the South. Ms Cummings then gave an introduction to the workshop, placing it within the context of the LEAP IMPACT community of practice.

At the Open Session, there were three keynote speakers. Dr Jan Donner (President, KIT) spoke of the forces that have shaped KIT into what it is today. He elaborated on the need for organisations to be self-evaluating and self-learning and argued that monitoring and evaluation should take place regularly so that organisations remain sensitive to the needs of their clients. Access to proper evaluation tools validated by peers was therefore central to the process. Mr Carl Greenidge (Director, CTA) outlined the primary role of CTA in terms of facilitating exchanges among ACP states with a view to improving the availability of and access to appropriate information, as well as improving their information and communication capacity. He said that CTA was required to be proactive under the Cotonou Agreement and had been directed to pursue a number of new initiatives. As a result, CTA has been identifying niches of competence and exploiting these capacities. As part of the


new initiatives, management has also been seeking to raise the awareness of critical policy issues and to reach a wider audience by means of a more extensive utilisation of electronic information services in conjunction with conventional means. Ms Lisette Gast (on behalf of the director of IICD) indicated that IICD was conscious of the importance of evaluation and the challenges posed in carrying out performance evaluations. As a consequence, she stressed the need for networks to continue to share and build knowledge in this area.

The subsequent panel discussion had as its central theme – Is the evaluation of information in your organisation really smart? The panel members were Mr Greenidge, Dr Paul Engel (Director, ECDPM), Mr Michael Polman (Director, Antenna Foundation), Mrs Christine Kalume (Healthlink Worldwide) and Dr Adiel Mbabu (ASARECA). The ideas presented were diverse and thought provoking, underscoring the complexity of issues involved in evaluating information.

Papers presented on the toolkit

Dr Ibrahim Khadar presented a paper on 'An evaluation framework for evaluating the performance and impact of information projects, products and services'. The paper provided a comprehensive multi-dimensional road map – the Performance-impact evaluation of information programmes (PIP) model, now commonly referred to as the Evaluation road map for information (ERMi) model – with a vertical dimension reflecting a transformational path from inputs to outputs, utilisation, and direct and indirect outcomes, and a horizontal dimension reflecting the scope and the corresponding methodological approaches for performance and impact assessment. The framework was well received by the participants, particularly because it gave insights into the distinction between performance and impact evaluation – concepts which are often confused and sometimes used interchangeably. Dr Andreas Springer-Heinze’s presentation on 'Indicators: between hard measures and soft judgements' echoed the impact chain, highlighting both performance and impact evaluation. The presentation elaborated on appropriate variables and the corresponding indicators along the impact chain.

Mrs Karen Batjes looked at the 'Production, validation, strategy and status of the toolkit'. Her presentation covered guidelines on how to prepare the tools as well as a classification system. This led to a lively discussion on the structure of the toolkit and, in particular, the division made between preparatory work and process. Following the discussions, the participants accepted the classification, with changes to the positioning of some of the tools within the categories. Ms Margo Kooijman and Mr John Belt gave insights from past experiences with the development and use of toolkits by KIT. Several tips were also given on how to approach the writing of the tools.

CTA Resource book

In 2002, CTA initiated steps to produce a book on impact evaluation. The main idea is that the book will not set out to provide solutions, but rather to pose questions about approaches, concepts, issues and needs. During the meeting, Ms Kay Sayce (Publishing consultant) presented the background to the book and the way in which she, with the assistance of CTA, has gone about gathering information from various experts in different


parts of the world. Currently, there are three groups of experts through which consultation on the book is carried out – the Steering Committee, representing different regions; the Expert Group; and the ACP Partnerships Group (consisting of people working in national institutions and field experts operating at the grassroots level). The book will have three main components – 'Context', 'Controversy' and 'Concourse' – as well as an annex. The first draft of the book is expected in 2003, and the final version in late 2003 to 2004.

LEAP IMPACT collaboration

The presentation on 'LEAP: Lessons learned, recommendations for next time' by Ms Allison Hewlitt (Bellanet) traced Bellanet's experience with LEAP over the years, showing how they went about developing the platform. One of the main recommendations put forward was the need to understand the community being served in terms of who they are, their needs and their wants. Ms Katherine Morrow (Bellanet International Secretariat) looked at 'Support for online collaboration and communities at Bellanet'. She examined Bellanet's involvement in facilitating the international development community in using ICTs (mainly the Internet), using tools such as Dgroups and Postnuke. Key lessons drawn from this also underscored the importance of people (not the portal) being central to the process, and emphasised the need to invest time and money in facilitation, online skills and community building. The paper on 'An introduction to LEAP IMPACT: A collaborative approach to developing evaluation practice in the information sector' by Ms Sarah Cummings demonstrated how the LEAP IMPACT platform has facilitated dialogue in such a way as to improve evaluation practice through the exchange of experiences and approaches among its members, and argued that the continued collaboration and commitment of LEAP IMPACT members is necessary to further the work in this area.

Dr Byron Mook (ISNAR) introduced the FAO Resource Kit to the workshop as a series of training modules on selected topics in information management. An appeal was then made to the participants and the wider LEAP IMPACT community for contributions to the 'Impact assessment' module. Following on from this, the presentation on 'Water Information Summit and evaluation' by Ms Ingeborg Krukkert (IRC International Water and Sanitation Centre) called for the LEAP IMPACT community to prepare papers/posters for the upcoming 'Sixth Water Information Summit' in September 2003, in the Netherlands.

Future direction of the toolkit, resource book and LEAP IMPACT

Toolkit

• The focus of the toolkit is on performance evaluation and not on impact assessment. It is aimed at performance evaluation for self-assessment, motivated by self-learning.
• Structure: the toolkit will contain six modules – glossary, introduction to the toolkit, preparatory tools, process tools, activity tools and case studies.


• Immediate follow-up activities after the workshop:

- Mrs Batjes is to provide an update of the guidelines for writing the tools as soon as possible after the workshop;

- Ms Gast will determine how process tools such as data analysis, data collection, face-to-face meetings, focus groups, interviews, questionnaire design, stories/case studies, and other methods fit together and will change the sequencing accordingly.

• Role of the Editorial Committee: When the tools are completed, they will need to be standardised and amended by the Editorial Committee. The Editorial Committee will also take the lead in production, testing and publication.
• Validation of the toolkit: The toolkit will be promoted and disseminated to various target groups for testing by LEAP IMPACT members. Further, CTA, KIT and IICD will actively approach partners from both North and South to test the tools.
• Publication and dissemination of the toolkit: First, the toolkit should be made available as a working document to all the participants. It should also be posted on the LEAP IMPACT workspace. It should later be distributed as hard copy and on CD-ROM. The expected time schedule is:

- First draft of the tools by 31 January 2003

- The tools should be finalised by the Editorial Committee by 30 April. On 1 May 2003, they will go to testing

- Testing should be completed by 1 July 2003

- Editorial Committee will amend the tools based on feedback received from testing

- Publication on CD-ROM/Web 2003

- Printing of toolkit by December 2003

- Hardcopies of the toolkit available in 2004

• Future tool development – coverage of the tools should be expanded to include radio, multimedia video and CD-ROMs.
• Future meetings: there should be other workshops or regional meetings (if possible) to further the work currently being carried out on the toolkit.
• It is envisaged that after the toolkit has been tried and tested, an attempt will be made to address the question of the impact of information products and services. It was put forward that this work should be done within the context of another workshop.

CTA Resource book

There was much debate about book content and its future direction. However, there was general agreement that the book will have three main sections and annexes:

• Context – the history of impact assessment trends and approaches.
• Controversy – this will revolve around impact stories and studies, and analyses of these by experts.


• Concourse – this will bring together the strands, commenting on where current thinking seems to stand, linking back to the contextual points and providing thoughts on the way forward.
• Annexes – this is expected to include a bibliography, CTA’s evaluation programme and products, a glossary of terms, an inventory of current research, useful contacts, etc.

The following points will also be taken on board:

• Defining the target audiences as well as key terms and concepts.
• Addressing the relationship between the donor and the recipient.
• Using impact stories as well as impact case studies, with the actors involved at all levels and in all domains as the entry point.
• Aiming for a book which opens up and examines differing views on impact assessment.
• Taking care in sourcing, selecting and reproducing appropriate stories, and in setting up the analyses of these stories and studies.

LEAP IMPACT

The participants agreed that LEAP IMPACT has a role to play in facilitating the exchange and promotion of evaluation and impact assessment, and as such they were committed to using the platform as a medium to share and communicate ideas and to debate issues.

Specific areas of work were identified through which LEAP IMPACT could be strengthened as a community:

• Toolkits
• Module D
• Evaluation
• Impact assessment
• Food security
• Health
• Evaluating capacity development (ECD)

Other areas that will be addressed include:

• The issue of access – some members experience problems accessing the workspace.


• Providing facilitation – Ms Cummings and Mrs Kalume are willing to do some of the facilitation for the time being.
• Promoting LEAP IMPACT and the expertise of the group as a whole.


Opening speeches


Opening remarks

Modupe Akande, Research Professor, Obafemi Awolowo University

Good morning, Ladies and Gentlemen, welcome to this workshop on Smart tools for evaluating the performance of information products and services. We are all happy to be here and thank God for travelling mercies.

This meeting is a follow-up to the technical consultation on Assessing the performance and impact of agricultural information products and services, held in Bonn last year.

I appreciate, particularly, the fact that the organisers of the Bonn meeting kept their word in following up on the various recommendations made, of which this workshop is one. I would like to thank Dr Byron Mook for his untiring efforts during the year to update the Working Document Evaluating information: a letter to a project manager.

We are grateful to our hosts here in Amsterdam and all members of the organising committee for all the preparations that have been made to ensure the meeting is a successful one. And it is my hope that the outcome of our deliberations within the next few days will meet the objective of facilitating the development and use of practical and cost effective smart tools for evaluating information products.

As adequate funding is essential for the success of any meeting, we are very grateful to the Royal Tropical Institute (KIT), CTA, and the International Institute for Communication and Development (IICD) for providing generous support to make this validation and writing workshop possible.

Ladies and Gentlemen, we are privileged to have with us Mr Hans van Hartevelt, Head of Information Services of the Royal Tropical Institute (KIT). The department he heads has 55 members of staff and an extensive library collection. He has carried out several projects in many countries and is particularly known as a specialist on China. Mr van Hartevelt is also a novelist and it is therefore appropriate that I call on him to open this writing workshop.

We are also privileged to have Ms Sarah Cummings who is an information manager at KIT. She has worked in the information and development field for 21 years and has also been actively involved in impact assessment. She was also greatly involved in planning this inter-agency technical consultation and in managing the LEAP IMPACT workspace. No doubt many of us here have had interactions with her. It is my pleasure to give her the floor to introduce LEAP IMPACT.

Also present with us this morning is Ms Lisette Gast, representing IICD, where she is the policy officer for monitoring and evaluation. Ms Gast is also responsible for an evaluation


project, ICT stories, which uses the power of story telling to share lessons learned. It is my pleasure to now invite Ms Gast to introduce the Workshop Programme.

Last but not least, is Dr Ibrahim Khadar who is acting manager of Planning and Corporate Services, CTA. He has been the big hand pushing monitoring and evaluation at CTA for many years and has been responsible for the links with other organisations to steer the ship of evaluation up to the present time. It gives me very great pleasure to invite Dr Khadar to give us his presentation on Evaluation.

Ladies and Gentlemen, on your behalf, I wish to thank Hans van Hartevelt, Sarah Cummings, Lisette Gast and Ibrahim Khadar for their various interventions. I thank you all for your attention and wish you a happy stay and exciting and fruitful meeting here in Amsterdam.


KIT Information and Library Services (KIT-ILS)

Hans van Hartevelt, Head, KIT-ILS

Welcome

Ladies and Gentlemen, it is a great honour to open this workshop. We are proud to welcome so many participants from all over the world to KIT, in particular, in this historic reading room.

It is encouraging to see that CTA, IICD and KIT – three organisations which are undertaking similar initiatives regarding the evaluation of the performance of information products and services – have combined their resources to organise and fund this workshop. This joint effort represents a follow-up to the first LEAP IMPACT workshop, which took place in Bonn in 2001.

As the host of this workshop, it is my pleasure to welcome you. It is also a welcome opportunity for me to present the background to this workshop from KIT's point of view.

New funding policy

It all started in 1999, the year the Euro was officially introduced in Europe. Based on new European Union (EU) regulations and in anticipation of the economic and financial integration of the EU, the Ministry of Foreign Affairs of the Netherlands changed its funding policy towards organisations such as KIT. The open-ended agreement between KIT and the Dutch government was discontinued and has been replaced by a four-year agreement based on output funding. In other words, activities which have been funded for many decades have had to be transformed into demand-driven products and services.

Policy and objectives

Not only has the Ministry's funding policy changed dramatically; the Ministry has also reviewed its own strategy. For KIT, it was decided that the justification for funding KIT-ILS had to be found increasingly in the South. Many of you may know that our Library dates back more than two centuries, 250 years to be precise. At that time, KIT was engaged in gathering and disseminating knowledge relating to the Dutch former colonies and, later, other regions in the tropics. Since 1910 (the official establishment of KIT), many library collections from private and public organisations have been handed over to KIT. As a consequence, KIT now has vast holdings of over one million documents, including 250,000 books, 19,000 journals and 25,000 maps, all pertaining to the South. In addition, KIT holds some 200,000 documents in the form of brochures, unique manuscripts, newspapers and monographs, which belong to the Dutch cultural heritage.


Throughout its existence, the Library has been equipped to serve primarily users in the Netherlands. However, this rich history is no longer the only reason for the Ministry to subsidize ILS. Our input-based subsidy has been terminated and replaced by output funding. We have therefore had to redefine the Library’s mission and objectives. And each of our objectives has had to be subdivided into specific products – and, of course, all products had to be monitored in terms of quantity and quality. Funds will only be made available if the quantitative and qualitative targets are met.

For the purposes of this workshop, I will focus briefly on two of the five objectives of KIT-ILS. They are:

• To supply information to specific target audiences both in the North and the South

Based on the information sources available in Amsterdam, products such as the TROPAG¹ database, newsletters and the publication Gender, development and society are generated and produced in order to provide free access to the Library’s vast resources. Traditional question-and-answer services and document delivery services are also provided. The costs involved in providing these services are covered in the contract between the Ministry and KIT, with the exception of the publications, for which a subscription fee is still required.

In addition, the Library has to identify partner organisations in the South in order to support an information flow between KIT and the partners overseas, which also comprises the above services funded by the Ministry. This change in policy consequently means that the continuation of these services depends largely on their use in the South, not in the Netherlands alone.

The strategy to implement this policy can therefore only be to cooperate with institutional partners in the South.

This is not only an open invitation to benefit from this new government policy; it is also a cry for support! For an absence of demand from the South will, in the end, mean a reduction or even discontinuation of the funding of these services.

• Capacity building

In addition to its information supply activities, the Library has to contribute to the establishment of information services that strengthen organisations involved in development. The main focus of this contribution is the identification and formulation of information projects together with partners in the South. Ultimately, financially viable proposals are submitted to donor agencies for funding of the formulated projects. The total number of projects has been fixed at 8 per year.

Also here, the only strategy to implement this policy successfully can be to cooperate with institutional partners in the South.

1 Database containing full bibliographic references with abstracts, covering literature on the cultivation of food crops and industrial crops, animal husbandry, forage and pastures, aquaculture, forestry, agroforestry, postharvest operations, farming systems and environmental management in tropical and subtropical regions.


Again, as for the first objective, this objective and its products cannot succeed without the cooperation of partner organisations – our future is therefore partly in your hands.

Of course, the Dutch government has not given us carte blanche in providing products and services subsidized by the Ministry. During the first four-year cycle, we are expected to develop a methodology to implement the objectives successfully. It was therefore agreed to organise a workshop with our international partners to discuss the tools that can be used to evaluate the performance of information projects and services. And this is why we have gathered here in Amsterdam today – to discuss and develop these tools.

Given the above, you can now appreciate how serious I am in expressing my hope and expectation that this workshop will be successful in fulfilling its challenging task. It will ultimately contribute to bridging the information gap between North and South, benefit agricultural and rural development and, let us face it, keep us (and maybe you) in business.


Welcome

Sarah Cummings, KIT

Following the kind words of Professor Akande and the presentation by my head of Department, Hans van Hartevelt, I too would like to welcome you all to KIT for this workshop, ‘Smart tools for evaluating the performance of information products and services.’ In this short presentation, I am going to give a brief introduction to the workshop and also to the LEAP IMPACT community, but first I’m going to welcome a few persons in particular.

First of all, I would like to welcome all of you who were at the Technical Consultation in Bonn – it is great to see so many familiar faces here. A lot of you have put a tremendous amount of work into the tools – you can all see the draft tools if you look in the workshop file. I think that this says something of the power of our ‘community’ of practitioners, focusing on the evaluation of information.

Secondly, I would like to welcome particularly all of you who have travelled from far to get here. I really appreciate how much you have been willing to go the extra mile to get here. In fact, the ‘extra mile’ is a gross understatement – I should say the extra 1,100 kilometres i.e. 550 km there and 550 km back to get a visa.

Thirdly, I would like to welcome all the new faces. I am not going to put them in the position where they have to stand up and wave their hands, but I will say their names and we will be able to talk to them later. They are:

• Maartje op de Coul, who has just taken up the evaluation position at Oneworld.
• Boubacar Diaw from the Institut d'économie rurale in Mali.
• Ivan Kulis from the European Centre for Development Policy Management (ECDPM).
• Simon Osei from the Ghana Agricultural Information Service (GAINS).
• Joel Sam from GAINS in Ghana.
• Alec Singh from the African, Caribbean and Pacific (ACP) Secretariat.
• Daniel Thieba from Grefco in Burkina Faso.

In addition to this, there are a number of other new persons in the group who are important to the success of the workshop – and I would like these to stand up or wave a hand:

• Chris Addison is the moderator for the first two days of the workshop. He is an independent consultant with wide experience of evaluation and moderation. He is also


convenor of the European Association of Development Research and Training Institutes InfoMan working group.
• Secondly, I would like to introduce Lola Visser-Mabogunje of CTA. To me and to many of you she does not feel like a new face because she has done so much to organise this meeting, but as she is new to this LEAP IMPACT meeting I will introduce her anyway.
• Thirdly, I’d like to introduce you to John Belt. John Belt works in the Expertise Department of KIT and has extensive experience of writing workshops – we are hoping that he will help keep us on course over the next few days.
• Finally, I would like to introduce you to my colleagues Anne Hardon and Winnie van der Kroon. You will all know Anne Hardon from your correspondence – she has been extremely busy organising flights, hotel rooms and even locations here at KIT (which has not always been straightforward given that we are going to be visiting a wide range of different rooms during the course of the workshop). She is an information manager and is planning to join us during most of the workshop. In addition, Winnie van der Kroon has put a lot of time into logistics – and she and Anne Hardon are going to be backstopping and troubleshooting during this week. Indeed, if you have any concerns or problems, questions or worries, any of us from KIT – Anne Hardon, Winnie van der Kroon, Margo Kooijman and myself – will do our best to sort these out and try to make sure you have a pleasant stay at KIT.

I am now going to introduce the workshop in general terms. As Professor Akande has already reminded us, almost a year ago we had the CTA-led Technical Consultation in Bonn on Assessing the performance and impact of agricultural information products and services. At that time, we agreed that further work needed to be done on ‘smart practices’ (as opposed to best practices) which would assist information practitioners and managers in their efforts to evaluate their products and services. Ms Kooijman’s interpretation of this was that we should write ‘tools’ which could be used for this purpose, and this is the vision that we are following here with the validation and writing of smart tools. The need for these tools was based on the perceived needs of information practitioners in isolated or resource-poor situations, thrown in at the ‘deep end’ and having to evaluate their information programmes. Even in situations where there is a wealth of information available on evaluation, some practical, easy-to-use tools and approaches will be of valuable assistance to information managers. In this sense, we are both the writers and potential users of the tools, sharing our knowledge to improve evaluation practices.

This workshop has eight main objectives and I would like to emphasize these:

• Clarification of key concepts and terminologies used in the management and performance of agricultural information products and services.
• Validation of the draft toolkits.
• To develop a strategy for the future development and refinement of these tools.
• To select teams to develop and refine additional tools within a timetable.
• Formation of a small Editorial Committee.


• To promote the LEAP IMPACT approach.
• To reach some conclusions about the future directions of LEAP IMPACT.
• To provide a forum for discussion of CTA’s Impact resource book.

The workshop is taking place within the context of the LEAP IMPACT community, which has a workspace on Bellanet. It was started in May 2001 by a group of individuals with institutional affiliations interested in the possibility of a Website on the evaluation (or rather impact assessment) of information. When we mentioned this to Peter Ballantyne of IICD, he informed us that Bellanet already had such a site called LEAP: the Learning and Evaluation Action Programme. This led to the formation of the IMPACT community which is both a community of practice (a group of professionals sharing a common interest) and in some senses a more or less strategic partnership between the core institutions. Since the formation of LEAP IMPACT 18 months ago, we have had a lot of discussion – two e-conferences and this second workshop or technical consultation. The workspace has been very quiet of late – partly because many of us have been working hard on producing tools – and partly because of the tension between active cooperation online and offline. In the coming months, I am hoping that the workspace will be the place to distribute and test these tools with a wider audience. In any event, we will need to consider the future of LEAP IMPACT in the Open Session on Thursday morning.

As you can see from the programme, Karen Batjes will be going into more detail about what we have done with the tools to date and Lisette Gast will be describing more fully the programme in the coming four days.

That is all for now. I will hand back to Professor Akande.


Keynote addresses


Keynote address

Jan Donner, President, KIT

Ladies and Gentlemen,

Today marks the third day of a workshop on Smart tools for evaluating the performance of information products and services: a validation and writing workshop. However, today, your workshop is going public, aiming to interest a broader audience. You have elected to take this opportunity to give each of our three institutions the floor – KIT, CTA and IICD will consecutively address you; and since even this session is being chaired, there is an implication that we may be cut short. I must therefore be prudent and use my time well.

KIT is happy to host this event. It is a knowledge institute, and thus highly values the availability and accessibility of information. The Institute began over 90 years ago as a knowledge provider. Its mission was to equip outgoing employees of Dutch international firms with adequate information to survive in faraway places. This information was garnered through:

• Study at our libraries.
• Training at our training centre.
• Observation in our museum.

At the same time, we also wanted to provide those family members staying behind with an image of the conditions that their next of kin would encounter overseas – so that they could understand their letters, their stories and their hardships. Those leaving and those staying behind could therefore gain both an understanding and an appreciation of:

• Foreign cultures
• Foreign languages
• Foreign climates
• Foreign diseases
• The impact of all those conditions on day-to-day life

Gradually, the Institute has added to its facilities to better perform its function. A theatre has been added so that productions from other cultures can be staged. In fact, over 180 productions have been put on in this auditorium and in the large auditorium annually. We now have our own KIT-publisher to produce relevant books to the tune of almost 100


publications a year. Our biomedical research has been expanded to gain a better understanding of tropical diseases, their diagnostics and their prevention. We have also started to support developing countries through a great many projects in the fields of:

• Agriculture • Health • Gender • Sustainable development

Over the last decades, we have learned to prepare our fellow-countrymen for life abroad as well as prepare foreign nationals to live amongst the Dutch – how to cope with our peculiar habits and traditions. The Institute has many facets, and aims to provide comprehensive services and collaboration across the globe and in The Netherlands.

Information is our lifeline. The Institute can ill afford to lose its credibility by:

• Having employees of multinationals returning from foreign shores stating that they have been ill-prepared for their sojourn abroad and that local circumstances were different from what they had been led to believe by KIT.
• Having colleagues in other countries tell us that the information they received from KIT was outdated and not adequate for the tasks at hand.
• Having our colleagues return from missions around the globe stating that our knowledge and experience were insufficient to address the problems that confronted them on site.
• Having our staff report that the results of their research are nothing new and have been available in publications for years.
• Having our foreign partners tell us that they can get better and more up-to-date information and documentation from others.

Good, adequate and up-to-date information is the key to our success. We must, therefore, ensure that the information available to us at KIT is:

• Comprehensive within the stated domain
• Up-to-date
• Adequate
• Relevant
• Accessible

Quality assurance, then, is the issue that we must focus on. It is the theme of this meeting, and a topic dear to me. I would therefore like to take this opportunity to briefly address three issues.


• Culture: the decision by any organisation or person to address the quality issue has far-reaching consequences. Such a decision implies that the organisation is prepared to be self-critical and is ready to act upon criticism received from others. The first step in any process to review the quality of your organisation and the way that it functions is to raise questions like:

- How are we doing?

- Can we do better?

- Are we satisfying our clients?

Usually it will take the employees of an organisation considerable time to accept that critical reviews can be used to the advantage of the organisation – to accept that one of the consequences of a quality assurance programme is that criticism must be acted upon and is not meant to start a discussion about the merits of criticism received. Organisations tend to be complacent, sure of themselves and convinced that they are providing top-rate services even when this is not the case. It is, therefore, not easy for management to engineer a change of culture – but to do so is a precondition to the successful introduction of any quality assurance programme.

• Process quality: our Information and Library Services department is a professional service, providing library and documentation services. It has a professional staff competent to provide service at the level required in the mission. Professional aspects relate to:

- Which publications to acquire or not to acquire

- How to register new publications in our collection

- How to make our collection accessible to clients and consumers

- How to provide adequate, correct and usable information about our acquisitions

Essentially we are talking about the professional attitudes and approaches that each of us has been taught at some time or other. We should, therefore, be regularly checked to ensure that we maintain our professional standards and that we apply the things that we have been taught correctly.

• Overall quality assurance: in some instances we tend to confuse customer satisfaction with quality assurance. We have to be aware that many clients are frequently ignorant about the quality of professional services. If we provide the client with a rather pleasant rendition of what we plan to do and achieve, we may find the client much more satisfied than if he has been served by thorough, but inaccessible, top-rate professionals. However, a client may not know what he is getting into, and is thus a poor judge of the highs and lows of the standards that he is confronted with. What we really need to know is how our peers judge our performance. Can we really feel that we are at the top of our profession in both the application of our professional standards as well as in our professional creativity? What suggestions do our peers have with respect to our performance?


Can we expect this workshop to come up with a toolkit that will actually provide us with satisfactory answers to all of these questions? There is no doubt that we need such a toolkit. Peer review cannot be carried out effectively by any organisation alone. Together with other organisations, we need to firmly establish standards and develop codes of good practice. This does not mean that we should contemplate a ranking of information providers. The issue is not whether KIT is doing better or worse than CTA or IICD; the issue is whether each of our organisations is accomplishing its own goals and objectives as stated in its mission statement.

To that end, we are eagerly interested in the development of joint quality assurance strategies and possible tools. That issue is separate from the issue of impact measurement and should be kept separate. I do agree that we have some responsibility for ensuring that the services of information providers are used in ways which are appropriate. It does help if impact can be registered, but we – on the other hand – are all too well aware of potential mismatches between our services and the needs to be met. Matching services and needs is our responsibility in part only.

I do hope that your workshop convened today will generate tangible results that will benefit each of our organisations and the participants. Quality assurance will be on the agenda for a long time to come, and that issue will be a decisive factor for the continuity and strategy of each and every service provider.

Thank you for your attention.


Keynote address

Carl B. Greenidge, Director, CTA

Dr Jan Donner (President, KIT), Distinguished Guests, Colleagues,

If it is true that a camel is a horse designed by a committee, then the CTA is obviously the offspring of the same parents. When the ACP and EU, the French and English speakers in particular, met to decide on the name of this body to be responsible for facilitating communication among and the transmission of information to ACP states, they had difficulty arriving at a consensus. Many names were proposed – Centre for Agricultural Development, Technical Centre for Agriculture, Institute for Rural Cooperation, and the like. In fact, the only aspect which could command common agreement among the negotiators was the initials; so they apparently decided to take out their frustration on the rest of us by settling the initials first and then agreeing on a name which was a compromise of all the proposals. The concept of technical (as opposed to policy) was contributed by the French, whilst cooperation rather than development was chosen to “frustrate” the ACP. The contraction of Technical Centre for Agricultural and Rural Cooperation to TCARC is very inelegant in English, and CTCAR in French evokes images of the railways. CTA is not the contraction in any of the official languages of the EU or of the ACP, but that is what we have and that is why so few people either know what it stands for, or what the CTA does!!!

Actually, the Centre's responsibilities are set out in the ACP-EU1 treaties beginning with Lomé II. The two parties subsequently revised the treaty on which the cooperation is based and which spawned the CTA, which we call ‘the Centre’. Initially, under Lomé II and subsequent Lomé Conventions (up to IVbis), the Centre was charged with the provision of information on agriculture to ACP organisations and states and the facilitation of exchanges of information among them. A good deal of the energy and priority in the Centre was devoted to reacting to demand for services. Under the Framework Agreement (Cotonou) signed on June 23rd 2000 (Annex III, Article 3), however, the Centre has been directed to pursue a number of new initiatives. This new mission has been cast not so much in the mode of demand-led initiatives but in a more proactive mould. Additionally, new areas of emphasis are:

• Information communication management
• Policy formulation and implementation

1 Now consisting of 77 African, Caribbean and Pacific States. Cuba, the most recent member of the ACP (bringing the number to 78), is not a signatory to the Agreement.


• Impact and impact assessment
• Socio-political issues such as poverty alleviation and food security

That is the background to the Centre, its genesis and its mission.

Many entities are involved in one or other of these areas, but our aim is to build on the existing strength of the Centre, or what we may term its niche. The niche for which the Centre has established a reputation includes being:

• An honest broker between professionals and institutions with an interest in ACP agricultural development – ACP, EU, as well as third party stakeholders/partners
• In a position to provide a range of fora or platforms for dialogue
• Managing an extensive contacts database
• Well experienced in information and communication management (ICM) capacity development
• Able to deliver reasonably small and manageable packages of assistance to partners

The specific programmes of the Centre are as presented in the following subsections.

Provision of information products and services

The publication and dissemination of information on demand was one of the first and primary functions carried out by the Centre. In general terms, the information is directed to the management of information communications. The subject matter of this information is determined by four priority themes, which over the last five years have been:

• Conquering markets
• Intensification and optimisation of agricultural production
• Natural resource management
• Mobilising Civil Society

The material published, co-published and distributed takes a variety of forms including:

• Working Document Series (a new series started in 2000)
• Spore/Esporo – a serial publication covering topical issues and CTA activities. It also acts as a tool for the distribution of other CTA material
• Technical Publications – which document work carried out within the Centre
• A Seminar and Co-Seminar Series – published by the CTA on its own or with collaborating partners, respectively


Of late, among the most popular of the 600-plus publications which the CTA makes available have been books on poultry (in both English and French) and one called 'Where there is no vet'.

Under the new mandate management has been seeking to raise awareness of critical policy issues and to reach a wider audience by means of more extensive utilisation of electronic information services in conjunction with conventional means. The products of the Centre are therefore increasingly being disseminated in the electronic format as well as the more traditional print format with which you may be familiar. Some items, including Spore, are now available via the Internet. A search engine is to be added to the electronic Spore in order to enable more extensive use of the service. Additionally, the use of satellites for digital distribution of Spore to enable us to access a wider audience is being considered.

The Centre has also been providing support to information services of ACP states.

This activity includes:

• Provision of CD-ROMs
• Supply of reference books and CD-ROMs via the Selective Dissemination of Information (SDI) and the Dissemination of Reference Books on Agriculture (DORA)
• A Question-and-Answer Service (QAS), initially provided centrally but now in the process of being decentralised to the ACP regions. The latter process started with pilot services in those states with the best basic infrastructure on which such a service might be built to provide the widest coverage to the appropriate region. This matches a switch from the SDI to CTA partners with CD-ROM sites. There are at the moment 62 of the latter. The decentralised QAS (with CD-ROM services) is being linked with the DORA programme:
- The QAS decentralisation began with the Programme for Agricultural Information Services (PRAIS), based in South Africa

- In West Africa there are two initiatives. A Nigeria QAS was launched in July 2000 in collaboration with local agricultural research institutions and Ahmadu Bello University. This initiative followed that of the Ghana Agricultural Information Service (GAINS), which was launched in February 2000. The needs of Francophone West Africa are being addressed.
• The provision of Rural Radio Resource Packs four times annually to some 150 radio stations, at the last count (2000), in the public and private sector

The Information Products Department is responsible for providing this programme. Its main charge is to raise awareness of the issues in the sector and to disseminate appropriate material to and on the ACP agricultural sector.


Facilitating and stimulating information flows

The Centre is also charged with promoting the exchange of information and it executes this responsibility via exchanges of documents and experiences between ACP experts and partners through the instrumentality of:

• Seminars (co-seminars) and study visits;
• Supporting ACP attendance at seminars and conferences;
• Electronic services – including the CTA portals such as the ‘ICT Update’ and ‘Agritrade’.

The exchanges are focussed on those subject matters which the Centre terms priority themes.

Roughly 200 ACP experts have participated in these activities – seminars, co-seminars and study visits – annually. At the last count, 180 nationals attended, with CTA’s assistance, 47 national, regional and international conferences organised by other institutions.

Information networking support is also provided for regional entities. The objective of the support is to contribute to sound policy analysis and decision-making in both the public and independent sectors of the ACP agricultural systems. The output of this programme consists of:

• Support to regional information and policy networks.

In keeping with this strategy and in order to strengthen ICM capacity in agricultural policy analysis, the Centre provided support to:

• Conference of Ministers of Agriculture/West Africa and Central Africa – CMA/AOC
• Eastern and Central Africa Programme for Agricultural Policy Analysis – ECAPAPA
• Economic Community of West African States – ECOWAS
• Réseau d'expertise en politiques agricoles – REPA
• Southern African Development Community – SADC
• Union économique et monétaire ouest-africaine – UEMOA

At the regional level the Centre routinely maintains effective partnerships with:

• Association for Strengthening Agricultural Research in Eastern and Central Africa – ASARECA
• Caribbean Agricultural Research and Development Institute – CARDI
• CMA/AOC
• Institute for Research, Extension and Training in Agriculture – IRETA


• Southern Africa Centre for Cooperation in Agricultural and Natural Resources Research and Training (SACCAR)

In pursuit of the development of market information systems there have been two regional initiatives involving:

• CMA/AOC
• CARDI

ICT Policies and the CTA Observatory

The Centre is fortunate in having a body of ACP and EU ICT experts, the ICT Observatory, which advises the Director on developments in ICT and their implications for the ACP. The Observatory recommends priorities and strategies to be pursued by the Centre. The annual meeting of the Observatory for the year 2000 recommended that there should be greater use of ICTs in CTA services and that the Centre should promote ICTs that can be used on the weakest bandwidths.

In December 2000, the Centre released the first issue of 'ICT UPDATE', a bi-monthly bulletin of information on ICTs and their application in agricultural and rural development. It is available primarily in electronic form on the ‘agricta’ website but printed copies are also available.

AGRITRADE (www.agricta.org/agritrade/)

The Centre has responded to the new mission and to growing calls for the ACP Group to equip itself to deal with the challenges posed by the need to:

• Negotiate (collectively and/or individually) with the EU alternative trade arrangements which do not depend as heavily, as was the case with Lomé, on preferences
• Implement World Trade Organisation (WTO) commitments
• Participate in the next round of WTO negotiations and in improving the implementation of the Marrakech commitments of the Uruguay Round
• Follow carefully the reforms of the Common Agricultural Policy (CAP) and their consequences
• Monitor the EU-South Africa trade agreement
• Conclude trade agreements within regional groupings ranging from the FTAA to possibly a regional common market in the Pacific

The major weaknesses of the Group include:

• Limited ability to follow all the relevant developments and to discern their implications for the ACP-EU negotiations or, of the latter, for the WTO discussions
• Poor information gathering and management, including information sharing


• Inadequate debate and information exchange at the national and regional, but especially at the All-ACP, levels
• Failure to design its own ‘space’ as an affected and interested party
• The difficulty in arriving at common positions in a timely fashion, if at all

The Centre has therefore sought, in the context of agricultural matters, to:

• Create or heighten awareness of the critical issues and deadlines and the fora in which they are being discussed;
• Provide information on developments and policies pertinent to the ACP-EU negotiations and the related policies;
• Facilitate the exchange of this and similar information, as well as the analysis and exchange of views on these developments;
• Create a platform for the exchange of ideas and the transmission of ideas to those parties in the ACP states who have an interest in the negotiation of the agreements and in the outcome of those negotiations.

The portal is organised as follows:

• A current news bulletin – providing highlights of current developments relevant to existing and future ACP-EU trade arrangements, with a brief commentary on the significance of the news
• A guide to the electronic, internet and printed sources of information on major issues, with short reviews of important material and websites
• A signpost to discussion fora (including NGO and academic fora) and their conclusions regarding matters of interest
• Internet-based and electronic discussion platforms intended to facilitate intra-ACP discussion of these matters
• Briefs on a variety of matters. These analyses may range from overviews to commentaries on proposals currently on the table. With the assistance of e-mail, readers are able to download resource material related to this service. Specific areas covered are:
- Market access

- WTO agreement on agriculture

- CAP reforms – by sector: including dairy, cereals, fruit & vegetables and poultry

- EU positions in the WTO

- EU initiatives and related matters – such as ‘Everything But Arms’ (EBA), WTO sanitary and phytosanitary (SPS) regulations and administrative arrangements

- The ACP-EU trade negotiations – agricultural aspects

- The commodity protocols – beef and veal, sugar, bananas – as well as cocoa, coffee and rice


• A quarterly bulletin on agricultural trade issues in general not restricted to negotiations

The Communications Channels and Services Department is responsible for managing this programme. It is specifically charged with facilitating the development of networking, information flows, exchanges and contacts among ACP actors. In pursuit of the modified mandate, it pays special attention to facilitating the integrated use of traditional and modern ICTs.

Strengthening information and communications management and supporting the formulation of information strategies

The Centre’s third area of work is the strengthening of information and communication management capacity. In this regard, the Centre’s main activities are to develop the capacities of ACP partners through training and information networking support:

Capacity development
The capacity development programme takes the form of:

• Training of partners in a variety of skills relevant to the management of IC:
- Management of databases

- Public speaking

- Production of annual reports

- Management of QASs

- Scientific editing

- Rural radio production

- Web page design
Over 300 networks benefit from these efforts each year.

• Information management policies
The second set of services provided under this programme involves the development of strategies for information management. The objective is to contribute to sound policy analysis and decision-making in both the public and independent sectors of the ACP NAS. The output of this part of the programme consists of:

- Development of partnerships with independent sector organisations

- The development of information management policies

As was the case at the regional level, the Centre has provided support to ACP states to strengthen ICM policy analysis and in this regard has also undertaken three national studies to develop ICM for formulating and implementing sustainable agricultural systems and natural resource management.


Additionally, in collaboration with a number of private and public sector organisations the Centre has been promoting the establishment of effective market information systems (MIS) at national and local (and regional as already mentioned) levels. The philosophy underlying the initiative, based on a careful study of farming systems, is that 'an effective MIS should be localised, demand driven, community or sector-specific, supported at local and national levels and established and run on a participatory basis'. In pursuit of this policy, national pilot projects are being supported in:

- Ghana

- Kenya

- Uganda

In pursuit of effective national partnerships the Centre has been looking at the effectiveness of CTA interventions and means of further reinforcing the ICM capacities of the NAS of ACP states. A number of pilot projects have been established. In 2000 they were located in:

- Burkina Faso

- Ethiopia

- Uganda

- Madagascar

- Jamaica

- Cameroon

- South Africa (2)

- Mali

• Science and Technology Strategies
The Centre has been working as a member of informal EU and ACP bodies aiming to mobilise (ACP and EU) support and funding for demand-led research, as well as for implementable national and regional science and technology policies which involve dialogue with, and contributions from, all the relevant categories of stakeholders, namely the research community, users (consumers and producers) and policy-makers.

In pursuit of efforts to enhance our understanding of communication, extension and innovation systems in ACP states, the Centre commissioned a study in 2000 to:

- Shed light on the level of utilisation of the Internet by different actors

- Analyse the constraints on access to available information

- Identify the main opportunities offered

The Centre is also in the process of contracting a consultant to examine the incidence of science and technology policies in ACP states, the extent to which these policies take account of the


agricultural sectors and the role of representatives in the process of science and technology policy formulation.

The results and recommendations will be taken into account in the Department’s future programmes.

Impact assessment and evaluation

Impact assessment is especially relevant to this week’s exercise. We have been charged with ensuring that programmes implemented or funded by the Centre have the intended consequences as regards facilitating poverty alleviation. This is a process which requires ex-post assessment as well as constant monitoring and periodic adjustment of programmes. To be effective, it also requires that we take it into account when planning programmes. Impact assessment is especially problematic when applied to information services.

The value of appropriate tools and ‘simple’ methodologies cannot, therefore, be exaggerated in this context.

Our work in this area is longstanding. With CORAF in Senegal and NAMDEVCO in Trinidad and Tobago, we have undertaken studies on the impact of agricultural research in West Africa and Trinidad and Tobago, respectively.

In future there will be more annual evaluations. Additionally, those evaluations will be more participatory and will be undertaken both internally and with the assistance of external consultants.

The Centre’s work on impact assessment is being undertaken by a newly established service in the Centre, namely Planning and Corporate Services (P&CS). P&CS has other responsibilities arising from the need to extensively and systematically integrate the work of the different Departments of the Centre. In pursuit of that mission, it has been charged with developing the methodologies and approaches appropriate for the incorporation of the broader socio-economic and political goals of Cotonou into the Centre’s programmes. These include the cross-cutting issues such as gender and social capital as well as food security and the elimination of poverty.

Conclusion

I wish to express a word of thanks to all those who have been involved in organising this important event. The Centre is especially pleased at the range and quality of participants. We look forward to the fruitful conclusion to what is proving to be a very successful conference.

I thank you.


Part 1

Toolkit papers


1.1

An evaluation framework for evaluating the performance and impact of information projects, products and services

Ibrahim Khadar, CTA

Introduction

An evaluation framework can be defined as 'a description of the overall plan, the context and the underlying philosophy for the different evaluation activities'1. It can be defined, in a narrower sense, as 'a well-defined group of questions on which the structure of enquiry is based. They form the specifications that map the route the evaluation must follow'.2 Both definitions underline the importance of an evaluation framework in mounting evaluation exercises. The concept of evaluation frameworks has played an important role in research efforts aimed at developing a suitable methodology for evaluating information programmes.

This paper draws on the results of an International Development Research Centre (IDRC)-funded research project and other sources to highlight some of the attempts that have been made at developing evaluation frameworks aimed specifically at information programmes. It also indicates a number of difficulties associated with the development and application of these evaluation frameworks. An 'evaluation road map' (Performance-impact evaluation of information programmes (PIP), now commonly referred to as the Evaluation road map for information model (ERMi)3) is presented, which provides an alternative and flexible approach for planning evaluation exercises.

1 Los Alamos National Laboratory – http://set.lanl.gov/programs/evaluatio/Framework/Framework.htm
2 NP HIS evaluation (anon?)
3 I designed the ERMi model in September 2002 in response to recommendations made at the Bonn workshop. It therefore constitutes work in progress.


Five inter-agency consultations on evaluation (1998–2002)
The following five workshops are referred to in this paper:

• The Wageningen workshop: Assessing the impact of information and communication management on institutional performance (Wageningen, The Netherlands, 27–29 January 1998)
• The Entebbe workshop: Impact assessment of agricultural research in Eastern and Central Africa (Entebbe, Uganda, 16–19 November 1999)
• The London workshop: The impact of information on decision-making (London, UK, 5–6 December 1999)
• The Bonn workshop: Assessing the performance and impact of agricultural information products and services (Bonn, Germany, 9–12 October 2001)
• The Amsterdam workshop: Smart practices for evaluating the performance of information products (Amsterdam, The Netherlands, 19–22 November 2002)

The Wageningen, London, Bonn and Amsterdam workshops dealt mainly with issues relating to the evaluation of information, while the Entebbe workshop dealt with agricultural research. All these workshops were organised through the collaborative effort of various developmental organisations (donor agencies, international and regional bodies, and developing country national organisations). Collaboration usually involved cost sharing, planning and coordination of the workshops, and various written and oral contributions to the workshop deliberations.

Examples of evaluation frameworks for information programmes

IDRC can be credited with initiating the first substantive research project aimed at developing a suitable methodology for assessing the impact of information. This pioneering research project, which was carried out from 1992 to 2000, revolved around the testing of the Preliminary Framework Methodology (also known as the PF methodology) in seven countries in Africa, the Caribbean, Latin America and Asia. A full account of the PF methodology has been published by IDRC (Menou 1993) and the results of the research project have been published by the International Federation for Information and Documentation, FID (McConnell 1999; Horton Jr. 2000).

According to the FID report, 'the PF methodology attempts to measure the part played by information (among other factors) in decision-making through a combination of qualitative and quantitative approaches to monitoring of inputs, outputs, and outcomes'. The PF methodology identifies four distinct stages in the evaluation process:

• Preparatory steps: including describing the 'information use environment' and identifying the target audience.


• Planning and design: including identifying the primary objectives of the evaluation and defining indicators.

• Monitoring and measuring: involving data gathering on inputs, outputs, benefits, costs and indicators.

• Communicating the results: aimed at providing the evaluation feedback to the target audience.

The following indicators4 provide the backbone to the PF methodology:

• Performance indicators relating inputs to outputs

• Effectiveness indicators relating outputs to usage

• Cost-effectiveness indicators relating inputs to outcomes

• Impact indicators relating usage to outcomes.
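By way of illustration only, the four indicator groups above can be read, in the simplest quantitative case, as ratios between the quantities they relate. The short Python sketch below makes this reading concrete; the figures and variable names are hypothetical, and the ratio interpretation is an assumption for illustration rather than part of the PF methodology itself.

```python
# Hypothetical illustration: the simplest quantitative reading of the four
# PF indicator groups treats each as a ratio between the quantities it relates.
inputs = 40_000.0      # e.g. budget spent (EUR) - hypothetical figure
outputs = 12.0         # e.g. bulletins produced
usage = 3_600.0        # e.g. copies actually read or downloaded
outcomes = 540.0       # e.g. documented decisions informed by the bulletins

performance = outputs / inputs          # outputs obtained per unit of input
effectiveness = usage / outputs         # usage generated per output
cost_effectiveness = outcomes / inputs  # outcomes obtained per unit of input
impact = outcomes / usage               # outcomes per instance of usage

for name, value in [("performance", performance), ("effectiveness", effectiveness),
                    ("cost-effectiveness", cost_effectiveness), ("impact", impact)]:
    print(f"{name}: {value:.4f}")
```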

The report of this research project, discussed at the London workshop and published by FID in 2000 (Horton Jr., op. cit.), revealed a number of problems with the application of the PF methodology during the case studies. These problems, which largely contributed to the inconclusive nature of the findings, included:

• Failure of the case studies to adhere to the PF methodology (tracking fewer variables, indicators and channels, reducing the monitoring period, etc.)

• Ambiguity over the primary object being assessed, and whether the perceived benefits truly were indications of impact, or merely user satisfaction

Although the project failed to deliver its expected output, it played an important role in raising awareness about impact assessment and in convincing organisations and individuals involved in information and communication management of the need to take a stronger interest in evaluation. As a result of this encouragement, and buoyed by developments in ICTs, the search for a suitable evaluation methodology has continued, leading to a number of frameworks being proposed in the literature. Examples include:

• A conceptual framework for the study of the impacts of the Internet (Daly, J.): [This framework is based on the indicators 'Penetration', 'Use' and 'Impact'.]
• ICTs and Development: Testing a framework for evaluation (Young, V.): [This framework is based on the indicators 'Information', 'Borderless connection', 'Timeliness' and 'Improving costs and benefits'.]

4 The PF methodology considers indicators as identifiable quantitative and qualitative measures


• ICTs life-cycle (Baark and Heeks, 1998): [This framework is built around the critical benchmark events: choice of technology, purchase and installation, assimilation and use, adaptation, diffusion and innovation.]

Limitations of evaluation frameworks

Despite the efforts various researchers and evaluation specialists have made to develop the ideal framework for evaluating information programmes, the available options still pose a number of weaknesses, including the use of potentially confusing terminology, lack of flexibility, and incompleteness of the framework design. These limitations, which may be interrelated, are discussed below:

• The practice of designing evaluation frameworks essentially around groups of indicators, without sufficiently clarifying the link between the types of evaluation and the corresponding indicators, can be very confusing. For example, in the case of the PF methodology, the terms 'performance', 'effectiveness', 'cost-effectiveness' and 'impact' are employed as groups of indicators, whereas the entire framework is meant to be used to evaluate impact. This gives the impression that evaluation and impact assessment constitute the same exercise, thus making the role of impact indicators in the exercise unclear.
• Evaluation frameworks currently in use in the information field are quite rigid because, in order to apply the framework fully, all the groups of indicators (e.g. efficiency, effectiveness, relevance, sustainability, cost-effectiveness, etc.) have to be investigated each time an evaluation exercise is undertaken. This often makes the scope of the evaluation too broad and unmanageable. On the other hand, any deviation from the standard model, as seen in the case of the PF methodology, may be considered 'unacceptable'.
• The diagrammatical presentation of evaluation frameworks tends to feature mainly the indicators, thus leaving out key strategic elements or planning concerns, such as the type of evaluation envisaged, the scope and focus of the evaluation, the methods, and the involvement of interest groups in the evaluation exercise. Although these issues may be adequately covered in the overall evaluation plan or 'terms of reference', being able to capture all of them in the same diagram would provide a more complete picture of the strategic options available in pursuing the evaluation exercise.

An alternative approach to planning evaluation exercises

The Bonn workshop revisited the question of evaluation frameworks and, as part of the preparation for the workshop, an inventory of evaluation frameworks dealing specifically with information programmes was carried out (Cummings, S. 2001). After discussing this issue at length, the workshop participants came to the following conclusion:

The content of evaluation frameworks is rich, with different frameworks suited to the analysis of different conditions. The frameworks should therefore serve as road maps to help practitioners


define hierarchies of objectives (e.g., goals, purposes, outputs, activities, etc.) all in the context of time and space.

In attempting to translate this conclusion from the Bonn workshop into a request for further research on evaluation frameworks, I realised that what was needed was not an alternative or a better evaluation framework, but rather a holistic and more comprehensive network of evaluation concepts that can provide the basis for developing specific evaluation frameworks to match different requirements. This holistic network of concepts can be viewed as a map, while specific frameworks derived from it would constitute routes to be followed for given evaluation exercises. The result of this reflection is presented in Figure 1, entitled "Evaluation Road Map (ERMi): a conceptual guide for performance evaluation and impact evaluation (focus on information programmes)".

The development of the ERMi has been facilitated by ideas generated from the Wageningen, Entebbe, London and Bonn workshops. The ERMi design has been refined and adapted significantly on the basis of comments and suggestions made by participants at the Amsterdam workshop5.

ERMi has several important characteristics:

• It is a detailed map that provides the key elements (or signposts) around which the evaluation frameworks (or routes) can be derived
• It contributes to making evaluation frameworks more realistic by highlighting the options (i.e. choices and limitations) to be taken into account when designing an evaluation exercise
• It treats performance and impact evaluation as separate, albeit interrelated, types of evaluation
• It recognises the complex, non-linear relationships between evaluation indicators
• It is a comprehensive multi-dimensional road map – a vertical dimension reflecting five elements (or signposts) that are interrelated via a horizontal dimension that allows movements back and forth on the map. In order to facilitate the presentation of the ERMi concept, the five elements are reported below as they appear in Figure 1, moving from left to right:
- evaluation type: involves choosing between performance evaluation and impact evaluation or selecting elements from both types

- evaluation scope and focus: the evaluation exercise could focus on one or more of four areas of development intervention (i.e. process and management, products and services, individuals, organisations and networks, and society). Collectively, these areas of intervention constitute the entire scope of evaluation exercises.

5 Useful suggestions were also made on the ERMi design when I presented it at a workshop on "results-based evaluation" held in Maastricht (12-13 December 2002).


Figure 1: Evaluation Road Map (ERMi): a conceptual guide for performance evaluation and impact assessment (focus on information programmes). Source: Ibrahim Khadar. [The diagram presents five interrelated columns – evaluation type; evaluation scope and focus; performance-impact space; evaluation methods; interest groups – spanning inputs and management at the activity level, outputs (products and services) at the project/service level, direct outcomes for individuals, organisations and networks at the programme level, and indirect outcomes and development impact at the level of society.]

- performance-impact space: which depicts the generic types of indicators as well as the level of investment (expected to increase from projects to programmes to the organisation as a whole) that would correspond to the levels of intervention indicated in the "scope and focus" column.

- evaluation methods: aimed at ensuring that the chosen evaluation method is in line with the choices made regarding the type of evaluation, its scope and focus, and the area of the performance-impact space targeted by the evaluation.

- interest groups: relates to the possible involvement of various stakeholders and other actors in the design and implementation of the evaluation, as well as deciding which strategies to adopt regarding disseminating evaluation feedback to various audiences.
• It is neither a method nor a framework per se.

It is important to note that the definition of the evaluation route is an iterative process involving movements in all possible directions (i.e. considering all the options available on the map) until an acceptable route1 has been identified. One of the main advantages of the ERMi model is that it provides a relatively simple visual representation of the key evaluation concepts in a single diagram, rather than through several fragmented descriptions or illustrations.
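As a purely illustrative reading of the map-and-route idea, the sketch below represents one possible evaluation route as a selection from each of the five ERMi elements described above. The option lists and the chosen route are hypothetical and are not prescribed by the model; in practice the route would be revisited iteratively.

```python
# Hypothetical sketch: an ERMi "route" as a choice from each of the five
# elements (columns) of the map. The options listed are illustrative only.
ermi_elements = {
    "evaluation type": ["performance evaluation", "impact evaluation"],
    "scope and focus": ["process and management", "products and services",
                        "individuals, organisations and networks", "society"],
    "performance-impact space": ["inputs and management", "outputs (products and services)",
                                 "direct outcomes", "indirect outcomes"],
    "evaluation methods": ["management-oriented (e.g. logframe)",
                           "scientific/experimental", "participatory",
                           "qualitative/anthropological", "economic impact assessment"],
    "interest groups": ["managers and staff", "partner organisations",
                        "stakeholder groups", "experts", "media"],
}

# One iteration of route definition; the evaluator moves back and forth
# across the map until the route is judged acceptable.
route = {
    "evaluation type": "performance evaluation",
    "scope and focus": "products and services",
    "performance-impact space": "outputs (products and services)",
    "evaluation methods": "management-oriented (e.g. logframe)",
    "interest groups": ["managers and staff", "partner organisations"],
}

for element, choice in route.items():
    assert element in ermi_elements  # every signpost on the route exists on the map
    print(f"{element}: {choice}")
```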

The ERMi: What next?

An important message embedded in the ERMi design is that the standardisation of evaluation concepts and terms would significantly widen the scope for further advancements in evaluation practice. Various concepts and related terms are employed as signposts in the ERMi, thus making sure that variations in meaning are reduced and the relationships between the various terms are clarified.

In the information field, the ERMi presents an opportunity for a more focused debate on evaluation practice. As the debate unfolds, compromises would be achieved and new ideas on evaluation would emerge. In the meantime, attempting to convert the conceptual ERMi into an operational tool for planning evaluations might be worth a try, especially in the context of collaborative work in which consultation and debate take centre stage.

References

Bellamy, M. 2000. Approaches to impact evaluation (assessment) in agricultural information management; selective review of the issues, the relevant literature and some illustrative case studies (CTA Working Document Number 8021; 33pp.)

1 The definition of an "acceptable evaluation route" is not provided in this paper.


Boissière, N. 2000. Assessing methodologies in studies of the impact of information: a synthesis. In: Defining and assessing the impact of information on development: building research and action agendas. Horton, F.W. Jr. (ed.). FID Occasional Paper 16. pp 49–64. International Federation for Information and Documentation, The Hague, the Netherlands.

Correa, A. et al. 1997. Rural information provision in developing countries – measuring performance and impact (prepared for United Nations Educational, Scientific and Cultural Organization (UNESCO) by IFLA; 118 pp.)

CTA, 2002. Assessing the performance and impact of agricultural information products and services. Summary report and recommendations of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001

CTA, 2002. Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001 (CTA Working Document Number 8027)

Cummings, S. 2002. Conceptual frameworks and methodologies used for evaluating agricultural information products and services [in CTA, 2002. Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP-IMPACT Technical Consultation, Bonn, Germany, 9– 12 October 2001 (CTA Working Document Number 8027; pp. 56–67)]

Cummings, S. 2002. Inventory: Conceptual frameworks and methodologies used for evaluating agricultural information products and services In CTA, 2002. Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001 (CTA Working Document Number 8027; pp. 130–156)]

ECART, ASARECA, CTA, and GTZ. 2000. Workshop on impact assessment of agricultural research in Eastern and Central Africa (Entebbe, Uganda, 16–19 November 1999)

Horton, D., Ballantyne, P., Peterson, W., Uribe, B., Gapasin, D., Sheridan, K. 1993. Monitoring and evaluating agricultural research, A source book, CAB International in association with International Service for National Agricultural Research (ISNAR).

Horton, Jr. F.W. (editor) 2000. Defining and assessing the impact of information on development: building research and action agendas (FID Occasional Paper 16)

Khadar, I. 2002. Evaluation concepts and terminology: A discussion note In: CTA, 2002. Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001 (CTA Working Document Number 8027; pp. 51–55)

McConnell, P. 2000. Building on IDRC’s research program on ‘Assessing the impact of information on decision-making’. In: Defining and assessing the impact of information on


development: building research and action agendas. Horton, F.W. Jr. (ed.). FID Occasional Paper 16. pp 73–105. International Federation for Information and Documentation, The Hague, the Netherlands.

Meadow, C. T. and Yuan, W. 1997. Measuring the impact of information: Defining the concepts. Information Processing and Management 33 (6) 697–714. Elsevier Science Ltd, Great Britain.

Mook, B. 2001. Evaluating information: A letter to a project officer (CTA Working Document Number 8025; 53pp.)

OECD. 2001. Evaluation feedback for effective learning and accountability (report of DAC Tokyo workshop on 'Evaluation feedback for effective learning and accountability', 26–28 September 2000).


1.2

Indicators: between hard measures and soft judgements

Andreas Springer-Heinze, Gesellschaft für Technische Zusammenarbeit (GTZ)

Introduction

A generic definition states that indicators are 'yardsticks' of change, especially of a desired change – the achievement of objectives. This means, they always make reference both to the evolution of the observable “real world” and, at the same time, to the values people attach to the ongoing change. Indicators thus reflect reality and help to interpret it. The box contains a more formal definition, based on a literature review.

Box 1: A definition of indicators

Variables that:

• represent the aggregate status or change in status of any group of persons, objects, institutions, or elements under study;
• are essential to a report of status or change of status of the entities under study.

Source: Jaeger, R., 1978

The definition clearly states the relationship of an indicator to the report for which it is used. Indicators substantiate the answers to the questions posed in an evaluation and therefore depend on the context at hand. In our case, two different tasks are considered, performance assessment and impact evaluation of information products and services.

Both tasks follow a logic that is common to every evaluation: starting by clarifying the objectives and leading questions of either the performance assessment or the impact evaluation, and going on to define a model of reality, that is, the (hypothetical) course of events leading to good performance or a desired impact.

In the case of performance assessment, such a model would name the information products and services along with the service providers and clients, and describe how and to which end


the services are provided. These concepts are then used to derive indicators on the efficiency and effectiveness of service provision that are to be measured further on in the performance assessment. The procedure in an impact evaluation is similar – here, the model is basically a pathway that leads from an initial action to the desired changes, the intermediate steps being the output of the action taken, the direct benefit to the users of it and the social improvement achieved. Indicators therefore refer to output, the outcome for users, and the impact on the wider social and economic conditions.

How indicators can be identified is explained in the section below. Here, it is important to remember that indicators have to focus on the issues chosen to be studied in a performance assessment or evaluation, and have to relate to the questions posed by them. The formulation of indicators is followed by the collection of data, the analysis and interpretation of the information and finally the reporting on results.

Identifying indicators

The starting point is to verify the analytical model of service performance or of the impact in question, together with the guiding concerns relevant to the evaluation. They need to be formulated in such a way that they can be assessed.

Step 1 – Operationalise the issues at stake
A number of elements in the model may become indicators right away, e.g. the number of users of an information service. Yet other issues and objectives that come under review in evaluating knowledge are rather abstract, e.g. terms such as the 'pertinence' or 'usefulness' of information, and the 'awareness' or 'capacity' created by providing it. The first step in developing indicators for these objectives is a concept analysis that defines the meaning of the subject under review, breaking it down into its various key aspects and elements. This task is called 'operationalisation' and goes as far as defining the concept in terms of concrete things and actions describing its existence.

Box 2: Example of an operationalisation

The 'quality of information service provision' can be specified as embracing:
• ease of access to the service (quick response to demands, low technical requirements, distance to the point of service)
• relevance of the content provided (service takes up on real problems of users, offers choices, offers new perspectives, points to other sources of assistance and complements them)
• clarity of the presentation (logical outline, using didactic and illustrative material)

'Learning', the direct benefit of an information product, may be conceptualised as:
• improved knowledge (ability to recognise and solve problems)
• more conscious behaviour (self-confidence, continued interest and demand in the subject, spreading the message to others)
• regular use of skills (number of occasions in which new knowledge is applied)


Instead of addressing the abstract concept, the evaluator tries to ascertain specific items. Each of them may become an indicator.
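As a hypothetical illustration of what the output of such a concept analysis might look like in practice, the sketch below records the Box 2 operationalisation as a nested structure from which candidate indicators can later be drawn. The wording follows the box; the representation itself is only one possible convention, not part of the method.

```python
# Hypothetical representation of a concept analysis (operationalisation):
# each abstract concept is broken down into key aspects, and each aspect
# into concrete items that may later become indicators.
operationalisation = {
    "quality of information service provision": {
        "ease of access to the service": [
            "quick response to demands",
            "low technical requirements",
            "distance to the point of service",
        ],
        "relevance of the content provided": [
            "service takes up real problems of users",
            "offers choices and new perspectives",
            "points to other sources of assistance",
        ],
        "clarity of the presentation": [
            "logical outline",
            "didactic and illustrative material",
        ],
    },
    "learning (direct benefit of an information product)": {
        "improved knowledge": ["ability to recognise and solve problems"],
        "more conscious behaviour": ["self-confidence",
                                     "continued interest and demand in the subject",
                                     "spreading the message to others"],
        "regular use of skills": ["number of occasions in which new knowledge is applied"],
    },
}

# Each concrete item is a candidate indicator for step 2 (selection).
candidates = [item for aspects in operationalisation.values()
              for items in aspects.values() for item in items]
print(f"{len(candidates)} candidate indicators identified")
```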

This task follows theoretical logic on the one hand, but it also needs to reflect the views of stakeholders. Whether an information product is really useful depends on the values of the community served. In order to arrive at an appropriate representation of the issues, turning to the clients for their views is very helpful. Taking a participatory approach may also save time.

Step 2 – Select indicators
Step 1 will produce various possibilities for expressing the meaning of the issue under consideration. The second step, therefore, is to select a definite set of indicators to be used. The idea is to provide as many reference points as necessary to capture the essential elements of the basic model. This means that the performance assessment of an information service should include indicators on the services provided, the clients and their benefits. Impact assessments need indicators on the outputs of a programme, the immediate beneficiaries and changes in the wider community.

The number of indicators depends on the resources available for a study. Except for large (scientific) studies, it will be difficult to handle more than 10–15 indicators at once. Generally, it is better to have a few, but significant, indicators. The degree of detail is an issue here as well: in order not to confound major and minor issues, indicators should roughly be at the same level of abstraction. If a detailed analysis requires 'zooming in' on a particular aspect, the respective indicators need to be lumped together again when the study returns to the 'bigger picture'.

Besides being representative of the basic model, indicators have to be selected according to a number of methodological rules. The following box lists commonly accepted criteria defining good indicators. They should be checked to determine which of the items identified in step 1 should be retained as indicators.

Box 3: Methodological requirements for indicators

Validity – Does it measure the condition / result?
Reliability – Is it a consistent measure over time?
Sensitivity – Will it be sensitive to changes in conditions?
Simplicity – Will it be easy to collect and analyse the information?
Utility – Will the information be useful for decision-making and learning?
Affordability – Can the programme / service provider afford to collect the information?

Source: Canadian International Development Agency (CIDA), 1999 (slightly adapted).

For obvious reasons, the final set should avoid overlapping or duplication.


Formulating indicators

Step 3 – Specify indicators
After determining the elements to be measured, evaluators need to be specific in the formulation of their indicators. Basically, two different principles may be used, viz. a qualitative and/or a quantitative indicator formulation.

Qualitative indicators involve descriptive information. They specify the topic in the form of a question, possibly naming concrete things to look for. The idea is to capture processes and qualitative differences, not to count items. Information is often gathered from individual or group judgements and personal observations. Nevertheless, qualitative indicators can be transformed into quantitative information with descriptive scales (a typology of individual perceptions on an issue) or with nominal scales (e.g. number of good / medium / bad ratings given by observers, or the number of generally positive statements on an issue).
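A minimal sketch of the second transformation just described, using hypothetical observer judgements: ratings on a simple good/medium/bad scale are reported as counts on a nominal scale, together with one possible summary figure.

```python
from collections import Counter

# Hypothetical observer judgements on one qualitative topic, e.g.
# "clarity of the presentation" of a newsletter, rated good / medium / bad.
ratings = ["good", "good", "medium", "bad", "good", "medium", "good", "medium"]

counts = Counter(ratings)                      # nominal scale: frequency of each rating
share_positive = counts["good"] / len(ratings) # one possible summary figure

print(dict(counts))                 # {'good': 4, 'medium': 3, 'bad': 1}
print(f"positive ratings: {share_positive:.0%}")
```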

The second principle is based on the quantitative formulation of indicators. Obviously, they can only be used for items that can be counted. Often, quantitative data are generated in the process of providing the service and only need to be collected, e.g. a number of publications or downloads from a website. These data are relatively easy to obtain. Others require explicit measurement based on available statistics or formal questionnaires. The indicator is expressed in terms of a numerical or ordinal scale.

Both types of indicators are completed by adding the unit of analysis in terms of the level of the social system (who - individuals, communities, organisations, networks etc.), the period of measurement (when), and, if relevant, geographical coverage (where). The examples in Box 4 build on the material presented in Box 2.

Box 4: Examples of qualitative and quantitative indicators:

A quantitative indicator for the 'ease of access to an information service':
'The distance of rural communities in province x to a low-cost telecentre with a helpdesk does not exceed 30 kilometres'
or: 'Number of visitors to the telecentre per year'

A qualitative indicator for 'improved knowledge':
'Positive and negative experiences of information users with solving the problem for which information was sought'
or: 'Ability of information users to discuss the problem in question'
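The sketch below is a hypothetical convention, not part of the guidelines: it records the Box 4 examples as structured indicator entries so that the statement, its type, and the unit of analysis, period and geographical coverage are kept together. The period and coverage values shown are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    """One indicator, specified with its unit of analysis, period and coverage."""
    statement: str
    kind: str                       # "quantitative" or "qualitative"
    unit_of_analysis: str           # who: individuals, communities, organisations...
    period: str                     # when the measurement refers to
    coverage: Optional[str] = None  # where, if geographically relevant

indicators = [
    Indicator(
        statement=("Distance of rural communities to a low-cost telecentre "
                   "with a helpdesk does not exceed 30 km"),
        kind="quantitative",
        unit_of_analysis="rural communities",
        period="2003",
        coverage="province x",
    ),
    Indicator(
        statement=("Positive and negative experiences of information users with "
                   "solving the problem for which information was sought"),
        kind="qualitative",
        unit_of_analysis="individual information users",
        period="12 months after service launch",
    ),
]

for ind in indicators:
    print(f"[{ind.kind}] {ind.statement} (who: {ind.unit_of_analysis}, "
          f"when: {ind.period}, where: {ind.coverage or 'n/a'})")
```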

In choosing between qualitative and quantitative formulations, it can be said that quantitative indicators are more precise, yet they restrict interpretation to a particular framework of analysis. Qualitative indicators, however, give a better view of reality as well as a better understanding of the reasons for change. Both types of indicators are 'objective' in their own way. While quantitative data are less easily contested, qualitative information helps to show the relevance of the 'hard facts'. In the context of an open and dynamic learning


process, qualitative information is often more useful for convincing people of the value of a programme, and it is closer to the decision-making situation.

Broad-based studies operating at a general level may not get to the level of detail required for quantitative measurement. For example, instead of looking at the different items determining the 'quality of information service provision' (see Box 2), a qualitative indicator may simply ask for the overall perception and level of satisfaction of users.

Step 4 – Determine a reference base

In order to interpret an indicator value, be it qualitative or quantitative, it is necessary to set a point of reference against which the observation or the measurement can be compared. The ideal reference would be baseline information on the state of an indicator at a historical point in time, referring exactly to the items specified in the indicator (organisational unit, location, etc.). However, except for some simple measures, such as numbers of publications or copies distributed, such information is not normally available. The reason is that issues evolve over time, and so do the evaluation questions. A static comparison (before/after) is only meaningful in purely technical projects – much less so in the field of knowledge and social learning.

Besides historical data, there are alternative reference values that may be used. Their basis of comparison is a particular norm, either the objectives of the programme or information service in question, or a generic norm. After all, we know what to expect from a particular type of service or product. The following box shows some ways of circumventing the problem of a missing baseline.

Box 5: Different types of references against which to compare measurements

Trends, e.g. a consistent increase in requests for support, or increasing feedback from readers to a publication
Thresholds, e.g. at least three districts covered by a database, or the minimum number of students attending a course
Targets, e.g. the number of documents distributed by the end of year 2004, or the proceedings of a conference completed and available in printed form by 2004
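The following sketch, with made-up monitoring figures, shows how a measured indicator value might be compared against the three kinds of reference listed in Box 5. The functions, thresholds and targets are assumptions for illustration only.

```python
# Hypothetical comparison of a measured indicator value against the three
# reference types of Box 5: a trend, a threshold and a target.
yearly_requests = {2000: 140, 2001: 180, 2002: 230}   # made-up monitoring data

def meets_trend(series):
    """Trend reference: values increase consistently from year to year."""
    values = [series[year] for year in sorted(series)]
    return all(later > earlier for earlier, later in zip(values, values[1:]))

def meets_threshold(value, minimum):
    """Threshold reference: the value reaches at least the agreed minimum."""
    return value >= minimum

def meets_target(value, target):
    """Target reference: the value reaches the level set for the target date."""
    return value >= target

latest = yearly_requests[max(yearly_requests)]
print("consistent increase in requests:", meets_trend(yearly_requests))   # True
print("at least 200 requests handled:", meets_threshold(latest, 200))     # True
print("target of 300 requests by 2002:", meets_target(latest, 300))       # False
```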

Fortunately, qualitative baselines are much easier to find. All development programmes and at least some information services have documents concerning the problems they are supposed to address, so that the progress made since the inception is not too difficult to discern.

Step 5 – Check on indicator quality

Once the major conceptual and methodological questions of indicator formulation are mastered, it makes sense to apply another type of indicators, this time for the quality of the


indicators proposed for the study. A widespread formula for achieving this is the set of SMART criteria of indicator quality.

Box 6: The SMART criteria of indicator quality

Indicators ought to be:
S = specific, yet simple
M = measurable
A = ambitious yet achievable
R = realistic
T = time-bound

These criteria are used for a final check on the indicator. It is not easy to satisfy all criteria at once. For the sake of a cost-effective evaluation, it is better to leave aside indicators that are too ambitious or costly to measure.
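A minimal sketch of such a final check, under the assumption that the evaluator records a yes/no judgement for each SMART criterion: the judgements themselves remain the evaluator's, and the indicator names and verdicts below are hypothetical.

```python
# Hypothetical final check: each proposed indicator is screened against the
# SMART criteria; indicators that fail are revised or left aside.
smart_criteria = ["specific", "measurable", "achievable", "realistic", "time-bound"]

proposed = {
    "number of telecentre visitors per year": {
        "specific": True, "measurable": True, "achievable": True,
        "realistic": True, "time-bound": True,
    },
    "improved livelihoods of all rural households": {
        "specific": False, "measurable": False, "achievable": False,
        "realistic": False, "time-bound": False,
    },
}

for indicator, judgements in proposed.items():
    failed = [c for c in smart_criteria if not judgements.get(c, False)]
    verdict = "retain" if not failed else f"revise or drop (fails: {', '.join(failed)})"
    print(f"{indicator}: {verdict}")
```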

From this last step, the evaluator may return to earlier stages of the analysis. One should keep in mind that indicator formulation is an iterative procedure. Often, the formulation of indicators is in itself already part of the evaluation, especially if done in collaboration with stakeholders.

Using indicators for the assessment of information products and services

After the measurement of indicators, an interpretation of the data is necessary in order to arrive at a judgement concerning the social utility of the knowledge products and services. The information provided by the indicator only gains significance in a contextual analysis that clarifies why indicator data are at the level they are and why they have (or have not) changed.

The interpretation of indicators is related to step 1 in the sequence presented above. The difference is that interpretation takes the reverse direction, reconstructing the original concepts from the empirical information generated. Other information obtained during the evaluation process (outside of indicator measurement) will often turn out to be extremely important for actually understanding the results. Qualitative indicators offer more possibilities in this respect.

The particular indicators for the different categories of information products and services are covered by the respective guidelines.


Some pitfalls

When it comes to working with indicators, some words of warning are in order. One risk concerns the illusion of objectivity that indicators may create. In fact, indicators are tools that the evaluator has to use critically. Basing the judgement on standard indicators can be dangerous, as measurement without theory leads to an invalid attribution of the data or to useless information because of the missing causal connections. A similar mistake is to jump too quickly from analysis to measurement, without due consideration of the 'evaluatability' of a programme (see the criteria in the boxes 3 and 6). Sometimes, it can be better to formulate questions rather than indicators. Especially quantitative indicators, reducing reality to a few numbers, may be misleading without a clear (and rather qualitative) understanding of the context. This is particularly true in the field of social learning, that is characterised by gradual and long-term change, differing views of people, and the cultural dimension of social change.

References

CIDA 1999. Results-based management in CIDA: An introductory guide to the concepts and principles. Ottawa, Canada.

Jaeger, R. 1978. About educational indicators. In L.S. Shulman (ed.) Review of Research in Education, 6, pp 276–315.

Moldan, B. and Billharz, S. (eds). 1997. Sustainability Indicators – Report of the Project on Indicators of Sustainable Development. London: Wiley.



1.3

Production, validation, strategy and status of the toolkit

Karen Batjes (Technical editor) and Sarah Cummings (KIT)

Introduction

Managers of information services and projects are increasingly expected to evaluate their own information products without a firm grounding in evaluation and without suitable tools to help them on their way. In addition, the difficulty of understanding the models used in impact assessment makes their job harder, and this is reinforced by the multiplicity of approaches and the lack of clarity of concepts. Hence the need for working methodologies and comparative tools that are specific to the information sector. Without agreement on the choice of performance and impact indicators and on evaluation procedures, it is difficult to draw meaningful and comparable lessons from any evaluation exercise.

To try to fill this gap, CTA and ISNAR have worked on developing the manual ‘Evaluating information: a letter to a project officer’ (Mook 2001), which goes some way towards demystifying the process of evaluation for information professionals. In the LEAP IMPACT community, there was a general feeling that this should be further complemented by simple tools which can guide the practitioner through the process. For example, the need for the development of ‘best practice’ in the field of the evaluation of information was noted at the Bonn workshop, as was the need to ‘facilitate the development and use of practical and cost effective methods for evaluating the performance and impact of agricultural information products and services’ (CTA 2002). However, the creation of the toolkit is informed by a joint understanding of the CTA/ISNAR manual, which is perceived as the starting point for the toolkit.

This paper describes the production, strategy, validation and status of the ‘smart tools’ toolkit. Guidelines for writing smart tools are presented, and a structure for the toolkit is proposed. The toolkit represents work in progress and plans are subject to amendment as the project develops. However, there is no doubt that the production of the ‘smart tools’ calls for a great degree of cooperation and commitment on the part of the writers.


Toolkit

Focus
The toolkit focuses on performance evaluation and not impact assessment. It aims to facilitate performance evaluation for self-assessment, motivated by self-learning and not by external donors. The emphasis will be on good project management practice. The tools are designed to form part of an easy-to-use toolkit, which will encourage information managers and practitioners to incorporate evaluation into projects, acting as a guide and management tool in the first instance. Later, as the culture of evaluation develops, it is envisaged that practitioners will be better placed to tackle the longer-term and more complex issue of impact assessment. The toolkit aims, in particular, to meet the needs of information managers or practitioners located in small or resource-poor information services, who do not have access to specialised evaluation expertise or experience. The toolkit also aims to help those practitioners in the North and South who are interested in carrying out performance evaluations.

Although the toolkit aims to support self-evaluation, it will be useful for information managers whose projects and programmes are undergoing review by an outside agency. It will help them to understand the approach taken by the evaluators and the complex terminology – and will hopefully assist in an empowering process related to the evaluation.

One of the sub-themes related to the toolkit was the need to develop a terminology accepted by the community, which could be used by others in the information field. The problems of this difficult terminology have been identified by Khadar (2002). To overcome these problems, it was decided that a glossary should be developed.

Format
The conception of the smart tools came from a number of publications in the development field. These include the ‘RAAKS resource box’ (Engel and Salomon 1997) and a manual and tools developed by the African Network on Participatory Approaches (2000). Both of these are boxed sets which comprise a manual or methodology in book form, together with a set of cards called windows or tools. This model was also chosen for the ‘smart tools’, with the idea that users could pick up the tools, which would be on cards, and apply them without any background reading. Support materials and background information available in the associated box would back up the tools.

Objectives
The main objectives of the toolkit are to provide:

• Practitioners with clear, easy-to-use tools for carrying out an evaluation from the preparatory stage to the completion of the report and its dissemination
• The target groups with access to a body of knowledge which represents the accumulated experience of information practitioners and managers working in the area of the evaluation of information, and which is not otherwise available to a wider audience
• Access to tools designed for situations where the main purpose of the evaluation is performance assessment in order to improve the level of decision-making


Production and strategy

With respect to expectations governing the production of the toolkit, perhaps it is useful to look at the expected outputs of the workshop that relate to its production and strategy:

• Output 2 – Validation of the draft tools that practitioners can use to manage and evaluate the performance of specific types of agricultural information activities
• Output 3 – Strategy for future development and refinement of the tools and publication of the toolkit after the workshop
• Output 4 – A selection of teams to develop and refine the additional tools and a timetable drawn up for their development
• Output 5 – Formation of an Editorial Committee to follow up the process in Outputs 2 and 4

Clearly, the realisation of these outputs is important to the success of the toolkit. First, however, agreement from the workshop has to be reached concerning the proposed scope and structure for the guidelines.

General tool guidelines
The individual tools are designed to represent a concise guide to potential approaches and strategies for information practitioners and managers who are using the toolkit. The tools will have a standardised format, so that they can be integrated into a comprehensive kit.

Each tool should cover the following areas:

• It should be couched within the context of performance evaluation.
• What are its objectives?
• What is the use of the tool and what are the links with other tools in the kit?
• The conditional element should be built into the tool. For example, you can use the tool for a given purpose, but there may be other approaches or different ways of applying the tool.
• Methodology / procedures involved for this tool.
• List possible problems that may be encountered in using the tool.
• What are its advantages? What are its disadvantages?
• Timeline and responsibilities: who will be doing what and when?
• Who is responsible for the distribution of output?
• Examples of how the tools can be applied should be brought out in the case studies.
• Key terms and concepts and their definitions should be identified (these are to be fed into the concept and methodology tool – glossary).
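As a purely hypothetical sketch of how the standardised format listed above might be kept consistent across tools, the structure below captures the required areas as named fields. The field names and the example values are illustrative and are not part of the toolkit itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToolSheet:
    """Hypothetical record for one 'smart tool', mirroring the standard areas above."""
    title: str
    objectives: List[str]
    use_and_links: str                 # what the tool is for, links to other tools
    conditions: str                    # when the tool applies, alternative approaches
    methodology: List[str]             # steps or procedures involved
    possible_problems: List[str]
    advantages: List[str]
    disadvantages: List[str]
    timeline_and_responsibilities: str
    output_distribution: str
    key_terms: List[str] = field(default_factory=list)  # fed into the glossary tool
    resource_persons: List[str] = field(default_factory=list)

# Illustrative entry only; the content is invented for the example.
example = ToolSheet(
    title="P4: Questionnaire design",
    objectives=["help practitioners draft a short user questionnaire"],
    use_and_links="used with P3 (data collection) and P5 (data analysis)",
    conditions="suitable for literate audiences; interviews may work better elsewhere",
    methodology=["define the questions", "pilot the questionnaire", "revise and distribute"],
    possible_problems=["low response rates"],
    advantages=["cheap", "standardised responses"],
    disadvantages=["limited depth of answers"],
    timeline_and_responsibilities="project officer, two weeks",
    output_distribution="evaluation team and management",
    key_terms=["response rate"],
    resource_persons=["(lead writer contact details)"],
)
print(example.title, "-", len(example.methodology), "methodological steps")
```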


The language used in the toolkit must be clear, so that the steps involved in the evaluation process will be easy to follow. The use of bullets is encouraged to facilitate easy reading and enhance the presentation of the document. For each tool, contact details of writers, as resource persons, should be given. In addition, all collaborators on the tool will be listed, acknowledging each individual’s work.

Target groups
The toolkit aims, in particular, to meet the needs of:

• Information practitioners who are faced with the challenge of having to evaluate an information project.
• Those information managers or practitioners, working in resource-poor information services, who do not have access to specialised evaluation expertise or experience.
• Those individuals who do not have the time to access, or cannot access, the large and complex body of literature on the evaluation of information.

The process
At the time of writing, about 16 draft tools have been written in preparation for the workshop. The authors were identified based on their experience in the various information activities.

Toolkit writing requirements
The individual tools are designed to be a clear guide to potential approaches and strategies for information practitioners and managers. Efforts will also be made to coordinate the tools so that they can be integrated in a systematic way. The language should be clear and simple and the use of bullets is encouraged. The tools should be developed along the following lines:

• A description of the tool should be given;
• What are the objectives of the tool?
• What is the use of the tool?
• What is the methodology / procedure(s) involved for this tool?
• What are its advantages / disadvantages?
• Key terms and concepts and their definitions should be identified (to be agreed on and fed into the concept and methodology tool).

Acknowledgements
Lead writers and co-writers (collaborators) contributing to the development of each tool will be considered as the authors of the tool: their names will appear on the tool. In addition to this, the writers will be listed as resource persons that the users of the tool may contact if they have any further queries or can suggest ways in which the tool can be improved.


Resource persons
Contact details of writers, as resource persons, should be given for each tool.

Structure of the toolkit
The toolkit has four sections. The first two sections contain the tools which are useful in evaluating information products and services: preparatory work tools and process tools. The third section comprises tools for evaluating specific information activities. Finally, the case study tools will describe practitioners’ experience with specific case studies, emphasising lessons learned.

Section 1

These preparatory work tools (PW) lay the basis for the evaluation, identifying planning tasks such as determining the purposes, questions and methods of the evaluation. They comprise:

PW 1: Concepts and terminology (Glossary)
PW 2: SWOT analysis
PW 3: Logical framework
PW 4: Other methods

Section 2

Process tools (P) are used to help carry out the evaluation and cover initial orientation. They comprise:

P1: Terms of reference (TOR)
P2: Indicators
P3: Data collection
P4: Questionnaire design
P5: Data analysis
P6: Writing and dissemination of the evaluation report

The discussion of tools in Sections 1 and 2 is not expected to be extensive, given that adequate literature already exists to guide practitioners.

Section 3

This section looks specifically at various information services and how they can be evaluated in terms of their management and performance – for example, effectiveness/ efficiency/ relevance/ sustainability/ gender balance/ accessibility/ timeliness/ accuracy:

A1: Seminars
A2: Newsletters
A3: Web sites
A4: QAS
A5: Libraries
A6: Networks


An outline on how to tackle the evaluation of specific information products and services is presented here:

• A description of the product/ service. • The objective of the evaluation. • Identification of key stakeholders. • Key points for the TOR including the reason for the evaluation, responsibility and accountability of those involved. • Appropriate methods to be used to determine its performance. • Indicators to be used. • Data collection methods. • Strategically selecting who receives the report.
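By way of illustration only, the outline above could be recorded as a simple checklist for each product or service before the evaluation starts. The sketch below assumes a small Python record is an acceptable way to capture it; every field name and example value is hypothetical and is not part of the toolkit itself.

# Hypothetical sketch of the evaluation outline as a simple record.
# Field names and example values are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationOutline:
    description: str              # description of the product/service
    objective: str                # objective of the evaluation
    stakeholders: List[str]       # key stakeholders
    tor_key_points: List[str]     # reason for the evaluation, responsibility, accountability
    methods: List[str]            # methods used to determine performance
    indicators: List[str]         # indicators to be used
    data_collection: List[str]    # data collection methods
    report_recipients: List[str]  # strategically selected recipients of the report

outline = EvaluationOutline(
    description="Quarterly agricultural newsletter",
    objective="Assess relevance and timeliness for extension workers",
    stakeholders=["editorial team", "subscribers", "funding agency"],
    tor_key_points=["reason for the evaluation", "who is responsible and accountable"],
    methods=["readership survey", "focus group with extension workers"],
    indicators=["demand", "user satisfaction", "timeliness"],
    data_collection=["questionnaire", "informal feedback"],
    report_recipients=["information service management", "funding agency"],
)
print(outline.objective)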

Section 4

This section looks at specific case studies relating to the different information programmes identified in Section 3.

CA1: Seminars
CA2: Newsletters
CA3: Web sites
CA4: QAS
CA5: Libraries
CA6: Networks

The case studies will be used to refine the tools. They are also selected to illustrate specific issues and provide insights into how the products/services can be further improved on or developed. They will also allow the practitioner to judge whether the lessons learned would apply in another context. The suggested guidelines are:

• Title of the product/ service and its location. • Situational analysis: summary of why the product or service was developed, what it was expected to achieve (objectives). • Components of the product/service. • How it achieved its results. • Lessons learned: what factors were crucial to its success and why? • What did not work and why? • How would you have done it differently based on the lessons learned.


• Names of people who can be contacted to learn more about the particular product/ service.

General

• Reference materials and recommendations for further reading are welcome.
• Tables/figures should be used where possible to guide the practitioner.
• The time required to undertake the various tasks for the tool, as well as resources required such as skills and materials, should be given.

Length of tool

The length of the tool is expected to vary from 1–4 pages.

Current status of the toolkit

The status of the toolkit at the inception of the workshop is as follows:

• Drafts of most of the tools relating to preparatory work and process tools have been completed (see Table 1).
• With respect to products and services, draft tools have been drawn up, apart from the one on networks.
• Three case studies have been completed. Workshop participants are expected to provide the additional examples.
• In terms of content: some of the tools need to be further developed, while others need to be tightened up and structured along the lines of what is required with respect to performance evaluation. For example, there was some confusion in the use of concepts such as performance and impact – performance was often seen as part of impact.


Table 1: Current status of the toolkit

Tool | Writer | Co-writer | Submitted

Preparatory work
Concepts and terminology (Glossary) | Margo Kooijman | Ibrahim Khadar | *
SWOT analysis | Lola Visser-Mabogunje | | *
Logic model | Byron Mook | | *
Other methods | | |

Process
Terms of reference | Sarah Cummings | | *
Indicators | Andreas Springer-Heinze | | *
Data collection | Lisette Gast | | *
Data analysis | Lisette Gast | | *
Questionnaire | Sarah Cummings | | *
Writing and dissemination of the report | Ibrahim Khadar | Karen Batjes | *

Products and services
Seminars | Modupe Akande | | *
Newsletters | Sarah Cummings | Bruce Lauckner | *
Web sites | Lisette Gast | | *
QAS | Simon Osei | Sarah Cummings | *
Libraries | Herman v. Dyk | | *
Networks | | |

Case studies
Seminars | | |
Newsletters | Christine Kalume | | *
Web sites | Maartje op de Coul | | *
QAS | Joel Sam | | *
Libraries | | |
Networks | | |

* – submitted

Validation of the toolkit

Questions pertinent to the whole validation process include:

• How can we know that the product that we are intent on producing will meet the needs of the intended practitioners?
• How do we know that the tools will work?

These concerns will be addressed by the workshop participants who represent an experienced group of information specialists. Many of the discussions and contributions that take place will be based on that experience.


Other critical questions that need to be asked include:

• Do we accept the current structure and classification or do we need a team of people to look at it more closely? • Is the scope of each tool adequate for the subject or purpose? • Are the proposed steps in carrying out the evaluation necessary? • What are the deadlines for the production of the toolkit?

The products elaborated on during the workshop will be made available to a wider group of practitioners who will use these tools and further develop and refine them, permitting their further validation.

Conclusion

A major challenge of this workshop is to provide clear ideas and guiding principles with respect to performance evaluation, producing a toolkit which is based on an agreed set of guidelines and structure; and one that is relevant to the needs of practitioners and easy to use. It is envisaged that an Editorial Committee, to be formed at the end of the workshop, will ensure that these goals are fulfilled.

The tools need to be tested in the wider community so that they can be fine-tuned and validated. Mechanisms have to be found to promote interactive feedback with practitioners in the field. The LEAP IMPACT platform already provides a useful basis for this work. Perhaps the group can also take up the challenge to get more involved, committing the support of the various organisations represented here today to take the toolkit even further.

References

African Network on Participatory Approaches. 2000. Village participation in rural development: manual and tools. Amsterdam, KIT/World Bank.

CTA. 2002. Assessing the performance and impact of agricultural information products and services. Summary report and recommendations of a CTA/IICD/LEAP-IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001.

Engel, P.G.H. and Salomon, M.L. 1997. Facilitating innovation for development: A RAAKS resource box. Amsterdam, KIT/CTA/STOAS.

Khadar, I. 2002. Evaluation concepts and terminology: a discussion note. In CTA. 2002. Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP-IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001 (CTA Working Document Number 8027).

Mook, B. 2001. Evaluating information: a letter to a project officer (CTA Working Document Number 8025; 53pp.).


Part 2

Working Group discussions


2.1

Discussions on the toolkit

The following presentation is based on discussions during the plenary and working sessions. The main objective was to arrive at a consensus on how to develop the toolkit further.

Focus

The following decisions were made at the workshop:

• The focus of the toolkit is on performance evaluation (not impact assessment).
• The toolkit is aimed at performance evaluation for self-assessment – motivated by self-learning. Emphasis is on good project management practice.
• The tools are designed to form part of an easy-to-use toolkit, which will encourage information managers and practitioners to incorporate evaluation into projects to act as a guide and management tool in the first instance. Later, as the culture of evaluation is developed, it is envisaged that practitioners will then be better placed to tackle the longer-term and more complex issue of impact assessment.
• The toolkit aims, in particular, to meet the needs of isolated information managers or practitioners located in small or resource poor information services, who do not have access to specialised evaluation expertise or experience. The toolkit also aims to help those practitioners in the North and South who are interested in carrying out performance evaluations.

Structure of the toolkit

The toolkit will be divided into six modules. The first module contains the glossary, followed by the Introductory module, which will present a general overview of the toolkit and how it is to be used.

The third module – the preparatory work module (PW), lays the basis for the evaluation. For example, preparatory work tools include:

PW1: Planning your evaluation (e.g. logframe, logic model)
PW2: SWOT analysis
PW3: Terms of reference (TOR)


The fourth module concerns process tools (P) which are used to help carry out the evaluation. The tools are:

P1: Indicators
P2: Data collection
P3: Questionnaire design
P4: Focus groups
P4: Interviews
P4: Case studies/stories/anecdotes
P5: Data analysis
P6: Writing, dissemination and utilisation of the evaluation report
P7: After action review

Module five looks at activities (A), more specifically at how information services and products can be evaluated:

A1: Seminars
A2: Newsletters
A3: Websites
A4: Question and Answer Services (QAS)
A5: Libraries

Finally, the sixth module will present a number of case studies (CA) relating to similar information services outlined above, which have been evaluated:

CA1: Seminars
CA2: Newsletters
CA3: Web sites
CA4: Question and Answer Services (QAS)
CA5: Libraries

General writing requirements

The individual tools are designed to represent a clear guide to potential approaches and strategies for information practitioners and managers who are using the toolkit. The main emphasis will be on clarity. Efforts will also be made to coordinate the tools so that they can be integrated into a systematic and coordinated whole.

The tools should generally cover the following areas:

• It should be clear what the tool is, and it should be couched within the context of the performance evaluation, i.e. how the tool can be used within the context of the evaluation and at what level.
• What are its objectives?
• What is the use of the tool? It should be linked to other tools in the kit.


• The conditional element should be built into the tool – for example, you can use the tool, but there may be other approaches or different ways of applying the tool.
• Methodology / procedures involved for this tool.
• Give possible problems that may be encountered in using the tool.
• What are its advantages? What are its disadvantages?
• Timeline and responsibilities – it needs to be known who will be doing what; without this it will be difficult to carry out the evaluation.
• Next steps – who is responsible for the distribution of output?
• Examples of how the tools can be applied should be brought out in the case studies.
• Key terms and concepts and their definitions should be identified (these are to be fed into the concept and methodology tool – glossary).

Style and language

The language used should be clear and simple, so that the steps involved in the evaluation process will be easy to follow. The use of bullets is encouraged to facilitate easy reading and enhance the presentation of the document.

Resource persons

For each tool, contact details of writers, as resource persons, should be given.

Refer to Table 1 for the list of writers, responsibilities and status of the toolkit.

Table 1: Responsibilities and status of the toolkit

Tools | Persons responsible | What needs to be done

Glossary – Concepts and terms | Margo Kooijman / Ibrahim Khadar | Writers to feed the terms into this tool
Introductory module | Sarah Cummings / Margo Kooijman / Ibrahim Khadar / Andreas Springer-Heinze | Overview and scheduling of tools – Sarah Cummings and Margo Kooijman; conceptual approach – Ibrahim Khadar; role of indicators – Andreas Springer-Heinze

PW Preparatory work tools
PW1 Planning your evaluation (logframe and logic model) | Byron Mook / Margo Kooijman | Introduction to include a paragraph on the models in the context of evaluation; differentiate between the models
PW2 SWOT analysis | Lola Visser-Mabogunje | Introduction to include a paragraph on SWOT in the context of evaluation
PW3 TOR | Sarah Cummings / Allison Hewlitt | Revision based on Working Group discussions

P Process tools
P1 Indicators | Andreas Springer-Heinze | To be written
P2 Data collection | Lisette Gast, Bruce Lauckner, Kingo Mchombu, Sarah Cummings | Agreed to look at how data collection, data analysis and questionnaire design fit together
P3 Questionnaire design | Sarah Cummings / Allison Hewlitt | Revise in line with recommendations coming out of discussions on focus groups, interviews and case studies
P4 Focus groups; Interviews; Case studies/stories/anecdotes | Maartje op de Coul / Christine Kalume / Lisette Gast / Daniel Thieba / Allison Hewlitt | To be further discussed and developed
P5 Data analysis | Lisette Gast to lead | Revise based on Working Group discussions
P6 Report writing, dissemination and utilisation | Ibrahim Khadar / Karen Batjes / Andreas Springer-Heinze | Need to develop a utilisation strategy to ensure that the results of the evaluation are used
P7 After action review | Allison Hewlitt / others to review | To be incorporated into this module

A Activities
A1 Seminars | Modupe Akande / Joel Sam | Revise based on Working Group discussions
A2 Newsletters | Bruce Lauckner / Sarah Cummings (comments: Christine Kalume) | Revise based on Working Group discussions
A3 Web sites | Lisette Gast / Ivan Kulis / Maartje op de Coul (comments: Boubacar Diaw) | Testing of tool to produce case study
A4 QAS | Simon Osei / Sarah Cummings / Herman v. Dyk | Revise based on Working Group discussions
A5 Libraries / Resource centres | Herman v. Dyk / Boubacar Diaw (comments: Christine Kalume, Margo Kooijman and others) | Herman to take the lead; to approach organisations (KIT, etc.) to test the tool
New: Networks | Ivan Kulis (to be confirmed) | To be developed
New: Radio programmes | Maartje op de Coul to follow up with colleagues and AMARC; Allison Hewlitt to follow up | To be developed
Multimedia | – | To be explored
Databases | – | To be explored

Case studies
CA1 Seminars | – | –
CA2 Newsletters | Sarah Cummings / Christine Kalume / Bruce Lauckner | Sarah Cummings to coordinate the three case studies
CA3 Web sites | Lisette Gast / Maartje op de Coul | Testing of tool to produce case study
CA4 QAS | Joel Sam / Simon Osei / CTA | To be further developed
CA5 Libraries | Margo Kooijman | Margo Kooijman to ask the KIT library to validate the activity tool developed for libraries; the International Information Centre and Archive for the Women’s Movement (IIAV) may also be interested

Editorial team | Sarah Cummings / Herman v. Dyk / Lisette Gast / Margo Kooijman / Karen Batjes |
Review process | Peer reviewers – LEAP IMPACT community |

General

• Reference materials and recommendations for further reading should be included.
• Tables/figures should be used where possible to guide the practitioner.
• The time required to undertake the tasks for the tool, as well as resources required such as skills, materials, etc., should be given.


Length of tool

The length of the tool is expected to vary from 1–4 pages. With respect to the case studies, page length is not a limiting factor; it is, however, expected that the studies will be documented as concisely as possible.

Detailed content and structure of the toolkit

Glossary

Concepts and terminology will form the glossary. Further development of this section is largely dependent on the content of the other tools and what the other writers feed into it.

Introductory module

• Overview of the toolkit.
• Look at project management to get the bigger picture with respect to the role of evaluation, as well as to put the toolkit into its proper context. There is a need also to look at the scale of the evaluation, e.g. the management context of the programme – what has come out from the overall programme, the project, or the service (i.e. the level at which the evaluation is taking place needs to be defined).
• Conceptual approach – explain how performance and impact assessment fit into the whole process. The person reading the material should have a clear idea about performance and impact evaluation at the different levels and how they are related. This should be linked to the operational framework for information management (the ERMi).
• This section should also discuss how to implement an evaluation – do you set it up with a consultant, or with a team within the organisation concerned?
• Role of indicators: the story needs to be told about the importance of selecting good indicators and the possible pitfalls in going about it in the wrong way.
• Scheduling of the tools: rationale behind the classification of the tools, e.g. placing SWOT before TOR.

Preparatory work module (refer to the outline given for the tools under the section 'General writing requirements') The preparatory tools lay the basis for the evaluation, identifying planning tasks such as determining the purposes, questions and methods of the evaluation.

Preparatory work tools (PW) comprise:

PW 1: Planning your evaluation (logframe / logic model / service models)
This section will document how the logic model, logframe and service models are used in planning and carrying out evaluations. It is possible that for some evaluators these models will never be relevant for the type of evaluations they wish to do; however, it is still an important section because these terms are used regularly in evaluation and evaluators should at least be aware of their meaning and uses.
PW 2: SWOT analysis


PW 3: TOR

Process module (refer to the outline given for the tools under the section 'General writing requirements') Process tools are used to help carry out the evaluation and cover initial orientation. Process tools (P) comprise:

P1: Indicators
P2: Data collection
P3: Questionnaire design
P4: Focus groups
P4: Interviews
P4: Case studies/Stories/Anecdotes
P5: Data analysis
P6: Writing, dissemination and utilisation of the evaluation report
P7: After action review

Activity module This section looks at the following information services and how they can be evaluated in terms of their management and performance.

A1: Seminars
A2: Newsletters
A3: Web sites
A4: QAS
A5: Libraries

An outline on how to tackle the evaluation of specific information products and services is presented below:

• A description of the product/ service. • Preparation – part of the project cycle ERMi (performance-impact evaluation of information programmes operational framework), what are the expectations; time for implementation. • Determine the objective of the evaluation – client satisfaction; frequency of use, relevance, user friendliness, sustainability, reach. • Define the sample group: Mapping and scope

- Identification of key beneficiaries, stakeholders

- Resources, infrastructure, content

- Service process


• Select methods:

- Indicators – sequence levels: at what level is the evaluation being carried out? If it is at the process and management level, then we are looking at the management, efficiency and financial sustainability of the service. If we are looking at the products and services level, then we are looking for indicators relating to relevance, demand, gender balance, user satisfaction, effectiveness, etc. (see Table 2).

- Data collection – questionnaire design (the traditional way of doing this is failing, so there is a need to explore other ways of gathering data, such as informal feedback and focus groups).

- Data analysis – the methods used are dependent on the type of data collected.

- Report writing – write with the target group in mind, taking into consideration the main message you want to convey.
• Reporting

- In developing the tool, careful attention should be paid to the context in which the product/service is being evaluated.

- Graphics and matrices should be used where possible to increase readability and understanding on how to perform the evaluation.

- The methods used to evaluate the tool should be based on the ones already provided in the toolkit – no new methods should be introduced at this stage.

- There should be flagging and signposting of the tools for easy referencing.

- Costs in terms of time and resources need to be worked out in more detail.

In streamlining one's thoughts with respect to the purpose of the evaluation, indicators and data required, a useful table to set up during the evaluation is given in Table 2.

Table 2

Purpose (focus) | Indicator | Data collected
Needs | |
Planning | |
Efficiency | |
Financial sustainability | |
Relevance | |
Demand | |
User satisfaction | |
Gender balance | |
Effectiveness | |
Accessibility | |
Timeliness | |
Accuracy | |

(The Indicator and Data collected columns are completed by the practitioner during the evaluation.)
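As an illustration only – not part of the toolkit itself – the matrix in Table 2 could be kept as a simple data structure while the evaluation is being planned. In the minimal Python sketch below, the purposes, indicators and data sources are hypothetical examples, not recommendations.

# Hypothetical sketch of Table 2 as a small planning structure.
# Purposes, indicators and data sources are illustrative examples only.
evaluation_matrix = {
    "Relevance": {
        "indicators": ["share of users who rate the content as useful to their work"],
        "data_collected": ["user survey responses"],
    },
    "Timeliness": {
        "indicators": ["average number of days between request and reply"],
        "data_collected": ["dates recorded in the enquiry register"],
    },
}

# Print a quick overview of what has to be gathered for each purpose.
for purpose, row in evaluation_matrix.items():
    print(f"{purpose}: indicators={row['indicators']}, data={row['data_collected']}")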


Case study module This section looks at specific case studies relating to the different information programmes identified in the Activity module:

CA1: Seminars
CA2: Newsletters
CA3: Web sites
CA4: QAS
CA5: Libraries

The case studies allow the practitioner to judge whether the lessons learned would apply in another context. The studies also offer a detailed application of the tool and act as a test of the tools developed in the previous section.

The suggested guidelines are:

• Title of the product/ service and its location • Situational analysis: summary of why the product or service was developed, what it was expected to achieve (objectives) • Components of the product/service • How it achieved its results • Lessons learned: what factors were crucial to its success and why? • What did not work and why? • How would you have done it differently based on the lessons learned • Names of people who can be contacted to learn more about the particular product/ service.


Part 3

CTA Resource book


3.1 The proposed CTA book on: Assessing the impact of information on development

Kay Sayce Publishing consultant

In mid-2002, preliminary discussions were held at CTA on implementing a project to produce a book on assessing the impact of information on development. CTA’s work in programme evaluation has grown significantly in recent years and its evaluation strategy has evolved towards greater and more systematic use of evaluation results for organisational learning and decision-making. It is against this background, and through extensive inter-agency collaboration, that the book on impact assessment will take shape over a period of about 2 years. Three phases are envisaged:

• Phase 1 (2002) involves conducting an extensive literature review, establishing a Steering Committee, Expert Group and ACP Partnerships Group, and attending workshops, with the aim of producing a proposed outline of the book’s approach and contents. • Phase 2 (2003) involves interviews in the field, keeping pace with the literature, liaising with the Committee and Group members, and putting together the first draft of the book. • Phase 3 (late 2003–2004) involves revising and finalising the work to produce the final draft, to be produced in print format, PDF format and interactive electronic format.

The Steering Committee, selected on a regional basis, consists of Modupe Akande (West Africa), Paul Engel (Europe), Anriette Esterhuysen (Southern Africa), Bruce Lauckner (Caribbean), Adiel Mbabu (Eastern and Central Africa) and Mohammed Umar (Pacific). The Expert Group consists of Rick Davies (MandE newsletter), Doug Horton (ISNAR), Christine Kalume (Healthlink), Kingo Mchombu (University of Namibia), Michel Menou (City University, London), Byron Mook (ISNAR) and Rob Vincent (Healthlink). In the ACP Partnerships Group are Arlington Chesney (IICA), Claudette de Freitas (Caribbean Agricultural Research and Development Institute), Marcus Hakutangwi (SCC-Zimbabwe), Isaac Minde (Eastern and Central Africa Programme for Agricultural Policy Analysis), Thiendou Niang (Regional Economic Partnership Agreement), Fernand Sanou (Cenatrin) and Papa Seck (Institut sénégalais de recherche agricole).

The subject of impact assessment is a controversial one, attracting a range of heated opinions as to what impact is and how to measure it. The proposed book will step into the middle of this debate. It will set out not to provide solutions but rather to pose questions – questions about approaches, concepts, issues, needs:

• Why does impact evaluation matter?

• For whom should it be done? • When should it be done? • How do you solve the attribution problem? • Are information-specific approaches needed? • Who should select the indicators of impact?

To discuss these questions and comment on the proposed outline of the book, nine people from the Steering Committee and the Expert Group (Akande, Engel, Horton, Kalume, Lauckner, Mbabu, Mchombu, Mook and Vincent) met during the Amsterdam workshop, together with Andreas Springer-Heinze (a leading evaluation expert from GTZ) and the book’s coordinators Ibrahim Khadar (CTA) and Kay Sayce; secretarial services were provided by Debbie Kleinbussink (CTA).

The book outline

The book outline presented to the meeting noted that the target audience consisted of development project managers, extensionists, researchers and practitioners in the evaluation and impact assessment field. The broad aim of the book was to contribute to the debate on impact assessment. Its flavour should therefore be dynamic, with its structure having the semblance of a lively debate and simultaneously lending itself to an interactive format. Thus, there would be both a printed book and, to carry the debate forward, an electronic version which would be subject to frequent revision and updating.

The book would be divided into three broad sections, tentatively entitled ‘Context’, ‘Controversy’ and ‘Concourse’. The ‘Context’ would be the preamble. It would look at the history of impact assessment trends and approaches, the current imperative to demonstrate programme impact, and whether specific approaches are needed to assess the impact of information.

The ‘Controversy’ section would account for the bulk of the book. It would revolve around case studies and analyses of these studies by various experts, drawing out the main impact assessment issues. Much work would be needed not only on selecting case studies and identifying experts to analyse them, but also on establishing what the main issues were. The book outline listed, as an example for discussion at the Amsterdam meeting, the main issues identified by International Food Policy Research Institute researchers:

• Scale, attribution and time horizon • Supply-side versus demand-side approaches • Importance of surprise • Choice of indicators • Case study/methodological research issues • Time lags • Ex ante and ex post assessments


The ‘Concourse’ section would bring together the strands, highlight the loose ends, comment on where current thinking seems to stand, link back to the contextual points where appropriate and then provide thoughts on the way forward. This would be the message section.

There would also be various annexes. These could include a description of the electronic version, a bibliography, CTA’s evaluation programme and products, a glossary of terms and concepts, an inventory of current research and studies, useful contacts, and perhaps something on how the book had been put together.

Some questions and some quotes

Presented with the book outline was a list of questions and quotes relating to impact assessment, to stimulate thoughts and discussion during the meeting.

A sample of the questions…

• What do we mean by impact assessment? The analysis of sustainable changes introduced by a project in relation to specific objectives? A study of overall changes, intended or not, possibly/probably related to the project?
• Can impact assessment studies be donor-funded without being donor-driven?
• Where is it best to start an impact assessment exercise – at the beginning (‘project-out’) or at the end (‘context-in’)?
• How can the need for accountability be balanced against the need for learning?
• How do you solve the attribution problem? Do approaches such as the ‘transformational path’, ‘pathway analysis’ or ‘outcome mapping’ go some way towards a solution?
• In information impact, should the object of assessment be information content or information channel or both? And how do you distinguish between ‘information’, ‘communication’ and ‘knowledge’?

…and some of the quotes

• If impact is significant or lasting, the key questions are: what has changed, is it significant, to what extent is it attributable to the project, and who decides this?
• Impact assessment should focus on results – not judgements.
• Linear cause-and-effect thinking contradicts the understanding of development as a complex process that occurs in open systems.
• Impact assessment is the back end of strategy. It is where we have to deal with the consequences of poorly digested theories of change. This is why, when we look at the contents of most logical frameworks, we ask ourselves ‘What is this?’
• The conflict between ‘top-down, results-based’ approaches and ‘bottom-up, client-based’ approaches seems contrived… Both approaches have their value. The art surely lies in developing the right mix.


• It is fruitless to ask – ‘What is the impact of information on development?’ Better to ask ‘What are the necessary conditions for information to be able to influence development?’

The Amsterdam meeting

Time constraints meant that the meeting focused almost entirely on the ‘Controversies’ section of the book outline presented by the coordinators.

The initial reaction to the outline related to the target audience and the aim of the book. The group felt that the audience needed to be more clearly defined and the aim of the book made clearer and more distinct from the rest of the pack. The question was also raised as to whether the book should restrict itself to impact assessment as opposed to impact orientation. And there was much comment on the need to be clear about the definitions of ‘information’, ‘communication’ and ‘knowledge’.

The coordinators elaborated upon the aim of the book. CTA’s involvement in evaluation studies in recent years has shown that when the subject of evaluation is raised, hot on its heels comes the subject of impact assessment, and brows become furrowed and opinions heated as everyone expresses different views on this subject. The aim of the book is to open up and look at these different views, to look at what people are arguing about, and why. Along the way, issues should become clearer and concepts better defined. It was stressed that the book was not intended to be a ‘how to’ manual, but rather a look at the issues behind the ‘how to’.

The discussion then turned to what was to become a central theme of the meeting – the relationship between donor and recipient. An understanding of this issue is crucial to moving the impact assessment debate forward. One participant noted that in a book on evaluating capacity development he is currently working on, the donor-recipient relationship has emerged as the main issue. Another participant said the book should tease out some of the perceptions that donors have about impact assessment and show how these perceptions conflict with reality. The world has changed and if the book addresses this change in so far as it affects impact assessment, particularly in relation to donor perceptions, it will be a worthwhile and valuable product.

The subject of perceptions led to suggestions for an important change in the book outline as presented. While the broad structure of the book was accepted by the meeting, with favourable comments upon its potential value, approach and adaptability to an electronic forum, the question of sourcing, selecting and analysing case studies caused concern. The participants’ contributions on the donor perception issue stimulated the group into looking for an alternative approach to using case studies, and two main points emerged:

• Instead of impact assessment case studies, think instead of impact assessment stories. • Seek out these stories not only from donors, but from all actors involved in impact assessment, at all the different levels of involvement.

Thus, the actors in impact assessment become the entry point, and their stories illustrating different experiences and perceptions of impact assessment will draw out the issues and illuminate the debate. Using a theatrical analogy, one participant likened the development process to a stage play, and the stakeholders in impact assessment – from donors, policy-makers, development managers and researchers to evaluation practitioners, extensionists, community leaders, farmers and the woman who sells vegetables in the street market – to the actors in the play, reacting in different ways to each other and to the plot.

It was considered that such an approach would expose not just the perception gaps between the actors involved in impact assessment, but also the illusions about impact assessment which cloud opinions and hamper collaboration and progress. It would also highlight the need to recognise that assessing impact is a complex and dynamic process, set in the context of constantly shifting goals and horizons. And making the actors’ domains the basis of the book would introduce what some participants considered a necessary practicality into the book. This approach might also help clarify the differences between information projects/programmes and information products/services, and elucidate the distinction between information, communication and knowledge.

It was proposed that the approach for gathering impact assessment stories should be to ask various actors in the different domains to recount their experiences of impact assessment. Managers’ stories about efforts to improve the evaluation of information products and services, for example, had the potential to be very powerful and revealing; the question to ask of those who conduct evaluations could be something along the lines of: What did you do to improve the impact of your organisation/ project/ service/ product, and how did you assess what impact you achieved? This should draw out the main factors – information, impact, evaluation, learning and intervention. Everyone in development tries to enhance impact, and so everyone has an idea, a perception, of impact; the book is the place to produce all these ideas and perceptions.

The stories should be gathered first-hand, through a journalistic exercise involving interviews and questionnaires. They should be rich and personalised, retaining their individual idiosyncrasies and reflecting the diversity of the actors and their domains. Issues relating to the use of real names of people and organisations would need to be borne in mind and dealt with sensibly and sensitively.

The idea of using experts to provide analyses of impact assessment studies was endorsed as a valuable component of the book, but for ‘studies’ now read ‘stories’. Let the experts draw out the issues, suggested one participant, but leave the solutions hanging. The interest of the book would lie in reading about people’s experiences of impact assessment and what the experts have to say about these experiences.

Moving on

With the original proposed structure of the book still in place by the end of the meeting, but its main content shifting from impact assessment case studies to impact assessment stories, and the work shifting from sourcing case studies from the literature to gathering stories from various actors in different domains in the development world, work on the book in 2003 will begin by taking these changes on board and planning the schedule and approach for gathering the stories. It was felt that as this work got under way, so the shape of the introductory (‘Context’) and concluding (‘Concourse’) sections of the book, and of the proposed electronic version, would evolve more clearly.

The literature review will continue in 2003 and liaison with the members of the Steering Committee, Expert Group and ACP Representatives Group, through e-mail, consultations and workshops, will intensify as the coordinators seek help in identifying likely candidates to interview for their stories and in selecting experts to analyse the stories. The enthusiasm for the project that was evident at the Amsterdam meeting augurs well for the collaboration that will be needed to put together a book that has the potential to make a valuable contribution to the impact assessment debate.


Part 4

Future of LEAP IMPACT: A review of allied initiatives


4.1

LEAP: Lessons learned, recommendations for next time

Allison Hewlitt Bellanet (PowerPoint presentation)

Past experience

In 1999, there was a Global Knowledge Partnership conference in Kuala Lumpur, Malaysia, which discussed issues on learning and evaluation. The conclusion there was that learning and evaluation raise a lot of issues and problems that are too complex for any one organisation or individual to solve. Consequently, the decision was made to join together with 56 organisations and individuals who expressed an interest in supporting and promoting learning, especially around the use of ICTs – so it was here that the initiative first began. How could this be achieved?

[Slide from the LEAP launching workshop: ideas raised included a baseline study, a survey on who is doing what, an online workspace, and a platform for interaction.]

It was decided that a baseline study should be done as an initial step.

This was to be done by:

• A literature review by a consultant, gathering all the relevant information – documents, surveys, etc.
• A survey sent out not only to the community at the Kuala Lumpur conference, but to all those interested in learning and evaluation around the use of ICTs.
• Creation of an expert database of all the names of people who could be contacted if certain questions needed to be answered.

Where would you put the information and how could you interact with people?

• Provide an online workspace to:
- Provide a virtual presence
- Act as a platform for interaction, supported by a mailing list
- House the expert database, with access to tools, documents and surveys as part of the workspace
• Plan an official launching workshop to announce the initiative and to strengthen the sense of community and the level of interaction; further face-to-face workshops are excellent ways of bringing people together.
• Create an ICT evaluation framework within the context of that community that could be tested and evaluated.

What actually happened?

Baseline study All the tasks were completed under the study:

• The literature review was completed by a consultant, but there was very little input from the community members.
• The survey was not valuable because the response rate was low.
• The expert database was based on an unsustainable way of gathering information about people: there was no mechanism to encourage people to update the database, so it quickly became out of date.

Online workspace • There was a shift from a single workspace to multiple workspaces, because other groups wanted to look at learning and evaluation and focus on certain areas like:

- EVALTICA – looked at developing an ICT evaluation framework

- LEAP IMPACT – looked at learning and evaluating information products and services


Official LEAP launch • This launch did not take place.

Testing and validating an ICT Evaluation Framework • Still underway by the EVALTICA group – draft v 5.2 released October 26, 2002.

Recommendations for next time

• Baseline study

- NURTURE the community first

- Work with the community to identify their needs and wants

- Ensure that agents of change will provide support

- Conduct surveys within the community

- Create profiles for all community members, not just the experts • Online Workspace

- Keep it simple

- E-mail based: ensures maximisation of participation

- Full time facilitation is critical

- Face-to-face meetings should be organised on a regular basis (at intervals of less than 18 months)


4.2

Support for online collaboration and communities at Bellanet

Katherine Morrow Program Associate Bellanet International Secretariat (PowerPoint presentation)

Bellanet

• International initiative supporting collaboration in the development community since 1995. • Helps development partners to use ICTs (mainly Internet) to achieve their goals. • Core programme and services:

- Access and training

- Dialogue: Bellanet provides advice and support on how to effectively use the Web- and e-mail-based tools for group dialogues and efforts towards the sharing of information

- Open development: The main aim is to ensure ownership of information and technology is in the hands of the development community. This is done through promoting open source software development; open technical standards for information exchange among development partners; open content through copyright free access – encouraging development agencies to release their content free of copyrights and making it available in digital format

- Learning and Knowledge management (KM): Bellanet organises workshops, supports the development of organisational KM strategies and plays a lead role in nurturing and participating in a Community of practice which shares its expertise on KM
• Based in Ottawa, Canada, a Secretariat of IDRC. Bellanet is supported by IDRC, CIDA, the Swedish International Development Cooperation Agency, the Danish International Development Assistance, and the Swiss Agency for Development and Cooperation.

Timeline

• In 1996, Bellanet hosted mailing lists to:

- Push the technology


- Include low-bandwidth Internet users

- Provide list hosting, and strategic and facilitation advice
• In 1999, workspaces emerged as a response to demand:

- Online space to share message archives, participants list, documents and links; e.g., the LEAP IMPACT workspace
• In 2002, efforts to simplify and expand access using:

- Discussion groups (Dgroups): a partnership (so far) of Bellanet, IICD, Oneworld, UNAIDS (the Joint United Nations Programme on HIV/AIDS) and the Institute for Connectivity in the Americas
- Postnuke: an open source community platform

Dgroups

These discussion groups bring a community of users interested in development together through a mailing list as well as shared resources such as documents, links, news and individual profiles (see Figure 1).

Postnuke

Postnuke is for groups that have different needs from Dgroups. It is not so much driven by messages, but rather by a news portal environment, which has more features around it. Bellanet customises this product for groups requiring it. Postnuke:

• Is an open source 'Web log/content management system' • Runs on Linux • Provides online space for established communities of practice • Is a vibrant Open Source project

- Active, global community of developers

- Stable, high quality product

- No need to reinvent the wheel

Examples of websites based on Postnuke are:

• Knowledge Management for Development Site – Km4dev (http://open.bellanet.org/km/).
• Community-Based Natural Resource Management Asia – CBNRMASIA (http://www.cbnrmasia.org/); see Figure 2.
• Sistema de Communicacion – Sipaz (http://www.sipaz.net/).


Figure 1: Example of what a Dgroup looks like

Figure 2: Example of a postnuke at CBNRMASIA


Lessons learned

• Invest (time and money) in facilitation, online skills and community building

- People, not the portal, are at the centre of the project. • Open Source works in principle and in practice. • Start simply, add functionality as needed

- Consult and plan together;

- Add functionality from the ground up. • Choose a manageable, accessible and extensible technology platform. • E-mail is still a very useful tool to use.

References

www.bellanet.org
open.bellanet.org/km
www.dgroups.org
www.bellanet.org/leap/impact
www.postnuke.org
www.cbnrmasia.org
www.sipaz.net
www.itrainonline.org


4.3

An introduction to LEAP IMPACT: A collaborative approach to developing evaluation practice in the information sector

Sarah Cummings KIT

Social actors are continuously, either spontaneously or in a more organised way or both, building relationships with each other to create opportunities for joint learning, increasing their understanding and improving upon current practices. Engel 1997

Over time, this collective learning results in practices that reflect both the pursuit of our enterprises and the attendant social relations. These practices are thus the property of a kind of community created over time by the sustained pursuit of shared enterprise. It makes sense, therefore, to call these kinds of communities communities of practice. Wenger 1999

Since the 1990s, the role of networks or communities made up of non-governmental development organisations (NGDOs) has received increasing attention in the literature. Such networks, including so-called ‘communities of ideas’, ‘communities of practice’ or ‘communities of purpose’ have been used to upgrade the quality of the activities, outputs and impact of these NGDOs; to facilitate a collective learning process; and to contribute to a ‘shifting up’ of development activities to an international audience (Engel 1997). Saunders (2000), in his paper on the RUFDATA1 methodology for evaluation, argues that it is possible to conceptualise evaluation as a series of knowledge-based practices. Central to Saunders’ approach is the notion of practice or ‘meaning creation’ among groups of evaluators. He uses the terminology elaborated by Wenger (1999) and designates these groups as ‘communities of practice’. Saunders attempts to describe the process by which communities of practice develop a shared understanding of evaluation. This learning process has also been described by Patton as ‘individual changes in thinking and behaving that occur among those involved in evaluation as a result of the learning and behaving that occur during the evaluation process’ (1998, cited in Saunders).

1 The RUFDATA framework, developed by Murray Saunders of the University of Lancaster, United Kingdom, for the British Council, is an acronym for the procedural decisions which shape evaluation activity in a dispersed office: Reasons and purposes; Uses; Focuses; Data and evidence; Audience; Timing.


Against this conceptual background, LEAP IMPACT is a community of practice of information practitioners and evaluation experts who are working together to try to improve the institutional performance of monitoring and evaluation practice related to information services, information products and information projects. LEAP IMPACT has a workspace on the Bellanet website at http://www.bellanet.org with an associated e-mail based discussion list. This technological platform is a highly appropriate medium for knowledge exchange related to evaluation. The potential of such a platform for this subject area has already been noted in the literature. As McConnell comments:

In such a new and challenging field as this, it is even more important that those working, or those interested in working, on information impact are able to communicate with colleagues, exchange experiences, ask advice and share findings. There are various ways of doing this, although a website would provide one useful form. (McConnell 2000)

The beginning

This joint LEAP IMPACT initiative, which started in May 2001, aims to improve the institutional performance of monitoring and evaluation practice related to information services, products and projects. It has the LEAP prefix because it has an online workspace on the LEAP section of the Bellanet website. LEAP stands for the Learning Evaluation Action Programme, which was originally set up as part of the Global Knowledge Partnership. LEAP concentrates on the information and communication technologies (ICTs) field and the communications media, primarily the Internet but also other media such as rural radio. It is very much underpinned by the knowledge management approach, aiming to stimulate organisational learning on evaluation and monitoring practice.

The IMPACT acronym breaks down, in a rather forced way, to Information Management, Performance and Communication Technologies. The name is, however, in some ways misleading because it gives the erroneous impression that the community is only concerned with impact assessment. Indeed, it considers the whole spectrum of evaluation, including performance evaluation and impact assessment. On the positive side, the IMPACT acronym does implicitly acknowledge the intellectual debt to the groundbreaking ‘Assessing the impact of information on decision-making’ research project, more commonly known as the Impact research project, which was supported by IDRC during the 1992–2000 period. This IDRC research project has been responsible for many new approaches to the measurement of impact in the information field (Horton 2000).

IMPACT differs from the other communities of practice on LEAP in that: • It is specifically concerned with the evaluation of information services, information projects and information products, as well as ICTs. • It is not focussed on the Internet alone, although there will be some overlap in terms of approaches/individuals with the other communities. • It is open to all those concerned with the evaluation and monitoring of information.


Like the other LEAP communities, EVALTICA, PANTLEG, and GKLEAP, the workspace for the IMPACT community allows the group to exchange messages, post documents of interest, and share information about forthcoming events and news items. It, too, is strongly influenced by the knowledge sharing philosophy.

The ‘strategic alliance’ and the members

LEAP IMPACT is a joint initiative of:
• CTA
• IICD
• Bellanet
• GTZ
• ISNAR
• KIT

In this sense, as well as being a community of practice, IMPACT is also a more or less informal strategic alliance. For the institutional partners, the initiative is motivated by the fact that evaluation is becoming increasingly important as a tool for internal reporting, capacity building and external legitimisation. Without the commitment of the institutional partners who are able to commit resources, particularly staff time, it would be difficult for the community to be so active. Here, special acknowledgement should be made of Bellanet for the workspace it provides on the Website, and of CTA, IICD and KIT for their considerable support to the initiative and, in particular, the funding of face-to-face meetings.

The LEAP IMPACT community currently comprises some 106 members, located in both South and North. Members are largely either information practitioners or experts and researchers in the field of information. They are to be found in a wide range of institutional settings, including NGDOs, universities, international organisations, and research organisations. A number of the experts have considerable experience with the evaluation of information, including some key participants from the IDRC Impact research project and from UNESCO’s work in this area. The intellectual heritage from the Impact research project can, for example, be seen in papers produced by some of these experts (see Kanyunyuzi-Asaba 2002 and Mchombu 2002). Other experts come from the field of agricultural research, rather than the information community, and are conversant with the wider evaluation field (see Mbabu 2002). The practitioners include persons who are charged with monitoring and evaluation in their own organisations while, for the others, evaluation is not their main activity although they carry out occasional evaluations as part of their normal work.

Joint activities

Since the beginning in 2001, LEAP IMPACT has been active in a number of areas:

• Face-to-face meetings in workshops • Interaction on the workspace • Most recently, work on the smart tools


Two workshops have been key events in the life of this community: the Technical Consultation on ‘Assessing the performance and impact of agricultural information products and services’ which was held in Bonn, Germany, in October 2001; and this ‘Smart tools for evaluating the performance of information products and services: a writing and validation workshop’. These workshops will be referred to throughout the rest of this text as the Bonn and Amsterdam workshops, respectively. These face-to-face meetings have been crucial in keeping the momentum of the community going because they complement and reinforce the interaction via the Website and the e-mail discussion list. The workshops have been the occasions when the most work has been done in developing the community’s ‘practical consciousness’ and in developing concrete outputs such as papers. Further details of the Bonn workshop can be found in the summary report (CTA 2001) and the proceedings (CTA 2002). This current workshop is providing the impetus for the development of the ‘smart tools’, which are currently being elaborated by the community to help information practitioners self-evaluate their own information services.

The e-mail discussion list and the workspace itself have both seen considerable activity. This has included messages, an online newsletter, links to useful new resources on other websites, and posting of relevant documents. There have also been two e-conferences:

• The first one in September 2001, prior to the Bonn workshop which supported its deliberations (Cummings 2002a). • The second one in the summer of 2002 to examine and review the CTA/ISNAR manual (Mook 2001).

There is also a bibliography available on the website.

The challenges in evaluation

The LEAP IMPACT community is concerned with improving evaluation practice and developing common understanding. In the two years of its existence, it has come to recognize, and even address, some of the challenges facing evaluators, particularly those in a resource poor setting. Challenges which have been identified by the LEAP IMPACT community include:

• The need for manuals and tools.
• Difficulties in comprehending the terminology and concepts.
• The confusing number of frameworks and models.
• The diverse and dispersed literature.
• The need for a paradigm shift in the development field away from control evaluation to self-evaluation for learning.

Manuals and tools

Many information practitioners and managers are being expected to evaluate their information services. The impetus for this is either externally or internally motivated. Whatever the initial impetus, this is leading more and more managers to undertake self-evaluations without the necessary background in evaluation. In resource-poor settings, where information managers may not have access to the diverse literature to help them on their way, they will be thrown in at the deep end when undertaking evaluations. Even in resource-rich settings where information managers have access to the literature, time constraints will hamper understanding. Where a manager faces both time and resource constraints, they will have nowhere to turn. To overcome these problems, it was felt that a practical ‘how to’ manual should be developed (CTA 2002). This need was also identified at the IDRC London meeting in 1999, which marked the close of the Impact research project:

A practical ‘how to’ manual that could be used by all impact study planners and other key players should be produced. Horton 2000

To help meet this need, the IMPACT community has worked on the revision of the joint CTA/ISNAR manual produced by Byron Mook (2001). In addition, based on the manual, the IMPACT community is going a step further to provide a series of easy-to-use tools, the so-called ‘smart tools’, which will assist practitioners in undertaking evaluations. These two activities address the frequently expressed need for practical tools and ‘smart practices’ (CTA 2001).

Terminology and concepts

Evaluation concepts and terminology in the context of information for development are not clearly defined. This represents a clear challenge to an information practitioner who is trying to comprehend them. As Khadar has noted:

The multiplicity of concepts and terms currently employed by various specialists and consultants constitutes a serious constraint to the development of a coherent body of evaluation literature. Key evaluation terms have more than one meaning or more than one terminology applying to the concept. Various writers recognize this problem (Noel Boissiere, Meadow and Yuan, McConnell, etc.) Khadar 2001

For information practitioners, there appear to be a number of specific problems related to the terminology. Firstly, the sheer multiplicity of terms and concepts makes them difficult to understand. Secondly, confusion is compounded by the fact that these terms are often used in different ways. Thirdly, little has been done to develop definitions of these terms and concepts that apply specifically to information projects, products and services.

IMPACT has made a contribution to overcoming these problems by clarifying concepts and terminology in the context of information for development through discussion and through the production of relevant papers such as the one by Khadar cited above. In addition, the need for a glossary was noted at the Bonn meeting:

A glossary of terms should be developed which can be added to (ideally on the LEAP IMPACT website) so that it can be used as a common source for future reference. CTA 2001

As a result of this demand, there will be a glossary of evaluation terms, informed by an information perspective, in the forthcoming ‘smart tools’. It is envisaged that these will become standard definitions for information practitioners, ensuring, as far as possible, a common understanding of the language used in evaluation.


Frameworks

For many practitioners, the sheer number of frameworks is confusing. There is a need for an inventory of frameworks and methods indicating the relative strengths and weaknesses of the various tools. The Bonn workshop endorsed this approach (CTA 2001). In addition to the inventory, the workshop argued that more work needs to be carried out to determine whether a generic framework would be appropriate or whether a specific evaluation framework is needed. The possibility of standardising the existing frameworks by discipline should also be explored. One of the recommendations of the Bonn workshop was that:

The content of the evaluation frameworks is so rich; different frameworks analyse different conditions. The possibility of using a road map to guide practitioners in defining the goals of the evaluation should be explored instead of using a framework.

To this end, an evaluation road map has been presented to the Amsterdam workshop. This ‘road map’ is a guide to understanding other frameworks and, notably, makes a distinction between performance evaluation and impact assessment, indicating the institutional area in which these overlap.

Literature

The literature on evaluation and impact assessment of information is diverse and dispersed. As the 1999 London meeting identified:

An inventory should be made of other efforts of impact assessment to promote cross-fertilization of methodologies and results from all disciplines and sectors, not just the information sector. Horton 2000

McConnell also considered that:

There is a risk that this body of knowledge might be lost to those who follow. It would be a valuable achievement if the complete documentary record could be identified, copies sought out, a collection established at a designated point(s), and an annotated bibliography published. McConnell 2000

The Bonn workshop endorsed this need: ‘The inventory on evaluation frameworks presented at the workshop should be revisited, reviewed and additional frameworks added to it’ (CTA 2001). Although IMPACT has compiled an inventory of frameworks and methodologies (Cummings 2002c), further work needs to be done with respect to documenting the literature on the evaluation of information, particularly different case studies. The CTA-sponsored inventory of frameworks is also available as the ‘Bibliography’ on the IMPACT workspace.

Paradigm shift

Traditional thinking about evaluation is focused on evaluation for reasons of ‘control’, generally motivated by external agencies, particularly funders. As a result, information practitioners and other professionals are often fearful of evaluations, distrusting the motives behind them. For this reason, LEAP IMPACT is trying to concentrate on self-evaluation rather than control evaluation, representing a paradigm shift in the field of evaluation. Information managers who are prepared for self-evaluation will also have the skills to be more proactive if they later become the object of a control evaluation.


Concluding remarks

This short introduction to LEAP IMPACT demonstrates how its members are working, both individually and together, to improve evaluation practice. Through the exchange of experiences and approaches, and taking into account previous and continuing work in this area, LEAP IMPACT may well be able to meet some of the challenges facing evaluation in the information sector. To do this, however, continuing collaboration and commitment will be necessary.

References CTA. 2001. Assessing the performance and impact of agricultural information products and services: summary report and recommendations of a technical consultation. Bonn, Germany, 9– 12 October 2001. Wageningen, Technical Centre for Agricultural and Rural Cooperation, 34pp.

CTA. 2002. Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) Wageningen, Technical Centre for Agricultural and Rural Cooperation (CTA), 169pp.

Cummings, S. 2002a. An e-consultation on ‘Assessing the performance and impact of agricultural information products and services’. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p.23–30

Cummings, S. 2002b. Conceptual frameworks and methodologies used for evaluating agricultural information products and services. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p.56–67

Cummings, S. 2002c. Inventory: conceptual frameworks and methodologies used for evaluating agricultural information products and services. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p. 130–156

Engel, P.G.H. 1997. The social organisation of innovation: a focus on stakeholder interaction. Amsterdam, KIT/CTA/STOAS. 238pp.

Horton, F.W. Jr. 2000. Defining and assessing the impact of information on development: building research and action agendas. Ottawa, International Development Research Centre; The Hague, International Federation for Information and Documentation, 136pp.


Khadar, I. 2002. Evaluation concepts and terminology: a discussion note. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p. 51–55.

Kanyunyuzi-Asaba, J.F. 2002. Connectivity in Africa: Use, benefits and constraints of electronic communication. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p. 78– 88.

Mbabu, A. 2002. Principles of evaluation. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p. 41–50

McConnell, P. 2000. Building on IDRC’s Research Program on ‘Assessing the impact of information on decision-making’: a metasynthesis. In: Defining and assessing the impact of information on development: building research and action agendas (Edited by F.W. Horton Jr). Ottawa, International Development Research Centre; The Hague, International Federation for Information and Documentation, p. 73–97

Mchombu, K. 2002. Key issues on the characteristics of agricultural information. In: Technical consultation on assessing the performance and impact of agricultural information products and services. Proceedings of a CTA/IICD/LEAP IMPACT Technical Consultation, Bonn, Germany, 9–12 October 2001. (CTA Working paper 8027) p. 35–40

Mook, B. 2001. Evaluating information: a letter to a project manager (CTA Working Document Number 8025; 53pp.)

Patton, M.Q. 1998. Discovering process use. Evaluation 4(2) 225–233

Saunders, M. 2000 Beginning an evaluation with RUFDATA: Theorizing a practical approach to evaluation planning. Evaluation 6(1) 7–21

Wenger, E. 1999. Communities of practice: learning, meaning and identity. Cambridge: Cambridge University Press


4.4

The FAO resource kit

Byron Mook, ISNAR

What is the resource kit (RK)?

The RK is a series of training modules based on selected topics in information management (IM). The audience will be IM policy-makers, managers and professionals.

Each module will consist of 'units' and 'lessons'. An individual using the RK will be able to define a 'personal learning path' that will allow him/her to pick and choose among the various components.

The goal will be self-study. There will be no teachers, required examinations, or certificates. Each lesson will contain a concluding quiz that will allow the participant to evaluate his/her progress. Completion of a lesson will take no more than 25 minutes. The kit will be produced in Arabic, Chinese, English, French and Spanish.

Technology

The RK will be published both on CD-ROM and on the World Wide Web (WWW). It will not contain audio or video files and all graphics will be low-resolution. It will be designed to run well on a Pentium I with an 800x600 graphics capability.

Cost

The RK will be distributed free.

Structure

The RK will consist of four modules at the beginning, though more may be developed later:

• Management of Electronic Documents and Images.
• Management of Spatial and Statistical Data.
• Community Building and Electronic Networking.
• Investing in Agricultural Information: Issues and Options.

Development

The RK is an FAO/WAICENT initiative, though most content development is being done by a series of institutional partners.

For Module D, the lead partner is ISNAR, with active support from CTA. Other collaborators include Wageningen University and Research Centre, Eductra (an Italian instructional design company) and independent consultants.

Content of module D

A curriculum for Module D was developed at an international workshop hosted by CTA in October 2002. Six units are planned:

• The Enabling Environment for Improved IM
• Strategy and Planning for IM
• Inputs: The Acquisition and Mobilisation of Information
  - Books, Journals, Databases, Libraries, Consortia, WWW
• Outputs: The Uses of Information
  - Publishing, E-Learning, Distance Education
• New Organizational Structures for New IM
• Impact Assessment

It is to this module that LEAP IMPACT is being invited to contribute material.

First step

There will be a 'proof of concept', using the unit on Impact Assessment as an example. The goal will be to see if materials on a complex management subject can be created, formatted, and presented, economically and persuasively, in a self-study format requiring no more than two hours of participant time.

The primary content input will be the 'Letter to a Project Manager', developed jointly by CTA and ISNAR.


Part 5

Future direction of the toolkit, resource book and LEAP IMPACT


5.1

Future direction of the toolkit, resource book and LEAP IMPACT

From the preceding sections, it can be seen that the workshop provided an excellent forum to debate, discuss and formulate strategies and future follow-up activities to develop the toolkit, gather material for the resource book and promote LEAP IMPACT. This section presents the main decisions taken and future activities (planned and envisaged) for each initiative.

Toolkit

• Focus
• Structure: the toolkit will have six main sections:

- Glossary

- Introductory module

- Preparatory tools

- Process tools

- Activity tools

- Case studies – participants felt strongly that the methods used to assess performance should be limited to only those tools covered in the toolkit

• Tools

- The 'terms and concepts' tool should be divided into two tools: 'terms' and 'concepts'.

- The title of the tool containing the logframe and logic model should be changed to ‘planning your evaluation’, with ‘logframe’ as a sub-title.

- The library tool will need sub-tools for specific services, possibly making extra tools for different services, linked to different choices/clients. It will be further developed and refined for possible testing at KIT Library and IIAV.

- Coordinating role: Ms Gast will determine how process tools such as data analysis, data collection, face-to-face meetings, focus groups, interviews, questionnaire design, stories/case studies, and other methods fit together, and will change the sequencing accordingly. SurveyMonkey should also be included in this section.


• Immediate follow-up activities after the workshop

- Mrs Batjes is to provide an update of the guidelines for writing the tools as soon as possible after the workshop. Special attention should be paid to the fact that many of the tools are aimed at beginners/practitioners, not experts. There should be a ‘Tips’ section (in boxes) relevant for all tools. The tools should be cross-linked to show continuity and connectedness. Key terms should appear at the bottom of each tool.

- Writers are expected to follow the guidelines closely.

• Role of the Editorial Committee (see p 69 for list of members): When the tools are completed, they will be standardised and amended by the Editorial Committee. There will then be a process of testing, at which stage external individuals need to be involved to test the tools. The Editorial Committee will take the lead in production, testing and publication.
• Validation of the toolkit: The toolkit will be promoted and disseminated to various target groups for testing by LEAP IMPACT members. Further, CTA, KIT and IICD will actively approach partners from both North and South to test the tools.
• Publication and dissemination of the toolkit: In the first instance, the toolkit should be made available as a working document to all the participants. It should also be posted on the LEAP IMPACT workspace. It should later be distributed as hard copy and on CD-ROM.
• Time schedule:

- Deadline for the first draft of tools is 31 January 2003.

- The tools should be finalised by the Editorial Committee by 30 April. On 1 May 2003, they will go to testing.

- Testing should be completed by 1 July 2003.

- Editorial Committee to make further amendments to the tools based on feedback received from testing.

- Publication on CD-ROM/Web in 2003.

- Toolkit should be ready for printing by December 2003.

- Hardcopies of the toolkit should be available in 2004.

• Future development:

- The coverage of the tools should be expanded to include the following:
  ! Radio: Ms op de Coul will approach Oneworld Radio/AMARC, while Ms Hewlitt will contact BBC World Service in Afghanistan to discuss the possibility of developing such a tool; CTA can also test the radio programmes it supports for farmers in the ACP;
  ! Multi-media video is another possibility; and
  ! CD-ROMs (videos are not included) used in information services.
  ! The possibility of evaluating networks will be explored.

- Meetings:
  ! There should be other workshops or regional meetings held (if possible), to further the work currently being carried out.
  ! It is envisaged that after the toolkit has been tried and tested, an attempt will be made to address the question of the impact of information products and services. It was put forward that this work should be done within the context of another workshop.

Resource book

The book will have three broad sections – ‘Context’, ‘Controversy’ and ‘Concourse’ – as well as annexes.

• Context – the history of impact assessment trends and approaches, the imperative to demonstrate programme impact, and whether specific approaches are needed to assess the impact of information.
• Controversy – the bulk of the book, revolving around impact stories and studies, and analyses of these by experts, drawing out the main impact assessment issue/message; this section would also make up the bulk of the proposed e-format book, which would evolve after the print format book was published.
• Concourse – bringing together the strands, highlighting the loose ends, commenting on where current thinking seems to stand, linking back to the contextual points and providing thoughts on the way forward.
• Annexes (possibly including a description of the e-version, a bibliography, CTA’s evaluation programme and products, a glossary of terms, an inventory of current research, useful contacts, and something on how the book had been put together).

Other key decisions taken at the workshop that shape the book's content include:

• Clearly defining the book’s target audiences.
• Defining terms and concepts:

- used in the book, based on the different perspectives of the different actors. A representative from a donor agency is needed on the Steering Committee so as to capture that perspective;

- key words identified – information v. knowledge v. communication; impact, evaluation, learning and intervention.

• Aiming for a book which both opens up and looks at differing views on impact assessment;
• Putting the donor/recipient relationship at the heart of the discussion;
• Taking care in sourcing, selecting and reproducing appropriate stories, and in setting up the analyses of these stories and studies;
• Using impact stories as well as impact case studies, with the actors involved at all levels and in all domains as the entry point (the stories are what someone relates about their personal involvement in and views on an impact study);


• Setting up a Web ‘Dgroup’ to provide a discussion platform for the book, and linking it to other relevant sites such as LEAP IMPACT, IICD and the MandE e-newsletter.

LEAP IMPACT

The main concern here was how to strengthen LEAP IMPACT as a community. Areas identified for action included:

• Workspace and list

- Address the issue of access – how can ALL community members effectively participate in LEAP IMPACT?

- Develop a directory of members for download by asking all members to contribute information on the resources available to them.

- Structure the workspace according to activities/initiatives pursued.

- Conduct southern facilitation in both French and English.

• Process

- As a group or community, identify LEAP IMPACT’s objectives, modalities of membership and community identity.

- Find ways to more effectively pull out real life experiences of the members of the community (i.e. interviews, surveys).

- Operationalise LEAP IMPACT activity by addressing practical issues in development.

- Promote and use LEAP IMPACT as a pool of expertise on the evaluation of information.

• Content: The LEAP IMPACT community remains committed to addressing issues relating to performance and impact assessment; however, during the workshop participants felt that there were other areas in which the community could make a valuable contribution. The list below includes both current and new areas of concern:

- Evaluation

- ECD: Explore LEAP IMPACT's contribution to Evaluation Capacity Development

- Food security

- Impact assessment: continue to focus on impact (assessment) and on change

- Module D and the Resource book: LEAP IMPACT should be the reviewing mechanism for FAO's Module D and the Resource book;

- Toolkit: continue to develop the toolkit using the LEAP IMPACT platform

- Water: The LEAP IMPACT community is being asked to prepare papers/posters for the upcoming 'Sixth Water Information Summit', 9–12 September 2003, in The Netherlands (visit the website at http://www.waterweb.org). The contributions should focus on information quality and quality assurance.


• Facilitation: This is critical, and resources are necessary to ensure opportunities for face-to-face interaction. Further, given the limited funding available to run and maintain the LEAP IMPACT platform, the participants suggested that a team of members should be responsible for managing it on a rotating basis. In the first instance, Sarah Cummings and Christine Kalume have agreed to make time available to do this.


Annexes

After action review (AAR)

During the closing plenary session, Ms Hewlitt introduced the AAR as a way to evaluate the workshop. The advantage of the AAR was that it was simple to implement, providing an excellent opportunity for the participants to capture the lessons learned over the course of the workshop, which could be used for future workshops. Three questions were posed:

• What did we set out to achieve?
• What actually happened? What did not?
• What would you do differently next time?

- Specific actionable recommendations

What did we set out to achieve?

- key concepts;

- validation of the draft tools;

- strategy for future development;

- a selection of teams to carry the work forward;

- small Editorial Committee;

- conclusions on the future of LEAP.

Positive – Some quotes:

• ‘Panel discussion was very lively’
• ‘Hard work before hand saved time in faffing around.’
• ‘It wasn’t possible to get the drafts in advance.’
• ‘There was support for newcomers.’
• ‘The smart tools are going to be great for capacity building.’
• ‘Lots of people have gone the last mile to make this meeting a success.’
• ‘Positive to me. Tough and painful process, happy with the result. For me, it has become complete. It was hard work and we really achieved something.’

Negative – Some quotes:

• ‘For new people, particularly to the satellite meeting on Thursday morning, it was not very clear what they were supposed to be doing.’
• ‘Speakers [on this day] were not well enough aware of what was expected of them. They needed a clearer outline of what they were doing.’


• ‘There should have been an official invitation for the morning session with better communication.’
• ‘It was sometimes difficult to follow the facilitation. The facilitator should have used more visuals, rather than PowerPoint.’
• ‘It was really three different meetings with the book and the open day – this did create problems.’
• ‘It did allow us to kill several birds with one stone.’
• ‘We should have finished the meeting earlier.’
• ‘It was more cost effective to hold other events at the same time.’
• ‘The newsletter was great and really informative, but not everyone was aware that it had been produced.’


Workshop programme

Day one: 19 November

09.00 – 10.00 Registration and coffee

10.00 – 11.30 Opening plenary session
Welcome addresses by Professor Modupe Akande (Chairperson); Mr Hans van Hartevelt, Head of KIT-ILS; Ms Sarah Cummings, KIT

Workshop programme (Lisette Gast)

Evaluation: a discussion paper (Ibrahim Khadar)

PLENARY SESSION

Chairperson: Kingo Mchombu

11.30 – 11.50 Production, validation, strategy and status (Karen Batjes)

11.50 – 12.15 Tips (John Belt and Margo Kooijman)

12.15 – 12.30 Plenary discussion

Chairperson: Daniel Thieba

14.00 – 14.30 Indicators: a discussion paper (Andreas Springer-Heinze)

14.30 – 14.45 Division into two working groups

Examining the ‘preparatory work’ and the ‘process’ toolkits. Two Working Groups were formed. Note: the first person listed is responsible for writing the tool; the second person has a supporting/discussant role.

Preparatory work – WG 1
• Concepts/terminology (Margo Kooijman/Boubacar Diaw)
• Logical framework/methods (Byron Mook/Adiel Mbabu)
• SWOT analysis (Lola Visser-Mabogunje)


Process – WG 2
• TOR/questionnaire design (Sarah Cummings/Allison Hewlitt)
• Data collection/data analysis (Lisette Gast/Kingo Mchombu)
• Indicators (Andreas Springer-Heinze/Christine Kalume)
• Writing the report (Ibrahim Khadar/Karen Batjes)

17.15 Day One Wrap-up (Chris Addison) Brief review of the day.

Day two: 20 November

09.00 Review of draft toolkits within the context of these activities:
• Seminars (Modupe Akande)
• Websites (Lisette Gast)
• Q&A services (Simon Osei)
• Newsletters (Bruce Lauckner)
• Libraries (Herman van Dijk)

11.30 Case studies
• Seminars (Lola Visser-Mabogunje)
• Websites (Maartje op de Coul)
• Q&A services (Joel Sam)
• Newsletters (Christine Kalume)
• Libraries

14.00 Revision and validation of toolkits (based on case studies):

• Seminars (Akande/Visser)
• Websites (Gast/Op de Coul)
• Q&A services (Osei/Sam)
• Newsletters (Lauckner/Kalume)
• Libraries (Van Dijk)

16.00 Results of the first two days (Chris Addison)


17.30 End of session

Day three: 21 November

10.00 – 12.00 LEAP IMPACT Open Meeting (The future of LEAP IMPACT: A review of allied initiatives and a discussion)

Chairperson: Bruce Lauckner

• The CTA book on ‘Assessing the impact of information on development’ (Kay Sayce) (10 minutes)
• The FAO resource kit (Byron Mook) (10 minutes)
• A review of LEAP (Allison Hewlitt) (10 minutes)
• Water Information Summit 2003 and evaluation (Ingeborg Krukkert, IRC) (5–10 minutes)
• Questions
• A discussion of the future of LEAP IMPACT

14.00 – 14.40 LEAP IMPACT/VIIO Public Seminar

Chairperson: Ibrahim Khadar, CTA

Welcome

On behalf of KIT (Dr Jan Donner, President)
On behalf of CTA (Mr Carl Greenidge, Director)
On behalf of IICD (Ms Lisette Gast on behalf of Mr Jacques Stienen, Director)

LEAP IMPACT
An introduction to LEAP IMPACT and VIIO (Sarah Cummings, KIT) (10 minutes)
Support for virtual communities: an overview (Katherine Morrow, Bellanet) (10 minutes)
The evaluation manual and related initiatives (Byron Mook, ISNAR) (10 minutes)

Plenary discussion: 10 minutes

16.00 Panel discussion: Is evaluation of information really smart? OR Is evaluation of information in your organisation really smart?

Panel members
Dr Paul Engel, Director of ECDPM
Mr Carl Greenidge, Director of CTA
Ms Christine Kalume, Healthlink Worldwide
Dr Adiel Mbabu, Association for Strengthening Agricultural Research in Eastern and Central Africa
Mr Michael Polman, Director of Antenna Foundation

17.00 Reception

Day four: 22 November

09.00 – 12.00 Parallel sessions
• Future development of the toolkits (Karen Batjes/Sarah Cummings) and Evaluation of the workshop: an After Action Review (Allison Hewlitt)
• CTA Impact resource book (Ibrahim Khadar/Kay Sayce)

12.00 – 13.00 Plenary: Closure of the meeting


List of participants

Guest participants

Donner, Jan (Dr) Polman, Michael President Director Royal Tropical Institute (KIT) Antenna Foundation PO Box 95001 Postbus 1513 1090 HA Amsterdam 6501 BM Nijmegen The Netherlands Tel. +31 (0)24 3603407 Tel.: +31 20 5688 298 Fax +31 (0)24 6752848 e-mail: [email protected] e-mail: [email protected]

Engel, Paul (Dr) Director Workshop participants European Centre for Development Policy Akande, Modupe (Professor) Management Centre (ECDPM) Institute of Agricultural Research and Onze Lievevrouweplein 21 Training (I.A.R&T) 6211 HE Maastricht Obafemi Awolowo University The Netherlands P.M.B. 5029 Tel.: +31 43 350 29 00 Moor Plantation Fax: +31 43 350 29 02 Ibadan e-mail: [email protected] Nigeria Tel.: +234 2 8107322 Greenidge, Carl B. (Mr) Fax: +234 2 8107322 Director e-mail: [email protected] Technical Centre for Agricultural and Rural [email protected] Cooperation (CTA) Postbus 380 Coul, Maartje op de 6700 AJ Wageningen New Media Evaluation Manager The Netherlands OneWorld International Tel.: +31 317 467 130 Floor 17 Fax: +31 317 460 067 89 Albert Embankment e-mail: [email protected] London SE1 7TP United Kingdom Hartevelt, Hans van Tel: +44 (0)20 7735 2100 Head Information Services Fax: +44 (0)20 7840 0798 KIT Information Services Royal Tropical Institute (KIT) Cummings, Sarah P.O. Box 95001 Information 1090 HA Amsterdam Managerhttp://www.kit.nl/http://www.kit.nl/ The Netherlands KIT Information Services Tel.: +31-20-5688-686 Royal Tropical Institute (KIT) e-mail: [email protected] PO Box 95001 1090 HA Amsterdam The Netherlands Tel.: +31 20 5688 298 e-mail: [email protected]


Diaw, Boubacar Kalume, Christine Association ouest et centre-africaine de Information Production and Management Team recherche sur les systèmes de production et de Leader gestion des ressources naturelles (AOCA/RSP- Healthlink Worldwide GRN) Cityside, BP 186Sikasso 40 Adler Street Mali London E1 1EE Fax : +223 620 349 United Kingdom e-mail :[email protected] Tel.: +44 20 75391588 Fax: +44 20 75391580 Dyk van, Herman e-mail: [email protected] Assistant Director Library and Information Services Khadar, Ibrahim (Dr) University of the Free State (UFS) Acting Manager P.O. Box 301 Planning and Corporate Services 9300 Bloemfontein Technical Centre for Agricultural and South Africa Rural Cooperation (CTA) Tel.: +27 51 4012230 Postbus 380 Fax: +27 51 4306423 6700 AJ Wageningen e-mail: [email protected] The Netherlands Tel.: +31 317 467 159 Gast, Lisette Fax: +31 317 460 067 Policy Officer (Monitoring and Evaluation) e-mail: [email protected] International Institute for Communication and Development (IICD) Kooijman, Margo Juffrouw Idastraat 11 Manager Front Office P.O. Box 11586 KIT Information Services 2502 AN The Hague Royal Tropical Institute (KIT) The Netherlands P.O. Box 95001 Tel.: +31 70 311 73 11 1090 HA Amsterdam Fax:+31 70 311 73 22 The Netherlands e-mail: [email protected] Tel.: +31-20-5688-686 e-mail: [email protected] Hardon, Anne Information Manager Kulis, Ivan KIT Information Services European Centre for Development Policy Royal Tropical Institute (KIT) Management (ECDPM) PO Box 95001 Onze Lievevrouweplein 21 1090 HA Amsterdam 6211 HE Maastricht The Netherlands The Netherlands Tel.: +31 20 5688 298 Tel.: +31 43 350 29 00 e-mail: [email protected] Fax: +31 43 350 29 02 e-mail: [email protected] (8) Hewlitt, Alison Bellanet 250 Albert St., P.O. Box 8500 K1G 3H9, Ottawa Canada Tel.: +1 613 236 6163 ext 2393 Fax: +1 613 236 7230 e-mail: [email protected]


Lauckner, Bruce Osei, Simon K. Manager, Research and Development (Ag.) Assistant Project Co-ordinator Biometrician Ghana Agricultural Information Network Caribbean Agricultural Research and System (GAINS) Development Institute (CARDI) Institute for Scientific and Technical UWI Campus Information (INSTI) P.O. Bag 212 PO Box M. 32 St Augustine Accra, Trinidad and Tobago Ghana Tel.: +1 868 6451205/6/7 Tel: +233 21 778808 +1 868 6458120/1 Fax: +233 21 779809 Fax: +1 868 6451208 e-mail: [email protected] e-mail: [email protected] Sam, Joel Mbabu, Adiel Nkonge (Dr) Project Co-ordinator Technical Officer-Planning Ghana Agricultural Information Network Association for Strengthening Agricultural System (GAINS) Research in Eastern and Central Africa Institute for Scientific and Technical P.O. Box 765 Information (INSTI) Entebbe PO Box M. 32 Uganda Accra, Tel.: +256 41 320212/321885 Ghana Fax: +256 41 321126 Tel: +233 21 778808 e-mail: [email protected] Fax: +233 21 779809 e-mail: [email protected] Mchombu, Kingo (Professor) DIS-University of Namibia Springer-Heinze, Andreas (Dr) PB 13301 Innovation Specialist, Supra-Regional Project on Windhoek Rural Knowledge Systems Namibia Deutsche Gesellschaft für Technische Tel.: +264-61 2063641 Zusammenarbeit (GTZ) Fax: +264-61 2072444 GTZ-OE 4556, Zi. 1441 e-mail: [email protected] Postfach 5180 65726 Eschborn Mook, Byron (Dr) Germany Senior Research Officer Tel.: +49 6196 791441 Information Programmes International Service Fax: +49 6196 797162 for e-mail: [email protected] National Agricultural Research (ISNAR) Postbus 93375 Thieba, Daniel 2509 AJ The Hague 01 BP 6428 Ouagadougou 01 The Netherlands Burkina Faso Tel.: +31 70 3496180 Fax: (226) 34 24 60 Fax: +31 70 3819677 Tel: (226) 34 21 15 e-mail: [email protected] e-mail: [email protected] [email protected]


Visser-Mabogunje, Lola Krukkert, Ingeborg Project Assistant Information specialist Planning and Corporate Service IRC International Water and Sanitation Centre Technical Centre for Agricultural and P.O. Box 2869 Rural Cooperation (CTA) 2601 CW Delft Postbus 380 The Netherlands 6700 AJ Wageningen Tel.: +31-15-219 29 85 The Netherlands Fax: +31-15-219 09 55 Tel.: +31 317 467 142 e-mail: [email protected] Fax: +31 317 460 067 URL: http://www.irc.nl e-mail: [email protected] Morrow, Katherine Program Associate Resource persons Bellanet International Secretariat/IICD Addison, Chris Email: [email protected] Consultant, Communiq Tel: 31 (0)70 311 73 21 e-mail: [email protected] Fax: 31 (0)70 311 73 22

Belt, John Pels, W.J. (Ir) Scientific Officer IRC International Water and Sanitation Centre KIT/Royal Tropical Institute P.O. Box 2869 PO Box 95001 2601 CW Delft 1090 HA Amsterdam The Netherlands The Netherlands Tel: + 31-15-219 2950 Tel: +31 (0)20 568 8489/8234 Fax: +31-15-219 0955 Fax: +31 (0)20 568 8498 Mob: +31-6-19 164 195 e-mail: [email protected] e-mail: [email protected]

Sayce, Kay Editor Sayce Publishing Batjes-Sinclair, Karen Tetric Systems Ltd Rietveldlaan 32 West Hill House 6708 SB Wageningen 6 Swains Lane The Netherlands London N6 6QU Tel.: +31 317 426028 United Kingdom e-mail: [email protected] Tel: +44 020 8348 4110

Fax: +44 020 8348 9014 Participants of the Open Meeting (and e-mail: [email protected] public session) Ballantyne, Peter Zeppenfeldt, Thomas Manager, Knowledge Sharing Datheon Database Solutions bv International Institute for Communication and Agro Business Park 54 Development (IICD) 6708 PW Wageningen Juffrouw Idastraat 11 Phone : 0317 479720 P.O. Box 11586 Fax : 0317 479721 2502 AN The Hague Mobile : 06 4520 4836 The Netherlands Email : [email protected] Tel.: +31 70 311 73 11 URL: www.datheon.com Fax:+31 70 311 73 22 e-mail: [email protected]


Public session only Daane, J.R.V Afternoon 21 November International Centre for development oriented Allmand, Monica Research in Agriculture (ICRA) Head Library and Information Services Mailing address P.O. Box 88 International Service for National Agricultural 6700 AB Wageningen Research (ISNAR) The Netherlands P.O. Box 93375 e-mail [email protected] 2509 AJ The Hague Tel. +31 317 422 938 The Netherlands Fax +31 317 427 046 Tel: +31 (0)70 349 6175 e-mail: [email protected] Fax: +31 (0)70 381 9677 URL: www.icra-edu.org http://www.isnar.org e-mail: [email protected] Dam, Henk van Information Manager Browne, Nigel KIT Information Services Head, Library Royal Tropical Institute (KIT) Institute for Housing and Urban Development PO Box 95001 Studies (IHS) 1090 HA Amsterdam P.O. Box 1935 The Netherlands 3000 BX Rotterdam Tel.: +31 20 5688 573 The Netherlands e-mail: [email protected] Tel: +31 10 4021505 Fax: +31 10 4045671 Douze, Marjet e-mail: [email protected] Hoofd Informatieverzorging/adj directeur URL: http://www.ihs.nl Internationaal Informatiecentrum en Archief voor de Vrouwenbeweging (IIAV) Beelen, Koen Obiplein 4 International Agricultural Centre (IAC) 1094 RB Amsterdam Knowledge Management IAC The Netherlands WUR building 425, room 008 Tel. +31 20 6651318 P.O.Box 88 (Lawickse Allee 11) Fax: +31 20 6655812 6700 AB Wageningen e-mail: [email protected] The Netherlands URL: http://www.iiav.nl Tel:+31 (0)317 495 260 (direct) Tel:+31 (0)6 28260205 (cell phone) Feiertag, Servaas Fax: +31 (0)317 495 395 Foundation Venture Intelligence Africa e-mail: [email protected] (VI@frica) URL: http://www.iac.wageningen-ur.nl Lutmastraat 263-hs 1074 TZ Amsterdam Bos, Albert The Netherlands Treasurer, VIIO Mobile : +31 (0)6 17604246 e-mail: [email protected] E-mail: [email protected] Bruning, Evelijne URL: http://www.viafrica.org External Information Officer SNV Bezuidenhoutseweg 161 2594 AG The Hague The Netherlands e-mail: [email protected]


Ferguson, Julie Scherpenzeel, Hanneke van Programme Officer, Knowledge Sharing Internal Information Officer International Institute for Communication and SNV Development (IICD) Bezuidenhoutseweg 161 Juffrouw Ida Straat 11 2594 AG The Hague PO Box 11586\2502 AN The Hague The Netherlands The Netherlands e-mail: [email protected] Tel: +31 70 311 7311 Fax: +31 70 311 7322 Valk, Minke E-mail: [email protected] Information Manager KIT Information Services Mudde, Huub Royal Tropical Institute (KIT) Euforic PO Box 95001 e-mail : [email protected] 1090 HA Amsterdam The Netherlands Nielsen, Fleming (Dr) Tel.: +31 20 5688 344 Associate Researcher e-mail: [email protected] Technology and Agrarian Development Group Wageningen University Veen, Joke van de Leeuwenborch Building 201 Langegracht 44 Hollandseweg 1 3601 AJ Maarssen 6706 KN Wageingen The Netherlands The Netherlands e-mail: [email protected] Tel: + 31 317 4 82776 e-mail: [email protected] Voogd, Ella de WWW: http://www.sls.wau.nl/tao Senior Programme Officer Cooperation Programme Section Noordzij, Annette Department for Human Resource and Leprosy Information Services Institutional Development Leprastichting NUFFIC PO Box 95005 P.O.Box 29777 1090 HA Amsterdam 2502 LT The Hague The Netherlands The Netherlands tel. +31-20-5950530 Tel: +31(0)70 42 60 172 e-mail: [email protected] e-mail: [email protected]

Pugh, Lin Vonk, Tjalling Programme Manager, International Cooperation International Institute for Communication and IIAV International Information Centre and Development (IICD) Archives for the Women's Movement Juffrouw Idastraat 11 Obiplein 4 P.O. Box 11586 1094 RB Amsterdam 2502 AN The Hague The Netherlands The Netherlands tel +31-20-6651318 Tel.: +31 70 311 73 11 fax +31-20-6655812 Fax:+31 70 311 73 22 e-mail: [email protected] e-mail: [email protected] URL: http://www.iiav.nl


Vries, Tsjebbe de Chair, VIIO Antenna Foundation Postbus 1513 6501 BM Nijmegen [email protected]

Vreede, Verele de Information Officer WASTE Advisers on Urban Environment and Development Nieuwehaven 2012801 CW GOUDA The Netherlands Tel: +31-(0)182-522625 Fax: +31-(0)182-550313 e-mail: [email protected] URL: http://www.waste.nl

Additional participants attending resource book meeting Morning 22 November

Engel, Paul (Dr)
Horton, Doug (Dr)
Vincent, Rob (Dr)


Acronyms

ACP African, Caribbean and Pacific Group of States

ASARECA Association for Strengthening Agricultural Research in Eastern and Central Africa

CARDI Caribbean Agricultural Research and Development Institute

CMA-AOC Conference of Ministers of Agriculture of West and Central Africa

CIDA Canadian International Development Agency

CORAF Conférence des responsables de recherche agronomique africains

CTA Technical Centre for Agricultural and Rural Cooperation

DAC Development Assistance Committee (OECD)

DANIDA Danish International Development Assistance

DORA Dissemination of Reference Books on Agriculture (CTA)

DFID Department for International Development

ECART European Consortium for Agricultural Research in the Tropics

ECAPAPA Eastern and Central Africa Programme for Agricultural Policy Analysis

ECDPM European Centre for Development Policy Management

ECOWAS Economic Community of West African States

EU European Union

FAO Food and Agriculture Organization of the United Nations

FID International Federation for Information and Documentation

GAINS Ghana Agricultural Information Network System

GTZ Deutsche Gesellschaft für Technische Zusammenarbeit (German Agency for Technical Cooperation)

ICM information and communication management

ICT information and communication technology


IDRC International Development Research Centre

IDS Institute of Development Studies, University of Sussex

IICD International Institute for Communication and Development

IK indigenous knowledge

IM information management

IMPACT Information Management, Performance and Communication Technologies

ISNAR International Service for National Agricultural Research

ISP Internet service provider

KIT Koninklijk Instituut voor de Tropen (Royal Tropical Institute)

KM knowledge management

LEAP Learning and Evaluation Action Program

M&E monitoring and evaluation

MIS market information systems

NAMDEVCO National Agricultural Marketing and Development Corporation (Trinidad and Tobago)

NAS national agricultural system

NGDO non-governmental development organisation

NGO non-governmental organisation

NHRC Navrongo Health Research Centre (Ghana)

OECD Organisation for Economic Cooperation and Development

PF Preliminary Framework

PRAIS Programme for Agricultural Information Services

QAS Question-and-Answer Service

R&D research and development

REPA Réseau d' expertise en politiques agricoles


RK Resource kit (FAO)

SACCAR Southern Africa Centre for Cooperation in Agricultural and Natural Resources Research and Training

SADC Southern African Development Community

SDI Selective Dissemination of Information (CTA)

SPS Sanitary and Phytosanitary Measures (WTO)

UNESCO United Nations Educational, Scientific and Cultural Organization

WAICENT World Agricultural Information Centre (FAO)

WTO World Trade Organisation


