
UNIT 4 – Selecting Indicators and Collecting Data

Introduction

Welcome to the fourth unit of Monitoring and Evaluation in Health and Development Programmes. The unit covers the process of defining and calculating indicators. We also survey the range of options for data collection. This is the last unit to focus on planning the monitoring or evaluation process.

There are seven study sessions in this unit:

Study Session 1: What are Indicators?

Study Session 2: Selecting Indicators

Study Session 3: Indicators with Metrics

Study Session 4: Characteristics of Good Indicators

Study Session 5: Data Systems

Study Session 6: Selecting Data Types and Ensuring Quality

Study Session 7: Data Collection

In Session 1, we will define indicators in detail. In Session 2, we will guide you through the process of selecting indicators that relate to your objectives, while using the Results Framework. In Session 3, we will examine the calculation of indicators by using metrics, or formulas, for each indicator. We will also discuss the process of operationalising indicators and how this influences the selection of indicators. Session 4 focuses on applying a checklist of characteristics for good indicators. This enables you to critically evaluate the quality of your own indicators. Session 5 focuses on levels of data, e.g. service environment level data, while Session 6 addresses the selection of qualitative and quantitative data and the importance of ensuring that quality data is collected. Session 7 focuses on data collection processes.

Learning outcomes of Unit 4

By the end of this unit, you should be able to:

. Develop appropriate indicators for monitoring and evaluating a programme.
. Identify appropriate data to collect for monitoring or evaluating a programme.
. Describe methods that can be used to collect monitoring and evaluation data.
. Design and use data collection instruments for interviews and focus group discussions.
. Discuss steps involved in conducting interviews/focus group discussions.

Assignment

This unit will assist you in thinking through the data implications of your evaluation plan in Assignment 1.

The unit will also be helpful to you in selecting indicators and collecting data for the evaluation of a programme of your choice in Assignment 2.

Unit 4 – Session 1 What are Indicators?

Introduction

In this session we focus on the definition of indicators in the context of health care evaluation. This understanding will help you develop indicators for your own proposed evaluation programme.

A manual by Heywood & Rohde (2002), Using Information for Action, which is aimed at health workers at facility level, explains indicators as follows:

“Indicators are the tools that the DHIS [District Health Information Service] uses to convert raw data into useful information and to enable comparison between different facilities. … Indicators are generally defined as ‘variables that help to measure changes, directly or indirectly’. (WHO, 1981)”

In the context of the DHIS, indicators:
. Convert raw data into useful information;
. Are observable markers of progress towards defined targets;
. Are useful to describe the situation and to measure changes over time;
. Provide information about a broad range of conditions through a single measure;
. Provide a yardstick whereby institutions or teams can compare themselves to others doing similar work.” (Heywood & Rohde, 2002: 55)

Contents

1 Learning outcomes of this session
2 Readings
3 Defining indicators
4 Identifying indicators
5 Session summary
6 References

Timing

This session contains two readings and two tasks. It should take you about an hour to complete.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session you should be able to:

. Define an indicator.
. Apply your understanding of indicators by identifying examples.

2 READINGS

There are two readings in this session.

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Programme Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

Feuerstein, M-T. (1986). Ch 2 - Planning and Organising Resources. In Partners in Evaluation. Evaluating Development and Community Programmes with Participants. London: Macmillan: 23-28.

3 DEFINING INDICATORS

It is always helpful to start by defining a concept in your own words, thus bringing your own knowledge to the fore before reading further. So, start with Task 1.

TASK 1 – WHAT ARE INDICATORS?
a) What comes to mind when you hear the word “indicator”? Write down your own definition.

FEEDBACK
Defining indicators
An indicator informs us of the outcome of the process of intervention, i.e. what outcome has resulted from the intervention. It measures one aspect of a programme. It is a variable; in other words, an indicator is an aspect of the programme that varies or changes according to the specific conditions that pertain in, for example, the implementation of a programme.

Because it is a variable, an indicator describes what we expect to change as a result of our activities, i.e. of the intervention.

We will now go into more detail on this definition. Firstly, the purpose of indicators is to show what changes have occurred that are related to programme activity. Therefore, an indicator of that change will be something that we reasonably expect to vary. Its value will – or should – change from a given or baseline level at the start of the intervention to another value after the intervention has had time to make its impact. At that point, the variable – or indicator – is calculated again.

Secondly, an indicator is a measurement – i.e. of change. It gives the change a value, in units that are meaningful for the purposes of programme management. Since it measures the change between two points in time – the start of the programme and the end of the programme – these values can be thought of as the baseline (earlier) value and the follow-up (later) value. In other words, calculation of an indicator establishes the objective value for some factor of interest related to the programme’s goals at a given moment in time. Even if the factor itself is subjective, like the attitudes of a target population, the indicator is used to calculate its value objectively at a given time.

Thirdly, an indicator concentrates on a single aspect of a programme or project. It may concentrate on an input (e.g. the number of lectures given), an output (e.g. the number of people trained), or an overarching objective (e.g. the change in eating habits amongst community health workers). However, the indicator related to that factor will be narrowly defined in such a way that it captures that factor as precisely as possible. Note that a full, complete and appropriate set of indicators for a programme in a given context with specific goals and objectives will include at least one indicator for each significant element of the intervention.

We will use the term indicators to describe those programme measurements that are expressed in numerical terms, such as:
. a percentage, for example, the percentage of children who are underweight;
. a rate, such as the infant mortality rate;
. a ratio, for example, the number of nurses in relation to the number of patients.

Take a look at the reading by Bertrand et al (1996), noting in particular pages 29-31.

READING

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Program Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

Sometimes, the basis of your evaluation may be difficult to quantify, for example, if you wished to answer the question “What is the availability of supervision and support for staff?” Here, while the “support for staff” may be available, we could not say how much support is available, unless we rephrased the question into a quantitative form, e.g. “How many institutions provide support and supervision for staff?” Or we might decide to quantify the support in terms of hours, in order to be able to get a measurement. Remember the issues noted earlier about the differences between quantitative and qualitative approaches, both in relation to the items of interest and in relation to the data collected and how it is collected. That debate is relevant here, and you might wish to revisit it in Unit 3 Session 3.

The reading by Feuerstein (1986) continues the discussion of indicators and expands our understanding, distinguishing indicators in relation to the aspects of the programme they measure, e.g. impact indicators or indicators of coverage.

READING

Feuerstein, M-T. (1986). Ch 2 - Planning and Organising Resources. In Partners in Evaluation. Evaluating Development and Community Programmes with Participants. London: Macmillan: 23-28.

4 IDENTIFYING INDICATORS

Now that we have clarified what we mean by indicators, check your understanding by seeing whether you can recognise indicators in the case study.

TASK 2 – IDENTIFY INDICATORS

Read the case study in the box below.
a) Identify as many indicators as are given in the case study. They should be variables and measurements, and should focus on a single aspect of the programme.
b) If any of the measures do not qualify as indicators, change them so that they are expressed as a rate, a percentage or a ratio.

c) What changes do the indicators reveal in this intervention?

d) What role did the indicators play in this intervention?

BOX 1. EVALUATION OF THE PROGRAMME TO IMPROVE THE HOSPITAL MANAGEMENT OF MALNOURISHED CHILDREN BY PARTICIPATORY ACTION RESEARCH

T. Puoane, D. Sanders, A. Ashworth, M. Chopra, S. Strasser, D. McCoy

Objective: To assess the effects of participatory action research in improving the quality of care of malnourished children in rural hospitals in South Africa.

Design: Pre- and post-intervention descriptive study in three stages: assessment of the clinical management of severely malnourished children, planning, and implementing an action plan to improve quality of care, and monitoring and evaluating targeted activities. A participatory approach was used to involve district and hospital nutrition teams in all stages of the research.

Setting: Two rural first-referral level hospitals (Mary Theresa and Sipetu) in Mt Frere District, Eastern Cape Province.

Main measures: Retrospective record review of all admissions for severe malnutrition to obtain patient characteristics and case fatality rates, detailed review of randomly selected cases to illustrate general case management, structured observations in the paediatric wards to assess adequacy of resources for care of malnourished children, and in-depth interviews and focus-group discussions with nursing and medical staff to identify barriers to improved quality of care.

Results: Before the study, case fatality rates were 50% and 28% in Mary Theresa and Sipetu hospitals respectively. Information from case studies, observations, interviews and focus group discussions revealed many inadequacies in knowledge, resources and practices. The hospital nutrition team developed and implemented an action plan to improve the quality of care and developed tools for monitoring its implementation and evaluating its impact. In the 12-month period immediately after implementation, case fatality rates fell by approximately 25% in both hospitals.

Conclusion: Participatory research led to formation of a hospital nutrition team, which identified shortcomings in the clinical management of severely malnourished children and took action to improve quality of care. These actions were associated with a reduction in case fatality rates.

FEEDBACK

a) The potential indicators in the above case study are: (i) case fatality rates; (ii) the staff’s knowledge of how to achieve quality of care; and (iii) adequate resources for caring for malnourished children.

b) Earlier in this session, we mentioned that indicators are expressed as a percentage, a rate or a ratio. In this example, points (ii) and (iii) do not qualify as indicators. You could change them to comply with the criteria for indicators as follows:
(ii) the percentage of paediatric staff who demonstrate understanding of the ten steps in the management of malnutrition;
(iii) the percentage of the essential equipment required for care of malnourished children that is available.

c) The third question was: What changes do the indicators reveal in this intervention? The changes in the case fatality rates in this programme revealed that the quality of care in the two hospitals was improving. The key indicator, (i) case fatality rate, met the criteria of an indicator because it is a variable - that is, it varied or changed between the first measurements collected at baseline from the two hospitals (50% and 28%) and those collected in the months that followed. In the 12-month period immediately after implementation, case fatality rates fell by approximately 25% in both hospitals. This variable measured a single aspect of the programme, that is, the reduction in case fatality rates, which is one aspect of the impact of the intervention.

d) Finally, what role did the indicators play in this intervention? These indicators clearly showed that progress had been made as a result of the intervention; that is, an improvement in the quality of care resulting from the intervention was observed.

5 SESSION SUMMARY

Thus far, we have defined indicators. Hopefully this concept is now clear to you and we can move on to the process of selecting indicators. This is the focus of the next session.

6 REFERENCES

. Heywood, A. & Rohde, J. (2002). Using Information for Action: A Manual for Health Workers at Facility Level. Pretoria: The EQUITY Project & HISP, SOPH, UWC.

Unit 4 – Session 2 Selecting Indicators

Introduction

In this session, we discuss how we select indicators for monitoring and evaluation purposes. For the purpose of this process, we will use the Results Framework that was introduced in Unit 1 - Session 5.

Contents

1 Learning outcomes of this session
2 Readings
3 Reviewing the components of a Results Framework
4 The process of selecting indicators
5 Setting scores for evaluation indicators
6 Session summary
7 References and further readings

Timing

This session contains no readings and two tasks. It should take you about an hour and a half to complete.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session, you should be able to:

. Review the components of a Results Framework.
. Use a Results Framework to select indicators.
. Analyse the process of developing scores for indicators.

2 READINGS

There are no readings in this session.

3 REVIEWING THE COMPONENTS OF A RESULTS FRAMEWORK

Look back at the Results Framework that you developed during Unit 1 – Session 5. Let us review the process of developing a Results Framework. In developing the Results Framework, your goal is broken up into objectives. You then conceptualise the activities that would lead to these objectives. The implementation of these activities leads to outcomes or results, and these outcomes are defined in advance as indicators, which can then be measured.

What follows is an example of a Results Framework such as was used in Unit 1 of this module. This type of framework represents (in the form of a diagram) a systematic analysis of the strategy employed by the programme to lead towards achieving the desired goal. The framework shows the logical and causal connections between various elements of the programme. Following the arrows from the bottom of the diagram upwards, you can see how the results at each level feed or flow into one another until the goal is achieved. At the top, you will notice the goal that the programme is working to achieve, i.e. Increased use of Family Planning, Maternal and Child Health services, and HIV/AIDS preventive measures.

The second layer of the framework (i.e. from the top) shows two intermediate results whose achievement is expected to lead to the achievement of the goal. They are: (a) Availability of quality services and (b) Demand for services. The boxes below that (which we could call the third layer) show the results or outcomes that lead to the second level results.

We are now going to select indicators to measure each of these stages, which will enable us to confirm those relationships.

AN EXAMPLE OF A RESULTS FRAMEWORK

GOAL: INCREASED USE OF FAMILY PLANNING, MATERNAL AND CHILD HEALTH SERVICES, AND HIV/AIDS PREVENTIVE MEASURES

Intermediate results leading to the goal:
a) Availability of quality services, which is fed by the lower-level results:
   - Information and services increased
   - Practitioners’ skills and knowledge increased
   - Sustainable effective management
b) Demand for services, which is fed by the lower-level result:
   - Clients’ knowledge of reproductive and child health improved

4 THE PROCESS OF SELECTING INDICATORS

The aim of this process is to determine the measurable results that can be anticipated from elements of the programme strategy. We will now work through the process of developing indicators by following three steps.

4.1 Step 1

The first step in selecting indicators is to identify monitoring and evaluation objectives, a process which you should already have completed. See Unit 1 Session 7. Objectives have been expressed as results or outcomes in the Results Framework.

In one part of the Results Framework, the objectives that we want to achieve are:

. To ensure availability of quality services (which we will also refer to as Quality of Care). This will be achieved through:

- Increasing the availability of information to health personnel working at the selected clinics.
- Increasing the types of services offered to patients at the selected clinics.
- Empowering practitioners with skills and knowledge to diagnose and treat minor ailments.
- Providing continuous supervision to ensure effective management.

4.2 Step 2

From the objectives, we derive the activities that are required to achieve those objectives. Programme activities should be designed to achieve the small effects or results which will lead to the larger objectives. It is important that all the activities of the programme be logically connected with the objectives (the results). There does not have to be a one-to-one relationship for each. Some activities may have an effect on several results at different levels and some results may naturally be influenced by more than one single activity.

It is important that programme activities are closely aligned with intended programme results, otherwise it will be impossible to develop useful indicators or to perform an effective evaluation.

A SECTION OF THE RESULTS FRAMEWORK ABOVE ISOLATING RESULT (a)

The following diagram shows a subsection of the programme results from the previous framework, along with a list of some programme activities or interventions that programme planners might associate with these changes.

STEP 2: RESULTS AND ACTIVITIES/INTERVENTIONS

Results:
. Availability of quality services
. Information and services increased
. Practitioners’ skills and knowledge increased
. Sustainable effective management

Activities/Interventions:
. Development of tools for monitoring quality of care
. Management training for supervisors
. Clinical training for providers
. Community-based support/supplies
. Education Campaign programmes

Study the above Results Framework: in order to ensure quality of care, the programme manager needs to make sure that personnel providing services adhere to the planned activities (as shown in the programme framework), and that they have updated information and skills to provide the necessary services. They also need adequate resources to be able to provide quality of care. If all the above factors are attended to, the quality of care is more likely to be achieved.

4.3 Step 3

Once you have determined the concrete or measurable results that can be anticipated from the programme activities, the next step is to conceptualise how to detect whether improved Quality of Care, for example, has been provided. This is the process of deciding what you will measure to check these results. In other words, you will be developing indicators.

Each measurement must be discrete, that is, undertaken on one dimension of the results at a time. In general, more than one indicator can be used to measure a result. While it is a good idea not to overload a monitoring and evaluation plan with too many indicators, it can be risky to rely on a single

indicator to measure any significant effects of the programme. If the data for that one indicator becomes unavailable for some reason, or other problems occur, it will be difficult to make the case that you have had a significant impact on that result. In other words, some diversification tends to strengthen a monitoring and evaluation plan.

We will now develop indicators for the stated results, from the same section of the framework.

TASK 1 – PROPOSE INDICATORS FOR A PROGRAMME

Study the results and activities in the Results Framework and suggest indicators. In other words, decide what will show (or indicate) that this objective has been achieved.

FEEDBACK

Now check your answer with the examples in Table 1 below. You will notice that two indicators have been selected for each result. Indicator 1a aims to measure quality of care using a tool that was one of the activities planned in Step 2. What we have now done is to set the level of acceptability of quality of care at 85% or higher. This is called scoring and is discussed in the next section. The level of acceptability could have been set slightly higher or lower.

TABLE 1: AN EXAMPLE OF PROGRAMME ACTIVITIES AND RELATED INDICATORS

PROGRAMME OBJECTIVE 1: Ensure that quality services are available.
PROGRAMME RESULT: Availability of quality services.

INDICATOR 1a: % of facilities scoring 85-100 on a tool developed specifically to measure quality of care.
COMMENT: If a facility scores more than 85% on the evaluation of quality of care, this is an indication that most of the activities needed to ensure quality of care are being carried out.

INDICATOR 1b: % of facilities with at least one trained provider in each targeted service (in either a rural or an urban setting).
COMMENT: Having a trained provider in each of the services will ensure that quality care is provided.

PROGRAMME OBJECTIVE 2: Information (disseminated) and services are increased.
PROGRAMME RESULT: Information and services increased.

INDICATOR 2a: Number of Information Education Campaign programmes on the radio in the past year.
COMMENT: The number of health education talks given on the radio is an indication that information is being disseminated.

INDICATOR 2b: % of facilities providing all targeted services.
COMMENT: The % of facilities which provide the targeted services on an annual basis indicates that the services are increasing.
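To make Table 1 concrete, here is a minimal sketch in Python of how the values of indicators 1b and 2b might be computed from facility-level records. The record structure, facility names and list of targeted services are invented purely for illustration; they are not part of the module's framework.

# Minimal sketch: computing indicator values from facility-level records.
# Facility records and the list of targeted services are hypothetical.

TARGETED_SERVICES = ["family planning", "antenatal care", "child health"]

facilities = [
    {"name": "Facility A",
     "services_offered": {"family planning", "antenatal care", "child health"},
     "trained_providers": {"family planning": 2, "antenatal care": 1, "child health": 1}},
    {"name": "Facility B",
     "services_offered": {"family planning", "child health"},
     "trained_providers": {"family planning": 1, "child health": 0}},
    {"name": "Facility C",
     "services_offered": {"family planning", "antenatal care", "child health"},
     "trained_providers": {"family planning": 1, "antenatal care": 1, "child health": 2}},
]

# Indicator 1b: % of facilities with at least one trained provider in each targeted service.
has_provider_in_every_service = [
    all(f["trained_providers"].get(s, 0) >= 1 for s in TARGETED_SERVICES)
    for f in facilities
]
indicator_1b = 100 * sum(has_provider_in_every_service) / len(facilities)

# Indicator 2b: % of facilities providing all targeted services.
provides_all_services = [set(TARGETED_SERVICES) <= f["services_offered"] for f in facilities]
indicator_2b = 100 * sum(provides_all_services) / len(facilities)

print(f"Indicator 1b: {indicator_1b:.0f}% of facilities have a trained provider in every targeted service")
print(f"Indicator 2b: {indicator_2b:.0f}% of facilities provide all targeted services")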

In the diagram that follows, the full range of scored indicators is presented. Study the diagram and try to work out how the indicators were selected.

* Key term for Step 3 diagram: CBD = Community Based Development

Step 3 – A Section of the Framework with Indicators

QUALITY SERVICES ARE AVAILABLE
Activities:
* Develop guidelines for procedures.
* Develop a monitoring tool to monitor the implementation of guidelines.
Indicators:
1. % of facilities scoring 85-100 on the Quality of Care checklist (rural, urban).
2. % of facilities with at least one trained provider in each targeted service area (either rural or urban).

INFORMATION AND SERVICES ARE INCREASED
Activities:
* Develop appropriate messages.
* Decide on dissemination strategies.
* Implement dissemination campaigns.
Indicators:
1. # of Information Education Campaign programmes on radio in the past year.
2. % of facilities providing all targeted services.

PRACTITIONERS’ SKILLS AND KNOWLEDGE ARE INCREASED
Activities:
* Develop a training programme to train service providers in clinical care.
* Implement the training programme.
Indicators:
1. # of service providers who have completed clinical training.
2. # of CBD personnel who have completed training.

SUSTAINABLE EFFECTIVE MANAGEMENT
Activities:
* Empower supervisors with management and supervisory skills.
* Ensure that resources needed for services are available.
* Ensure that the work environment is conducive to quality of care.
Indicators:
1. % of supervisors who completed all training courses in management (national, district levels).
2. % of providers who report satisfaction with facility management and supervision practices, i.e. they are satisfied with the way in which supervision is provided by management.
3. Stocks continuously available for the previous 90 days, central Community Based Development (CBD) programme supplies.
4. Stocks continuously available for the previous 90 days, district CBD supply warehouses.

5 SETTING SCORES FOR EVALUATION INDICATORS

You will also need to decide what you and your stakeholders deem to be an acceptable level of performance. This is called scoring your indicators. For example, an indicator such as Quality of Care (QOC) does not suggest what would be an acceptable level of QOC. An evaluator would score an indicator

based on his or her own experience, or on international or national norms (gleaned from a literature review). Explicitly scored indicators facilitate comparability of data between programmes, for example.

What follows is an example of developing a tool to score indicators. It relates to quality of service or care, and is based on Maternal and Child Health Services. Think for a moment about measuring the quality of care in a Child Health context concerned with weight measurement of children. How do we evaluate the quality of care provided by community health workers (CHWs) who have been trained in growth monitoring and promotion? One way would be to collect data about their skills in performing this function. As the designer of the evaluation, you will have to decide what will be an acceptable score for performance. You will have noticed that in Table 1, we set the level of acceptable performance by CHWs at 85%. You could decide that 75% is sufficient, or in a particularly life-threatening situation, you may decide that nothing below 95% performance is acceptable. To score such indicators, you would have to be familiar with the procedure of weighing children as well as what constitutes an acceptable level of skill on the part of CHWs in this process.

Table 2 below represents the data collected in the process of a quality of care evaluation of growth monitoring and promotion. Study the table and then answer the questions in Task 2.

TABLE 2: WEIGHING PRACTICES OF CHWS IN A CHILDREN’S GROWTH MONITORING AND PROMOTION SERVICE (N = 18) (N = NUMBER)

ACTIVITY YES NO

Did the Community Health Worker: N % N %

Ask the mother to remove the child’s clothing? 16 88 2 12

Set the scale to 0 (calibrate)? 13 72 5 28

Ask the mother to calm and put the child on the scale? 18 100 0 0

Correctly read the scale? 16 88 2 12

Tell the mother whether her child has gained or lost weight, or stayed the same since the last weighing? 13 72 5 28

Interpret the curve to the mother? 13 72 5 28

Explain the purpose of growth monitoring to the mother? 10 56 8 44

TASK 2 – EXPLORE WEIGHT MONITORING DATA

Check your skills in reading the data in Table 2 through questions a-d, and then decide how these CHWs have fared in relation to the QOC indicator by answering question (e).

a) What percentage of community health workers asked the mother to calm the baby while putting them on the scale?

b) What percentage of CHWs read the scale correctly?

c) What percentage of CHWs explained the purpose of growth monitoring to the mothers or caregivers?

d) In which skill are the CHWs weakest?

e) Using our proposed scoring - that 85% and above constitutes an acceptable level of skills amongst the group of CHWs - how would you rate the quality of care in this facility?

FEEDBACK
a) 100% (N = 18) of community health workers asked the mother to calm the baby while putting them on the scale.
b) Correct reading of the scale was done by 88% of community health workers.
c) Only 56% (N = 10) of the community health workers explained the purpose of growth monitoring to the mothers or caregivers.
d) CHWs were the weakest in explaining the purpose of growth monitoring to the mothers.
e) Using the proposed scoring of 85% and above, the performance of CHWs is acceptable, and therefore the quality of care in this facility is good.

As a programme manager, you may decide to add up all the scores achieved in the tasks, and then calculate the overall percentage; or you may look at individual task scores according to how critical this task is in the whole procedure. For example, explaining the purpose of growth monitoring to the mother is very important because if mothers do not understand the importance of growth monitoring, they may not bring their children for weighing. This could increase the prevalence of malnutrition, as children who are showing growth faltering (losing weight) will be missed.

The same applies to telling the mother whether the child has gained or lost weight. This is a very important task: mothers need to be informed about the child’s progress. This emphasises the importance of knowing very well the programme you are evaluating, and the relative importance of the tasks against which Quality of Care is being evaluated.
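To make the scoring arithmetic concrete, here is a minimal sketch in Python using the counts from Table 2 (N = 18 observed CHWs) and the 85% threshold proposed in the text. The task weights are hypothetical, included only to illustrate the idea of weighting critical tasks more heavily; they are not part of the module.

# Minimal sketch: scoring weighing-practice observations against an 85% threshold.
# Counts of correct performance are taken from Table 2 (18 CHWs observed);
# the task weights are hypothetical and only illustrate weighting critical tasks.

THRESHOLD = 85   # acceptable level of performance, in percent
N_OBSERVED = 18  # number of CHWs observed

correct_counts = {
    "ask mother to remove the child's clothing": 16,
    "set the scale to 0 (calibrate)": 13,
    "calm the child and place on the scale": 18,
    "read the scale correctly": 16,
    "tell the mother about weight change": 13,
    "interpret the growth curve to the mother": 13,
    "explain the purpose of growth monitoring": 10,
}

# Hypothetical weights: critical tasks contribute more to the overall score.
weights = {
    "ask mother to remove the child's clothing": 1,
    "set the scale to 0 (calibrate)": 2,
    "calm the child and place on the scale": 1,
    "read the scale correctly": 2,
    "tell the mother about weight change": 2,
    "interpret the growth curve to the mother": 1,
    "explain the purpose of growth monitoring": 2,
}

# Per-task percentages compared with the threshold.
for task, correct in correct_counts.items():
    pct = 100 * correct / N_OBSERVED
    status = "meets threshold" if pct >= THRESHOLD else "below threshold"
    print(f"{task}: {pct:.0f}% ({status})")

# Unweighted overall score: all correct observations over all observations made.
overall = 100 * sum(correct_counts.values()) / (N_OBSERVED * len(correct_counts))
print(f"Unweighted overall score: {overall:.0f}%")

# Weighted overall score: each task's contribution is scaled by its weight.
weighted = 100 * sum(weights[t] * c for t, c in correct_counts.items()) / (
    N_OBSERVED * sum(weights.values()))
print(f"Weighted overall score (hypothetical weights): {weighted:.0f}%")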

Scoring some of your indicators is a necessary step that you should look out for in your own evaluation design. However, there is still more information to consider before rounding off the development of your own indicators.

6 SESSION SUMMARY

In this session, we have considered the process of developing indicators using a Results Framework as the basis for the process. In the next session, we add the concept of metrics to our understanding of indicators.

7 REFERENCES AND FURTHER READING

. USAID Center for Development Information and Evaluation. (1996). Selecting Performance Indicators. Performance Monitoring and Evaluation TIPS. Number 6. Washington: USAID:1-4.

Unit 4 – Session 3 Indicators with Metrics

Introduction

In this session, we add another dimension to the process of defining indicators. This dimension aims at increasing the specificity and clarity of indicators. We introduce the concept of metrics, used when we collect quantitative data. Metrics are precise numerical explanations of how the value of a quantitative indicator can be calculated.

We also introduce the process of operationalising indicators. This is the process of specifying exactly how the measurements will be taken. At the end of this session, you are encouraged to finalise your own indicators with metrics and to operationalise them. The session that follows, Session 4, provides further guidance on checking the quality of your indicators.

Note that much of the content of this session is most applicable in quantitative evaluations. However, take what you can to improve your evaluation practice in general.

Contents

1 Learning outcomes of this session
2 Readings
3 Operationalising indicators
4 Factors which influence the development of metrics
5 Session summary

Timing

This session contains two tasks and one reading. Because the second task focuses on your assignment, this session may take you about two hours to complete, that is, if you go through the activities and complete the relevant portions of your assignment. We strongly encourage you to take the time; it may save you time later.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session you should be able to:

. Draft indicators for a programme evaluation.
. Operationalise a set of indicators.
. Develop appropriate metrics for indicators.

2 READINGS

There is one short reading in this session.

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Programme Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

3 OPERATIONALISING INDICATORS

In this section, we address the issue of operationalising indicators, i.e., specifying how an indicator is to be measured. Whereas a metric is a formula for calculating the indicator, operationalising is the process of pre-specifying all the elements of the indicator. To operationalise an indicator is to decide how a given concept or behaviour will be measured.

Here is an example to illustrate how the indicators for the case study on the Primary Prevention of Cardiovascular Diseases were operationalised.

Remember that the objective of the training was: To build the capacity of the Community Health Workers (CHWs) in making healthy choices about the food they eat and the physical activity they engage in.

The indicators in question were:
. % of CHWs who consume food with high fat content more than three times a week;
. % of CHWs who are physically active, i.e. who exercise three times a week.

Here is the process of operationalising these indicators: “Foods with high fat content” was defined as including full cream milk, chicken with skin and red meat with untrimmed fat.

“Consumption of foods with high fat content” was defined as consumption of the above foods 3-4 times a week.

“Physically active” was defined as engaging in exercises for 30 minutes or more, three times a week.
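Purely as an illustration, the operational definitions above can be written down as explicit rules so that every data collector applies exactly the same yardstick. The sketch below is in Python; the field names (weekly_high_fat_meals, sessions_per_week, minutes_per_session) and the sample records are hypothetical, while the cut-off values come from the definitions in the text.

# Minimal sketch: operational definitions expressed as explicit, unambiguous rules.
# The survey field names and the sample CHW records are hypothetical.

# High-fat foods as operationalised in the text (listed here for documentation):
HIGH_FAT_FOODS = ["full cream milk", "chicken with skin", "red meat with untrimmed fat"]

def consumes_high_fat_diet(weekly_high_fat_meals: int) -> bool:
    """High-fat consumption: eats the listed foods three or more times a week."""
    return weekly_high_fat_meals >= 3

def is_physically_active(sessions_per_week: int, minutes_per_session: int) -> bool:
    """Physically active: exercises for 30 minutes or more, three times a week."""
    return sessions_per_week >= 3 and minutes_per_session >= 30

# Hypothetical survey records for three CHWs.
chws = [
    {"weekly_high_fat_meals": 4, "sessions_per_week": 1, "minutes_per_session": 20},
    {"weekly_high_fat_meals": 2, "sessions_per_week": 3, "minutes_per_session": 45},
    {"weekly_high_fat_meals": 5, "sessions_per_week": 3, "minutes_per_session": 30},
]

pct_high_fat = 100 * sum(consumes_high_fat_diet(c["weekly_high_fat_meals"]) for c in chws) / len(chws)
pct_active = 100 * sum(
    is_physically_active(c["sessions_per_week"], c["minutes_per_session"]) for c in chws
) / len(chws)

print(f"% of CHWs consuming high-fat foods three or more times a week: {pct_high_fat:.0f}%")
print(f"% of CHWs who are physically active (30+ minutes, three times a week): {pct_active:.0f}%")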

Can you think why we operationalise indicators? The reason is this: Indicators should be conceptually clear, unequivocal and unambiguous measurements. Every effort should be made to design and define indicators so that the way they will be operationalised, or put into effect, will be as transparent as possible. This is to reduce as much as possible the subjective element in interpreting what the indicator means. Otherwise, people involved in data collection will tend to make subjective judgments, or use unclear units, terms, or yardsticks for calculating and gauging indicator values, which would ruin – or muddy – the findings, making them less reliable.

Let’s look at each of these situations in more detail.

. Measurement of some indicator terms is inherently subjective, for example, quality, leadership, improvement, establishment of a supervisory or management system, implementation of a policy, networking, or advocacy. In some cases, the solution to such subjectivity would be to re-design the indicator to specify a more precise, singular dimension or result. However, sometimes subjectivities cannot be avoided, in which case very precise definitions must be agreed upon. Where measurement unavoidably requires an opinion from experts or others involved in the monitoring and evaluation process, careful thought and caution should be used when reporting the results, interpreting them, comparing them over time and using them in decision-making.

. Another pitfall in defining indicators is that local conditions can affect the measurement of an indicator. You would need to operationalise your indicators in terms of what data is available locally. For example, a particular health facility may define new acceptors (of contraception) as consisting of only women who for the first time ever begin using any (modern) contraception. At other facilities, they may define it as any client who starts any method at any time. If you do not operationalise your indicators or tailor them to the kinds of records that services for Family Planning keep on a continuing and accurate basis, it may not be feasible to access the data you need. The lesson is that sometimes you will need to tailor your definition to the data that is locally available.

. In some cases, indicators will be difficult to operationalise without a clear yardstick. For example, “the cost of one month’s supply of contraceptives” does not make clear how cost will be defined. Will this be per person or as an average? Should the average cost be weighted by the proportion of the surveyed (or statistical) population using each different method of contraception? For which month will contraceptive costs be calculated? Costs of contraceptives may vary over the course of the year, especially in areas of high or seasonal migration. They may also be affected by unstable or fluctuating currency values or economic inflation.

All these issues need to be addressed in the operationalising process. They need to be agreed upon by relevant stakeholders, recorded and used in every stage of monitoring and evaluation, including planning, implementation and interpretation or use of the results. All this ensures that the indicators will be useful to those who need to use them.

Take a look at the reading by Bertrand et al (1996), pages 33-34, to expand on your understanding of this process.

READING

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Programme Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

4 FACTORS WHICH INFLUENCE THE DEVELOPMENT OF METRICS

In defining indicators, the final step is to develop a metric for each indicator. A metric is, simply, a precise explanation of the data and a calculation (of the data) that will give the measurement or value of the indicator. In other words, it specifies the data that will be used to generate the value of the indicator, and how the data elements will be manipulated to arrive at a value.

Defining good metrics is absolutely crucial to the usefulness of any monitoring and evaluation process. A good metric clarifies a single dimension of the result that is being measured by the indicator. A good metric does this in such a way that each value measured for the indicator is exactly comparable to values measured at another time. You can be entirely confident that the values of the indicator at the baseline, at each time measurement, and in the final analysis will all be valid and comparable figures for gauging the degree and direction of effectiveness demonstrated by the project activities.

Indicator metrics are usually made up of a numerator (top number) that is divided by a denominator (bottom number).

Numerators are things we count: numbers of clients, infants immunised, new cases of TB, number of doctors, number of children who died, etc.

Denominators are the group with which the things we count are compared: total population, all births in a year, number of adults, number of clinics, total miles travelled, number of beds in a hospital, etc.

Indicator = (Numerator ÷ Denominator) × 100, expressed as a %

Here are four examples of metrics to help you understand the process of developing metrics.

EXAMPLE 1: CASE FATALITY RATE IN A PROGRAMME TO IMPROVE THE HOSPITAL MANAGEMENT OF MALNOURISHED CHILDREN.

In a particular hospital, the case fatality rates (indicator of quality of care in this programme) were calculated as follows:

Divide the number of children who died during a hospital stay (the numerator) by the total number of children admitted with malnutrition during a year (the denominator). For example, 40 children were admitted with malnutrition in 2001; of them, 10 died. We therefore divide 10 (the numerator) by 40 (the denominator), which equals 0.25. We then multiply the result by 100 to get a percentage, which constitutes the case fatality rate, i.e. the proportion of cases which died as a result of a particular condition, in this case malnutrition.

10∕40 x 100 = 25% (i.e. the percentage of children who died during the year).

The numerator = total number of children who died during admission (over the period of a year). The denominator = total number of children admitted with malnutrition (over the period of a year).

This value is comparable with the values collected at baseline because the same formula was used at baseline and during the follow-up after 12 months.
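As a minimal sketch only, the calculation in Example 1 can be written as a small, reusable function in Python; the function name percentage_indicator is ours and not a standard term.

# Minimal sketch: a percentage metric (numerator divided by denominator, times 100).

def percentage_indicator(numerator: int, denominator: int) -> float:
    """Return the numerator as a percentage of the denominator."""
    if denominator == 0:
        raise ValueError("denominator must be greater than zero")
    return 100 * numerator / denominator

# Case fatality rate from Example 1: 10 deaths among 40 children admitted
# with malnutrition during 2001.
deaths = 10
admissions = 40
case_fatality_rate = percentage_indicator(deaths, admissions)
print(f"Case fatality rate: {case_fatality_rate:.0f}%")  # 25%

# Reusing exactly the same formula at baseline and at follow-up is what makes
# the two indicator values comparable.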

NOTE: Some metric formulas are multiplied by 100, i.e. expressed per 100, as in the above example. (Recall: a % is a given number per 100, or in a 100.) Some are expressed per 1 000, 10 000, 100 000, etc. For example, the Maternal Mortality Rate: the number of women who die from complications of pregnancy is given per 100 000 live births. In some instances, an indicator may have a nationally or internationally agreed operational definition and metric (a standard indicator). Please check your local District Health Information System (DHIS) for the correct formulas for different indicators.

Returning to the Results Framework used in the last session, we show here how metrics would be developed for each of the indicators in the framework.

EXAMPLE 2: INDICATORS WITH METRICS

AVAILABILITY OF QUALITY SERVICES

Indicator 1: % of facilities scoring 85-100 on a Quality of Care (QOC) checklist (rural & urban).
Metrics:
1a. Numerator: # of rural facilities scoring 85 or better on a QOC checklist.
    Denominator: total # of rural facilities that were checked and scored.
1b. Numerator: # of urban facilities scoring 85 or better on a QOC checklist.
    Denominator: total # of urban facilities that were checked and scored.
1c. (An aggregate percentage could also be calculated.)

Indicator 2: % of facilities with at least one trained provider in targeted service areas (rural, urban).
Metric:
2a. Numerator: # of (rural, urban) facilities with at least one trained provider in Maternal and Child Health (MCH).
    Denominator: total # of (rural, urban) facilities in MCH.

Note that a single indicator may have more than one metric. Each metric may calculate the value for a sub-population, e.g. a rural or an urban population. For example, there are three metrics for Indicator 1 above because we are trying to evaluate the availability of services in both urban and rural settings. We can either calculate them separately or combine them using metric 1c.
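To illustrate the note about combining the rural and urban metrics, here is a minimal Python sketch with invented facility counts showing how metrics 1a, 1b and the aggregate 1c are computed from the same underlying data.

# Minimal sketch: one indicator, several metrics. Facility counts are invented
# purely for illustration.

rural_passing, rural_total = 12, 20   # rural facilities scoring 85+ on the QOC checklist
urban_passing, urban_total = 18, 25   # urban facilities scoring 85+ on the QOC checklist

metric_1a = 100 * rural_passing / rural_total                                    # rural only
metric_1b = 100 * urban_passing / urban_total                                    # urban only
metric_1c = 100 * (rural_passing + urban_passing) / (rural_total + urban_total)  # aggregate

print(f"1a (rural):     {metric_1a:.0f}%")
print(f"1b (urban):     {metric_1b:.0f}%")
print(f"1c (aggregate): {metric_1c:.0f}%")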

Different metrics can be developed for the same indicator, to address different areas of interest from a programmatic point of view.

For example, the two different metrics in Example 3 could be developed for the one indicator.

EXAMPLE 3: INDICATORS WITH METRICS

GROWTH MONITORING ACTIVITIES AT A PRIMARY HEALTH CARE FACILITY

UTILISATION OF GROWTH MONITORING AND PROMOTION SERVICES IS INCREASED

INDICATOR: # of caregivers who utilise the growth monitoring and promotion services.

METRICS:
Numerator: the number of caregivers who attend health education programmes related to growth monitoring and promotion.
Denominator: the number of children below the age of 6 in a given district.
AND/OR
Numerator: the number of caregivers who bring their children for growth monitoring and promotion services (over a year).
Denominator: the number of children below the age of 6 in a given district.

Example 4, which is based on the case study on primary prevention of CVD, shows that you need to be very specific about what you mean by each concept, e.g. exercise. It could mean those who exercise frequently, i.e. 3 times a week, or those who at least take some exercise. There would be two different metrics for these two different situations.

EXAMPLE 4: INDICATORS WITH METRICS

PRIMARY PREVENTION OF CARDIOVASCULAR DISEASE

ENGAGEMENT IN PHYSICAL ACTIVITY (EXERCISES)

Numerator = number of CHWs who exercise three times a week.
Denominator = number of all CHWs who attended training.
OR
Numerator = number of CHWs who at least take some exercise.
Denominator = number of all CHWs who attended training.

TASK 1 – DEVELOP A METRIC FOR AN INDICATOR

Khayelitsha is a black township in Cape Town, South Africa. The total population is about 800 000 to 1 million, of which about 500 000 are youth. The majority of the youth are not at school, nor are they employed, and the prevalence of HIV is about 34%.

Imagine that you are working in a programme aimed at increasing awareness about STIs, HIV and AIDS among the youth residing in Khayelitsha. Four hundred of the youth are enrolled in this programme.
a) Develop indicators to help you measure the effectiveness of the programme in increasing the knowledge of the youth who participate in your programme about STIs, HIV and AIDS.
b) Identify the numerator and the denominator.
c) Use a metric to develop as many indicators as you can.

FEEDBACK

Numerator = number of youth who demonstrate understanding of the causes of STIs, HIV and AIDS.
Denominator = total number of youth who participate in the programme, i.e. 400.

If you divide the numerator by the denominator and multiply the result by 100, you will come up with the % of youth in your target group who demonstrate understanding of the causes of STIs, HIV and AIDS.

You can then go on to develop similar indicators for knowledge of prevention, treatment, etc.

TASK 2 – DEVELOP YOUR OWN INDICATORS

You should now be in a position to develop indicators for your programme evaluation.
a) Develop indicators for your programme evaluation.
b) Operationalise these indicators.
c) Design metrics for the indicators where necessary.
Spend time on this now.

FEEDBACK

At this stage, check whether your indicators are clearly defined.

5 SESSION SUMMARY

In this session, we have presented the process of refining your indicators through operationalising them and developing metrics for their calculation. By now you should have developed a set of indicators for your programme to submit as part of your evaluation plan (Assignment 1).

In the next session, you have the opportunity to check your indicators against a set of criteria for good indicators. So work through the next session before submitting yours for your assignment.

Unit 4 – Session 4 Characteristics of Good Indicators

Introduction

In this session, we discuss the characteristics of good indicators. A good indicator is one which serves its purpose effectively by accurately and visibly measuring change within a single aspect of a programme. We will go through each of the characteristics, such as validity, in some detail. Each time you will be asked to apply them to your own programme context. In the course of this session, you should further develop a set of indicators for your programme evaluation. Use this session to reflect critically on your own selection of indicators for your programme.

Contents

1 Learning outcomes of this session
2 Readings
3 The characteristics of good indicators
4 Additional factors which influence indicator selection
5 Session summary
6 References and further reading

Timing

This session contains two readings and eight tasks. It should take you about two hours to complete.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session you should be able to:

. Apply a set of criteria to evaluate the quality of indicators.

2 READINGS

There are two short readings in this session.

USAID Center for Development Information and Evaluation. (1998). Guidelines for Indicator and Data Quality. Performance Monitoring and Evaluation TIPS. Number 12. Washington: USAID: 1-12.

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Program Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

3 THE CHARACTERISTICS OF GOOD INDICATORS

Think about this question: What makes a good indicator?

Most importantly, a good indicator must be a valid and reliable measure of the result.

Validity: An indicator is valid when it measures the result it is designed to measure, conceptually and in actual terms.

Reliability: An indicator is reliable when it minimises measurement error.

We have, however, provided a total of six criteria that you can use to assess your indicators. The four other desirable characteristics listed below all serve as aids to guide the design of indicators and metrics towards the ideal of validity and reliability.

Precision: Indicators should be operationalised with clear, well-specified definitions.
Independence: Indicators should be non-directional and capture a single dimension of a programme result, so that their values clearly depict a specific level of performance and programme effectiveness at a specific moment. Non-directional means not trying to capture the process of improvement or decline.
Timeliness: Indicators should be measured at intervals that are appropriate to the programme’s goals and activities.
Comparability: Where possible, indicators should be structured using comparable units, denominators and other features that will enable increased understanding of impact or effectiveness across different population groups or programme approaches.

TASK 1 – EVALUATE AN INDICATOR

Consider any one of the indicators presented in any of the previous sessions, or one of your own, and decide whether it is a good indicator. Use the criteria presented above to judge the indicator you have chosen.

FEEDBACK

Consider this evaluation of an indicator and compare yours with it.

ACTIVITY/RESULT: Information (disseminated) and service provision increased.
INDICATOR: 1. Number of Information Education Campaign programmes on the radio in the past year.

This indicator is valid because it matches the result, which was “the dissemination of information about health”. It is reliable because it is fairly straightforward to count the number of such programmes. It is precise because it measures the number of programmes and there is no ambiguity. It is independent because it measures only one dimension, which is not dependent on other factors. We cannot assess whether it is timely, but the logical time to use the indicator would be at the end of the year. It could be comparable with other evaluations.

In the next section, we will discuss each of the characteristics of good indicators and identify examples of good and problematic indicators.

3.1 Validity

Validity may be the most important characteristic of a good indicator. A valid indicator is one that accurately measures the phenomenon it is designed to measure. In other words, the indicator provides valid information about the target or result it aims to measure, in a direct and focused way. Clearly then, we can see that the validity of an indicator is an attribute that can only be evaluated in the context of the result or phenomenon it is aiming to measure.

It may sometimes be impossible to use what appear to be the most valid indicators to measure results, for practical reasons such as costs, or material or logistical obstacles. Such obstacles may prevent collecting all the data that would ideally be necessary. In that case, the next best thing is to use a proxy indicator.

A proxy indicator is one that does not capture the exact concept or single aspect of your activity’s result, but aims to measure a concept that

approximates the true or ideal indicator. For example, the type of material used to build a house in rural areas, and ownership of certain items such as a television set, refrigerator, etc. could be used as proxy indicators for socio-economic status.

In your monitoring and evaluation plan, you should make sure to note where you will be using proxy indicators, and your reasons for using them. At a later stage, it may become possible to collect other data for the original indicator. On the other hand, if uncertainty exists about getting access to particular data, it may be prudent to think of proxy indicators for which the data may be easier or cheaper to collect.

The USAID (1998) reading provides some important points to reinforce your understanding of these criteria. Validity is considered on page 6.

READING

USAID Center for Development Information and Evaluation. (1998). Guidelines for Indicator and Data Quality. Performance Monitoring and Evaluation TIPS. Number 12. Washington: USAID: 1-12.

TASK 2 – DEVELOP INDICATORS FOR YOUR PROGRAMME

Can you think of a valid indicator and a proxy indicator for your programme?

To assist you, look at the following example:

Survey questions on ideal family size are not generally thought to be very valid measures of fertility demand. Why do you think this is so? What are better indicators of fertility demand and why?

The answer to this question is that survey questions on ideal family size only indirectly indicate a person’s actual fertility demand. Stated intention to have more children is more valid as an indicator of demand because it is focused on the individual and her likely personal choices and decisions.

3.2 Reliability

Reliability or minimisation of measurement error is at least as important as validity. For one thing, there is no simple tactic, like proxy indicators, in cases where monitoring and evaluation planners face problems with indicator reliability. All indicators and metrics need to be examined critically to assess ways of reducing measurement error that might otherwise creep into programme monitoring and evaluation.

Measurement error is a critical issue because indicators are used to assess programme performance and may affect its future. If changes in indicator values merely reflect random or systematic errors in their measurement, conclusions about programme efficiency or effectiveness will not be accurate.

You will find measurement error discussed further on page 6 of this reading.

READING

USAID Center for Development Information and Evaluation. (1998). Guidelines for Indicator and Data Quality. Performance Monitoring and Evaluation TIPS. Number 12. Washington, DC: USAID: 1-12.

In monitoring and evaluation, problems in measurement commonly arise from sampling error, non-sampling error or subjectivity. In brief, sampling error occurs where the sample taken to estimate population values is not representative of that population. Sampling error can arise, for example, from non-random sampling that over-represents urban populations simply because access to them is quicker and cheaper.

Non-sampling error includes all other kinds of mis-measurement that may occur, such as courtesy bias, inaccurate or incomplete records, or non-response. An example of courtesy bias is when respondents give the answers they believe the interviewer wants to hear, rather than their true views.

Subjectivity also introduces measurement error. Subjectivity occurs where the indicator's value is influenced by the impressions, sentiments or values of the evaluator. Such subjective measurements will not be comparable over time or across geographical units or populations.

TASK 3 - ENSURING RELIABILITY

Can you think of ways in which non-sampling error and subjectivity could influence the reliability of your indicators?

FEEDBACK

An example of non-sampling error: survey estimates are not necessarily accurate reflections of abortion incidence, because of response bias, that is, the reluctance of respondents to report abortions.

An example of subjectivity is that many "quality indicators" (e.g. quality of care, leadership, supervision, etc.) call on the personal judgement of the data collector or analyst. Other examples which call for personal judgement are "policy environment" and "political progress" indicators.

3.3 Precision

An indicator should be defined in precise, unambiguous terms that clearly describe exactly what is being measured. Where practical, the indicator should give a relatively good idea of the data required and the population among which the indicator is going to be measured.

Precision seems like an obviously desirable attribute of indicators, but it deserves emphasis. Many indicators in common use are not clearly defined; for instance, “new user”, “knowledge of AIDS”, “quality of care”, or “trained provider” can all imply different things in different circumstances. The more you can spell out the indicator, the less room there will be for later confusion or complications.

TASK 4 – MAKING YOUR INDICATORS MORE PRECISE

Take a few minutes to write definitions of “knowledge of AIDS”, “quality of care”, “trained provider”. Ask a colleague to do the same and compare your definitions with theirs. You will probably find that they differ.

FEEDBACK

I expect you found definite differences between your definitions. If there is no precise or standard definition of an indicator, everyone will rely on her or his own understanding, and this may affect the reliability of the indicator. For example, "knowledge of AIDS" may mean different things to different individuals. It may mean:
. knowledge of the definition of AIDS - Acquired Immune Deficiency Syndrome;
. knowledge of causes, that is, that HIV causes AIDS;
. knowledge of signs and symptoms of AIDS, etc.

3.4 Independence

The characteristic of independence captures the idea that the value of the indicator should stand alone. It is best to avoid ratios, rates of increase or decrease (i.e. directional definitions). The baseline and subsequent values for the indicator will demonstrate that movement anyway.

At the same time, the ideal is to design indicators that measure one dimension of a more complex result at a time. For instance, rather than constructing some kind of index to measure "quality of care" in an aggregated, overall way, it is preferable to have separate indicators that each measure a different dimension of the phenomenon, e.g. time spent on counselling, technical skills, clients' satisfaction, or other key aspects of quality of care.

Increasing the independence of indicators contributes to their validity through clarifying the concepts used within the indicators. Independent indicators are also easier to interpret. If certain items in a ratio improve and others decline, the overall ratio will tell you very little about these possibly crucial internal elements. A set of more disaggregated indicators for a complex result, however, should provide a clear signal of which activities are performing relatively better than others, or if all indicators are on track.

TASK 5 - EVALUATE INDICATORS FOR INDEPENDENCE

a) Look at the following indicators. Which ones do not fit the criterion of independence? Why?
b) Evaluate the indicators you developed for your programme: do they meet the criterion of independence?

For each of the following indicators, note whether it is independent, and why it does or does not fit the criterion of independence:
. Sustainability
. Healthier families
. Health-seeking behaviour
. Length of hospital stay
. Knowledge of diseases or knowledge of health-seeking behaviour
. Policy environment
. Policy improvement
. Rate of weight gain
. Eating patterns
. Patient satisfaction
. Percent underweight individuals

FEEDBACK

The following indicators do not fit the criterion of independence: sustainability, healthier families, health-seeking behaviour, knowledge of diseases or knowledge of health-seeking behaviour, policy improvement, policy environment, eating patterns.

This is because they are too broad and need to be unpacked to be useful. In other words, they do not measure one dimension on its own. For example, sustainability could be unpacked into: available resources, provision of service and utilisation of services.

3.5 Timeliness

Timeliness affects not only the indicators themselves, but also the data collection schedule, in the context of reporting schedules. Since indicators are tools for measuring results, the data used to construct them should be collected after a period sufficient for programme activities to have made a measurable impact. We have mentioned this before in terms of the timeliness of embarking on a monitoring and evaluation process.

Again, although this may seem obvious or self-evident, data collection may often be affected by the government’s reporting schedule, your programme partners’ schedules, or those required by your own headquarters. To whatever extent is possible, these logistical factors should be taken into account in indicator design.

Here is an example to illustrate this issue: If your condom-social-marketing partners compile routine statistics every six weeks, it might be better to design an indicator counting condoms distributed in the last six weeks rather than in the last 30 days, for example. Other timeliness factors which should be taken into account would include the periods for which reporting sub-units, such as clinics, may compile statistics.

Another consideration might be the degree to which surveys should rely on the respondents’ memories for retrospective evaluations. This applies to asking a question like: How many times a week did you exercise last year or how many times a week did you eat pumpkin last year?

The timeliness criterion also applies to the period during which one might reasonably expect change in some variables in a country’s population, such as mortality or fertility rates. Can you think why?

The answer is that a mortality rate is unlikely to change measurably within one year; it may take five years or more for change in this variable to become visible.

Timeliness contributes to reliability. If you are trying to measure impacts that have not had time to occur, or have occurred over such a long period that many other factors have intruded, there will inevitably be more “noise” in your data. Although you can correct for this noise or filter it out with some of the relatively more sophisticated methods typically used in evaluation, it is better to bear this issue in mind in advance. Timeliness for general monitoring indicators can be taken into account by communicating with the full range of stakeholders in the planning stage regarding logistics, reporting, and their monitoring and evaluation needs. Since the questions involved are intrinsically very practical, it is one way to get partners who are less comfortable with analytical or abstract discussions more involved in the whole process.

Increasing their sense of ownership helps to ensure that data collection and other reporting-related tasks are more likely to be completed on a timely and appropriate basis.

TASK 6 – EVALUATE YOUR PROGRAMME PLAN AND INDICATORS FOR TIMELINESS

Think about your own programme: Did you consider timeliness in planning for evaluation?

3.6 Comparability

“Disaggregating people-level program results by gender, age, location, or some other dimension is often important from a management or reporting point of view. Experience shows that development activities often require different approaches for different groups and affect those groups in different ways. Disaggregated data help track whether or not specific groups participate in and benefit from activities intended to include them. Therefore, it makes good sense that performance indicators be sensitive to such differences.” (USAID, 1996: 4)

Indicators should also be comparable across different population groups and programme approaches. At times you may need disaggregated indicators for some activities by population group, by region, by type of facility, etc.

Here are some examples for clarification.

Example 1: Breastfeeding indicators - timing, exclusivity, duration, etc.

For comparability, you need to find out how the questions used to collect such indicator data were asked. To find out about timing, one could ask: Are you breastfeeding now? Do you intend breastfeeding your baby? Did you breastfeed your youngest child?

To find out about exclusivity, you could ask: In addition to breast milk, did you give your child anything else to drink? What else did you give your child to drink besides breast milk? To find out about duration, you could ask: How long do you intend to breastfeed? How long did you breastfeed your youngest child?

One of the best ways to help you collect information that is comparable to other programmes, is to do a literature search and talk to as many people as you can. You would also need to know exactly what questions have been asked in other contexts to gather data for such indicators. Even if programmes are collecting information about exclusive breastfeeding, the information may not be comparable if the questions asked were different.

A programme with different approaches toward Public Health Nutrition goals, e.g. a clinic, a community-based project, social marketing, should identify indicators (e.g. of service utilisation) appropriate to all modalities where possible, so that the results or effectiveness of the given activities can be compared across service delivery approaches.

Additional comparability beyond narrow or immediate results reporting is also desirable. To achieve this, think about further uses of your programme’s data and results. Try to ensure that comparability, in the broader scope of improving the effectiveness of health and development programmes, will not be impaired by particularly narrow or unique indicators, as their values would be difficult to compare with other programmes’ results.

For instance, if the general standard for gauging Contraceptive Prevalence Rates is a percentage of all women aged 15-49, you should not construct your Contraceptive Prevalence Rate indicator as a percentage of unmarried women aged 19-45. If you did, you would experience problems in comparing your data with those collected by other people, unless there is a very strong reason for you to break down the information in exactly that way.
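To make this concrete, here is a small illustrative sketch in Python. The survey records are entirely hypothetical and the code is simply one way of expressing the calculation; it shows how the standard denominator of all women aged 15-49 keeps a Contraceptive Prevalence Rate comparable, while a narrower denominator produces a figure that cannot be compared directly.

# Illustrative only: hypothetical survey records, not real programme data.
# Each record is (age, married, uses_contraception).
women = [
    (17, False, False), (22, True, True), (25, True, False),
    (31, False, True), (38, True, True), (44, True, False),
    (48, False, False), (52, True, True),  # 52 falls outside 15-49
]

def cpr_standard(records):
    # Standard denominator: all women aged 15-49.
    eligible = [r for r in records if 15 <= r[0] <= 49]
    users = [r for r in eligible if r[2]]
    return 100 * len(users) / len(eligible)

def cpr_nonstandard(records):
    # Narrower, non-comparable denominator: unmarried women aged 19-45.
    eligible = [r for r in records if 19 <= r[0] <= 45 and not r[1]]
    users = [r for r in eligible if r[2]]
    return 100 * len(users) / len(eligible)

print(f"Standard CPR (all women 15-49): {cpr_standard(women):.1f}%")
print(f"Non-standard CPR (unmarried women 19-45): {cpr_nonstandard(women):.1f}%")

The two functions count the same users but divide by different groups, so their values cannot be compared with each other or with figures from other programmes.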

While this does not particularly contribute towards validity or reliability, it is still a good idea to select generally comparable indicators within and across programme approaches, and certainly across relevant population groups where activities warrant such a breakdown.

TASK 7 – EVALUATE YOUR INDICATORS FOR COMPARABILITY

Using the indicators that you have developed for your programme, what questions were asked? Are they comparable with other programmes of a similar kind?

4 ADDITIONAL FACTORS WHICH INFLUENCE INDICATOR SELECTION

In an ideal world, indicators judged to be of the highest quality and most useful, would be the ones selected and used to monitor and evaluate the effects of programme activities. However, in the real world and in field settings, many other factors may intervene. Ideal indicators may not be practical; the feasibility of certain indicator designs can be constrained by data availability, resources, programmatic or host government needs and donor requirements and needs.

Here is an example of Availability of Data presenting a barrier to using a particular indicator: Some data may be considered privileged information by agencies, projects or government officials. Data may be available only in aggregated form, or already calculated into indicators that may not be the ideal indicators for your programme or activities.

Resources can also present a barrier to particular indicators: ideal indicators might, for example, require the collection of data to calculate an unknown denominator, or national data to compare with project area data, or tracking lifetime statistics for an affected and/or control population. In addition, the costs of collecting all of the appropriate data for ideal indicators are usually prohibitive. Human resources and technical skills, particularly for evaluation, may be a constraint as well.

Programmatic and donor requirements can also present barriers to the use of particular indicators: indicators may be imposed from above by people not trained in monitoring and evaluation techniques; reporting schedules may not be synchronised (e.g. the fiscal year versus the reporting year); and different stakeholders' priorities may diverge.

Take a look at the following reading, pages 32-33, to reinforce these points.

READING

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Program Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

TASK 8 – REVIEW FACTORS THAT MAY AFFECT THE SELECTION OF INDICATORS

In your experience, what factors, other than the desire to select the best and most appropriate indicators, could affect the selection of monitoring and evaluation indicators in your programme?

4.1 How Many Indicators Should One Collect?

One more issue to consider is how many indicators to collect.

A rule of thumb would be:

. At least one or two indicators per result.
. At least one indicator for every activity.
. No more than ten or fifteen indicators per area of significant programme focus (e.g. quality of care).
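If you keep your results and indicators in a simple electronic list, this rule of thumb can even be checked automatically. The Python sketch below is purely illustrative; the results and indicators named in it are hypothetical.

# Illustrative only: hypothetical results and indicators.
indicators_per_result = {
    "Improved immunisation coverage": [
        "% of children fully immunised by 12 months",
    ],
    "Improved growth monitoring": [
        "% of under-2s weighed in the last month",
        "% of clinics with a working, calibrated scale",
    ],
}

for result, indicators in indicators_per_result.items():
    n = len(indicators)
    if n == 0:
        print(f"{result}: no indicator - add at least one.")
    elif n > 2:
        print(f"{result}: {n} indicators - consider trimming.")
    else:
        print(f"{result}: {n} indicator(s) - within the rule of thumb.")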

5 SESSION SUMMARY

In this session, we have explored six characteristics of good indicators in a fair amount of detail, and considered some of the other factors that may affect the choice of indicators. You should also by now have reviewed your selection of indicators in these terms. In the next session, we explore some of the issues that affect the collection of data.

6 REFERENCES AND FURTHER READING

. USAID Center for Development Information and Evaluation. (1996). Selecting Performance Indicators. Performance Monitoring and Evaluation TIPS. Number 6. Washington, DC: USAID: 1-4.

Unit 4 – Session 5 Data Systems

Introduction

Data System is a term we use to talk about the whole range of data that must be collected for effective programme management and implementation. It includes the indicators for the performance monitoring and evaluation plan, and all the data and other information that needs to be systematically gathered and understood to support programme management and implementation.

In this session, we survey the range of data that comprises the system.

Contents

1 Learning outcomes of this session
2 Readings
3 Characteristics of a strong data system
4 Levels of data
5 Sources of data and tools for data collection
6 Collecting data
7 Session summary

Timing

This session has three readings and four tasks. It should take you about three hours to complete.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session you should be able to:

. Identify characteristics of a strong Data System.
. Identify levels of data.
. Identify sources of data and tools for data collection.

2 READINGS

There are three readings in this session, two of which you have previously encountered.

Author/s Publication details

Patton, M. Q. (1997). Utilization-Focused Evaluation. (3rd edition). Thousand Oaks, Ca: Sage Publications: 159-169.

Tones, K. & Tilford, S. (2001). Ch 1 - Successful Health Promotion: The Challenge. In Health Promotion: Effectiveness, Efficiency and Equity. (3rd edition). Cheltenham, UK: Nelson Thornes: 2-18.

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Program Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

3 CHARACTERISTICS OF A STRONG DATA SYSTEM

The basis of any monitoring and evaluation activities that you plan to undertake is the data that you collect. The whole range of such data is called the Data System; it includes the indicators for your monitoring and evaluation plan, together with all the other data and information needed to support programme management and implementation.

A set of characteristics of an effective or strong data system follows below. Any manager would do well to check the Data System against this list in the course of planning for a monitoring or evaluation process.

A strong Data System should:
. Include an appropriate range and number of clearly-operationalised indicators.
. Draw on a variety of appropriate data sources and kinds of data.
. Include baseline and target values for each indicator, if appropriate for the particular operational context.
. Spell out a data collection plan and schedule, including estimates of the financial and technical resources that will be required to achieve each element of that plan. This should be done in such a way that all stakeholders are aware of and commit to their share of responsibility for ensuring that the Data System functions as designed. It is important that the Data System for a specific monitoring and evaluation plan should be designed in conjunction with all partners and stakeholders who will contribute resources or data to these activities. It is also advisable to include programme activity partners, even if they will not actively participate in monitoring and evaluation efforts, because programme decisions that will affect them will at least be partly based on monitoring and evaluation results. Their involvement in your monitoring and evaluation planning is an opportunity for them to initiate their own monitoring and evaluation efforts, which will be of benefit to all.

4 LEVELS OF DATA

To identify the data needed for measuring the selected indicators and appropriate sources of this data, different levels of data must be considered. There are four levels of data in general use, plus the additional perspective of spatial or geographic data required in particular contexts. These levels are:

. Policy or programme level data. This information tends to be at a high level of aggregation, often on a national scale, although it may also be prepared for regional, district, state or other areas. A high level of aggregation means that this information may only give you a general picture of the whole population or community, but not of a sub-population. Examples of policy or programme level data include: policy on Integrated Nutrition Programmes; policy on Family Planning programmes, etc. An example of national indicators: National immunisation coverage is 85% while Eastern Cape immunisation coverage is 45%. However, unless you have data for your local area, you may be misled by data collected at a higher level of aggregation.

. The service environment level of information is data which pertains to operations at the service delivery point.

. Client level information is data that pertains only to persons interacting with the service environment. It differs from population level data because it does not include the entire population, or even potential clients in a given catchment area, but includes only actual-client data.

. Population level data is information about all relevant persons in the area.

. Spatial or geographic information, the least used level of data in monitoring and evaluation, is used more in evaluation than in monitoring, and provides an added perspective on data that assists with analysis and/or contextual understanding. The sources of such data include information collected through surveys.

These two readings expand on the issues of data collection. Read them now to broaden your understanding of the different levels of data that may be available, and the issues involved in their usage.

READINGS

Patton, M. Q. (1997). Utilization-Focused Evaluation. (3rd edition). Thousand Oaks, Ca: Sage Publications: 159-169.

Tones, K. & Tilford, S. (2001). Ch 1 - Successful Health Promotion: The Challenge. In Health Promotion: Effectiveness, Efficiency and Equity. (3rd edition). Cheltenham, UK: Nelson Thornes: 2-18.

It is very important to design a diversified Data System. Indicators should report on the effectiveness and efficiency of programme activities at a number of levels, drawing on a number of types of data or other information from a variety of sources. For example, a Data System would be extremely weak if it were based entirely on indicators at population level; it might rely on a national survey every five years to provide the data to calculate the indicators, and would be vulnerable to many disruptions. A better system would be to have some population level indicators, and some disaggregated (or broken down) level indicators. For example, a system may have national indicators, as well as provincial and district indicators.
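The point about aggregation and disaggregation can be illustrated with a short Python sketch. The provincial figures below are hypothetical; they simply show how a national (aggregated) value can mask large differences between provinces or districts.

# Hypothetical provincial immunisation figures: (children immunised, children eligible).
provinces = {
    "Province A": (850_000, 1_000_000),
    "Province B": (400_000, 500_000),
    "Province C": (90_000, 200_000),
}

for name, (immunised, eligible) in provinces.items():
    print(f"{name}: {100 * immunised / eligible:.0f}% coverage")

total_immunised = sum(i for i, _ in provinces.values())
total_eligible = sum(e for _, e in provinces.values())
# The national figure hides the much lower coverage in Province C.
print(f"National (aggregated): {100 * total_immunised / total_eligible:.0f}% coverage")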

5 SOURCES OF DATA AND TOOLS FOR DATA COLLECTION

In developing a sound monitoring and evaluation plan, it is useful to explicitly identify data sources so that everybody involved knows where to obtain data. It is also important to identify the tool or method which will be used to collect the data. For example, in order to collect information on eating patterns of a particular population, you may want to collect data from the community, and the tool for collecting this data could be a survey.

We will now explore the sources of the different levels of data which were described above.

5.1 Policy and Programme Level Data

TASK 1 – IDENTIFY SOURCES AND USES OF POLICY AND PROGRAMME LEVEL DATA
a) Think about your own experience of data: Have you ever collected policy level data in the past? What were the sources of this data? How were the data used?
b) Can you think briefly of sources of data and tools for collecting such information at the policy or programme level?

FEEDBACK

b) Examples of sources of data for programme or policy level data are:
. Official documents (e.g. legislative and administrative documents)
. National budgets or other accounts
. Policy enquiries
. Reputational rankings (e.g. programme effort scores)

Tools for collecting programme or policy level data are:
. Indexing questionnaires (for country specialists and rankings)
. Special/contract studies

Take a look at the reading, pages 34-39, to consolidate your understanding.

READING

Bertrand, J. T., Magnani, R. J. & Rutenberg, N. (1996). Ch III - Methodological Approach: Program Monitoring. In Evaluating Family Planning Programs - with Adaptations for Reproductive Health. USA: The EVALUATION Project, University of North Carolina: 29-39.

5.2 Service Environment Level Data

An example of service environment level data is records of consultation processes (practices) at a clinic for STIs. The process can be broken down into the following activities:
. Treatment prescribed
. Was sex discussed during treatment?
. Was use of condoms discussed?
. Was treatment of the sexual partner discussed?

This data can be collected from the clinic register or from client records.

TASK 2 - IDENTIFY SOURCES AND USES OF SERVICE ENVIRONMENT LEVEL DATA
a) Have you collected service environment level data in the past?
b) What were the sources of the data?
c) What data collection tools were used?
d) How were the data used?

FEEDBACK

a) At the Service Environment Level, sources of data include:
. Administrative records (e.g. service statistics, financial data)
. Service delivery point information (e.g. audit information, inventories, facility survey data)
. Staff or provider information (performance or competency assessments, training records, staff/provider data, quality of care data)
. Client admission registers

Tools for Service Environment Level data include:
. Health Service Information Systems
. Facility sample surveys
. Performance monitoring reports
. Facility (Service Delivery Point) records

An important way to monitor data from the service environment over time is through a Health Management Information System (HMIS). An HMIS is a system for ongoing (routine) collection and reporting of data about service delivery. In many countries, this system operates at the national level. Ideally, these routine data are collected from a comprehensive set of service delivery points, and should cover topics such as:
. Costs
. Births
. Mortality
. Morbidity
. Number of clients seen and referred (inpatient, outpatient)
. Numbers of clients by type of service
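To illustrate how routine HMIS-style records can be turned into monthly service statistics, here is a small Python sketch. The facility names, service types and client counts are invented for the example and do not come from any real system.

from collections import defaultdict

# Hypothetical routine records: (facility, month, service type, clients seen).
records = [
    ("Clinic A", "2024-01", "Antenatal care", 120),
    ("Clinic A", "2024-01", "Immunisation", 300),
    ("Clinic B", "2024-01", "Antenatal care", 80),
    ("Clinic B", "2024-02", "Antenatal care", 95),
]

# Aggregate clients seen by month and service type, across facilities.
totals = defaultdict(int)
for facility, month, service, clients in records:
    totals[(month, service)] += clients

for (month, service), clients in sorted(totals.items()):
    print(f"{month} {service}: {clients} clients seen")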

TASK 3 – WHAT DATA DOES A DHIS OFFER YOU?
a) What routine data do you have access to from the District Health Information System?
b) What difficulties have you encountered in accessing this data in the past?

5.3 Individual Level Data

Sources of data and tools at this level include:
. Case surveillance (e.g. epidemiology of disease)
. Medical records
. Interview data
. Provider-client interactions (clinical/technical or interpersonal skills)

Tools for collecting individual level data include:
. Case reports
. Client register analysis
. Direct observation

TASK 4 – IDENTIFY SOURCES AND USES OF INDIVIDUAL LEVEL DATA
a) What type of data have you collected at the Individual Level in the past?
b) What tools did you use?
c) How could this data be used?
d) What were the benefits of using this type of data?

FEEDBACK

a) An example of Individual Level data is data on the number of CHWs who calibrated the scale before weighing children.
b) This data can be collected through observations of the actual weighing procedure.
c) If the number of CHWs who perform this activity is found to be less than acceptable (e.g. less than our 85% cut-off point used for quality of care), then this data can be used to plan retraining of CHWs on this aspect of service delivery.
d) The benefits of using this data are that it is readily available at no cost, or at a reasonable cost (the evaluator's time).

5.4 Population Level Data

For population level data, here are some of the sources and tools:

Sources
. Government census office
. Vital registration systems (e.g. birth and death certificates)
. Sentinel surveillance systems (routine collection of data at sentinel sites, which may include schools, clinics and shops, that is, places which are easily accessible to the community)
. Sample households or individuals
. Special population samples (demographic or occupational group, or geographic sector)

Tools
. Birth certificates
. Household/individual/special surveys
. Census forms

TASK 5 – USING POPULATION DATA

a) What indicators do you think can be constructed using these sources and tools?
b) Have you collected population data in the past? What were the sources of data? How did you use that data?
c) Give examples of when it would be more useful to use population indicators, and when it would be best to use individual client indicators.

FEEDBACK

a) Here are some examples of indicators derived from population level data:
. Infant mortality rate
. Prevalence of hypertension
. % of malnourished children
. % of households having access to sanitation

c) Population data could be used to show the percentage of the population who are hypertensive, while individual client data will show the number of individuals falling under certain blood pressure levels (e.g. 190/95) or the number of children who are underweight at a certain health facility.

5.5 Spatial or Geographic Level Data

The sources and tools for geographic level data are:

Sources
. Satellite imagery and aerial photographs
. Digital line graphs and elevation models
. Cadastral maps (maps which show land ownership)

Tools
. Global Positioning System
. Computer software programmes

TASK 6 – DEVELOP INDICATORS FROM GEOGRAPHIC LEVEL DATA
a) What examples of indicators could be constructed using these sources and tools?
b) Have you used spatial or geographic data in the past? What were the sources of the data? How were the data used?
c) If cost were not a concern, how might you imagine using this kind of data to understand how programmes or certain activities are working in a given context?

FEEDBACK

a) Examples of indicators using geographic data: geographic data can be used to show high-transmission areas for HIV and STIs. It can also be used to illustrate how far mothers and carers who are not attending growth-monitoring clinics live from the clinics.

c) Data can be used to plan interventions: for example, in areas of high transmission of HIV and STIs, education campaigns may be intensified and other appropriate interventions may be planned.

6 COLLECTING DATA

As we noted earlier, a comprehensive data system will have clearly defined indicators, a variety of appropriate data sources and types, baseline and target values for each indicator; and a data collection plan. The data collection plan should attempt to answer this range of questions:

. What information is needed to monitor/evaluate the programme?
. Who will collect this information?
. Where will this information be found?
. Who will use this information, i.e. for what purpose will the information be used?
. When will the information be collected?
. What will be the cost of the resources needed in terms of personnel, supplies and equipment?
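One simple way to keep these questions visible during planning is to record the answers for each indicator in a structured form. The Python sketch below is one hypothetical way of doing this; the indicator, collector and other entries are invented examples, not a prescribed format.

from dataclasses import dataclass

@dataclass
class CollectionPlanItem:
    indicator: str       # what information is needed
    collector: str       # who will collect it
    source: str          # where it will be found
    use: str             # who will use it, and for what purpose
    schedule: str        # when it will be collected
    resources: str       # personnel, supplies and equipment needed

plan = [
    CollectionPlanItem(
        indicator="% of children under 2 weighed in the last month",
        collector="Clinic nurse",
        source="Clinic growth-monitoring register",
        use="District manager, to monitor service utilisation",
        schedule="Monthly",
        resources="Nurse time; register forms",
    ),
]

for item in plan:
    print(item)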

Finally, programme managers might ask how many indicators their programmes should collect. A rule of thumb is one or two indicators per result, but this depends on whether this information will be enough for you to draw a conclusion about the programme. Remember that the purpose of monitoring and evaluation is to monitor performance and evaluate impacts! So do not overdo it. If monitoring and evaluation is not going to feed back into programme management or be used to improve performance, effectiveness, or efficiency, it is not a very sound use of programme resources.

7 SESSION SUMMARY

We have now covered many kinds of data and other information that must be considered in order to construct an appropriate data system for programme evaluation, management and implementation.

You have also become aware that indicators can be collected at different levels of programme implementation, that is, at policy level where data may be aggregated, or at service delivery level where data may be disaggregated according to districts or sub-districts. At each level of data collection, you saw the different tools that may be used. It is also necessary to understand the strengths and limitations of available data and information.

In the next session of this unit, we review the selection of data types, and how to ensure their quality.

Unit 4 – Session 6 Selecting Data Types and Ensuring Quality

Introduction

This is the final session of Unit 4; it takes you forward into planning the data collection process, which is still part of preparing your implementation design. So far, you have identified monitoring and evaluation indicators for your programme, and been introduced to sources of data and levels of data that can be collected. We are now going to consider what data to collect. We will also discuss factors that need to be considered to ensure the quality of data collected.

Contents

1 Learning outcomes of this session
2 Readings
3 Qualitative and quantitative data
4 Ensuring data quality
5 Session summary

Timing

There are two tasks in this session and no readings. It should take you about an hour and a half to complete.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session you should be able to:

. Explore the uses of qualitative and quantitative data in monitoring and evaluation.
. Select appropriate data types for your evaluation.
. Assess data quality using a checklist.

2 READINGS

There are no readings in this session.

3 QUALITATIVE AND QUANTITATIVE DATA

Both qualitative and quantitative data are useful for performance monitoring and evaluation. Can you explain the difference between them? Qualitative data are in the form of words, such as descriptions of events, transcripts of interviews, life stories and written documents. Quantitative data, on the other hand, are in the form of numbers and provide answers to questions such as how much, how many, and to what extent.

The use of quantitative data alone has been criticised as not providing the in-depth, contextual information necessary for understanding causal processes. The use of qualitative data alone, on the other hand, has been criticised because of the cost and effort involved in collecting it, the difficulty of organising and accurately interpreting it, and the lack of a uniform set of data across all cases.

If you have been involved in research or data collection processes before, you may have noted that quantitative and qualitative data are often used together to analyse trends. Quantitative data are necessary in tracking trends accurately, for example, to assess the decrease in the percentage of children who are underweight following interventions. Qualitative data are useful in understanding the context in which the trends occurred and in interpreting the quantitative data correctly.

Here is an example of how quantitative and qualitative data collecting processes could be used together. In primary prevention of CVD, your intervention may include increasing the awareness of the community about the importance of exercise. In evaluating such interventions, you would collect quantitative data on the percentage of the population engaging in physical activity. However, on comparing this data with data collected at baseline, you may find that the percentage of the population who engage in physical activity is less or equal to the percentage obtained at baseline. You may then follow up by collecting qualitative data in order to explore barriers to physical activity in this population.
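To illustrate the quantitative side of this example, the short Python sketch below compares a baseline and a follow-up estimate of the proportion of the population engaging in physical activity. The figures are hypothetical.

# Hypothetical survey results: respondents reporting regular physical activity.
baseline_active, baseline_n = 140, 500    # baseline survey
followup_active, followup_n = 135, 500    # follow-up survey after the intervention

baseline_pct = 100 * baseline_active / baseline_n
followup_pct = 100 * followup_active / followup_n

print(f"Baseline: {baseline_pct:.1f}% active")
print(f"Follow-up: {followup_pct:.1f}% active")
if followup_pct <= baseline_pct:
    # No improvement: a prompt to collect qualitative data on barriers to exercise.
    print("No increase detected - explore barriers with qualitative methods.")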

TASK 1 – SELECT APPROPRIATE DATA TYPES FOR YOUR EVALUATION
a) Think about the programme you will be evaluating: Identify data needs and sources of data for your programme.

Bear in mind the levels of data needed and issues that have been identified in relation to data selection. It would be useful to do this task with a colleague or fellow student.

Here are some guiding questions to assist you:
. Do your indicators suggest the need for qualitative or quantitative data, or a combination of both?
. What levels of data will you need? Select from policy or programme level data, service environment level data, client information, population level data or spatial or geographic information.
. Bearing in mind the time pressure and cost constraints you face in this programme evaluation, identify possible sources of data and how you will get access to it.
. What problems do you think you might encounter in the data collection process?

FEEDBACK

It would be valuable to check your proposed data sources and collection methods with your lecturer or an experienced colleague. Consider the time factor and try not to be over-ambitious.

4 ENSURING DATA QUALITY

The results of a programme evaluation are more likely to be used if the quality of the data collected is good. Conversely, poor quality data could distort the findings of the evaluation. To ensure data quality, you need to consider the following issues:

ISSUE | QUESTION TO ASK ABOUT YOUR DATA
Coverage | Will my data cover all the elements of interest?
Completeness | Is a complete set of data needed for each element of interest?
Accuracy | Have the instruments been tested to ensure validity and reliability?
Frequency | Are the data collected as frequently as will be needed?
Reporting schedule | Does the available data reflect the time periods which are of interest?
Accessibility | Can the data needed from each source be collected or retrieved?
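These questions can also be applied systematically to each planned data source, for example as a simple yes/no checklist. The Python sketch below is one hypothetical way of doing so; the answers shown are invented.

# One hypothetical data source assessed against the six data quality questions.
quality_check = {
    "Coverage: do the data cover all elements of interest?": True,
    "Completeness: is a complete set of data available for each element?": False,
    "Accuracy: have the instruments been tested for validity and reliability?": True,
    "Frequency: are the data collected as frequently as needed?": True,
    "Reporting schedule: do the data reflect the time periods of interest?": True,
    "Accessibility: can the data be collected or retrieved from each source?": True,
}

failed = [question for question, ok in quality_check.items() if not ok]
if failed:
    print("Adjust the data collection plan. Issues found:")
    for question in failed:
        print(" -", question)
else:
    print("All data quality criteria met.")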

TASK 2 – ASSESS WHETHER YOUR DATA MEETS THESE CRITERIA

Apply these questions to your data collection plan, and adjust your plan if necessary.

5 SESSION SUMMARY

The two main themes of this session were the processes for selecting appropriate types of data for a given evaluation, and ensuring the quality of the data. We reviewed differing, but sometimes complementary, uses of qualitative and quantitative data, pointing out conditions under which they can be used in combination. We have also presented a set of criteria for assessing the quality of your data.

Unit 4 – Session 7 Data Collection

Introduction

Data for evaluation comes from a wide range of sources, and can be collected through an equally wide range of techniques. Weiss (1998) notes that:

“[t]he only limits are the ingenuity and imagination of the researcher.

The most common sources of data in evaluation studies are: (a) informal discussions and interviews with program managers, staff, and clients, (b) observations, (c) formal interviews with clients and staff, (d) written questionnaires to clients and staff, (e) existing records, particularly programme records, and (f) available data from other institutions." (Weiss, 1998: 152)

This session focuses on different means of data collection in the field of evaluation, both quantitative and qualitative. Data collection should not be new to you: some of you may simply need to skim the readings provided.

Your second assignment requires you to collect data either from your own programme, or from a clinic, on factors which influence the utilisation of growth monitoring and promotion services, or from the community on barriers to primary prevention of cardio-vascular diseases. For the last two choices, you have been supplied with a data collection instrument in the form of a questionnaire which you will find in section 4 of this session.

Contents

1 Learning outcomes of this session
2 Readings
3 Different methods of data collection
4 Evaluate data collection instruments
5 Designing data collection instruments
6 Coding your data
7 Session summary
8 References

Timing

This session consists of five readings and five tasks. It should take you about four hours to complete if you have not studied these processes before.

1 LEARNING OUTCOMES OF THIS SESSION

By the end of this session you should be able to:

. Describe methods that can be used to collect monitoring and evaluation data.
. Design and use data collection instruments for interviews and focus group discussions.
. Discuss steps involved in conducting interviews and focus group discussions.

2 READINGS

There are five readings for this session.

Author/s Publication details

CACE (2000). What is an Interview? In Research Methods for Adult Educators. Cape Town: CACE, University of the Western Cape: 44-53.

SCHRU (1991). Ch 4 - Group Techniques: Focus Groups. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 231-238.

SCHRU (1997). Questionnaire Design Principles. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 19-38.

SCHRU (1991). Ch 5 - Introduction to Questionnaire. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 123-133.

SCHRU (1991). Ch 8 - Putting the Questionnaire Together. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 143-150.

3 DIFFERENT METHODS OF DATA COLLECTION

You will recall our brief discussion on two types of data that could be collected in monitoring and evaluating programmes. These include qualitative data such as descriptions of events, transcripts of interviews, life stories and written documents; on the other hand, quantitative data is gathered in the form of numbers and provides answers to questions such as How much? To what extent? and How many? In this session, we will discuss methods of data collection that can be used to collect either qualitative or quantitative data.

Both quantitative and qualitative data are usually needed in monitoring and evaluating programmes; each supports and is complementary to the other. It is also possible to do a quantitative analysis of qualitative data, e.g. 70% of the key informants agreed that services were of poor quality.
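As a small illustration of quantifying coded qualitative data, the Python sketch below tallies hypothetical key-informant responses that have already been coded into categories.

from collections import Counter

# Hypothetical coded key-informant views on service quality.
coded_responses = ["poor", "poor", "adequate", "poor", "good",
                   "poor", "adequate", "poor", "poor", "poor"]

counts = Counter(coded_responses)
total = len(coded_responses)
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.0f}% of key informants")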

Remember that it is important to use a combination of different data collection techniques when undertaking monitoring or evaluation activities. This helps to maximise the quality of data collected and reduce the chance of bias.

TASK 1 – WHAT DO YOU KNOW ABOUT DATA COLLECTION METHODS?

a) Take a piece of paper and write down all the quantitative and qualitative methods of which you are aware.
b) Test your own knowledge by writing brief descriptions of the following methods of data collection:
- Reviewing records
- Focus group discussions
- Surveys
- Interviews
- Direct measurements
- Observations

FEEDBACK

a) Here are two lists, although there may be more methods:

QUANTITATIVE METHODS
. Administering oral or written interviews.
. Reviewing records, e.g. project documents and reports.
. Population-based surveys.
. Reviewing medical and financial records.
. Completing forms and tally sheets.
. Direct measurements (chemical analysis).
. Observation.
. Lot quality assessment.

QUALITATIVE METHODS
. Administering oral or written interviews.
. Focus group discussions.
. Ethnographic survey.
. Social mapping.
. Timelines.
. Interviewing.
. Case studies.
. Content analysis.
. Observation.

Here is further information about some of the methods:

3.1 Using Existing Records

Much of the data that you may need for an evaluation is routinely collected as part of programme operations. Examples of records include anthropometric data, information on the incidence and severity of micronutrient deficiencies and malnutrition, and project participation rates. Such routinely collected information is the core of an ongoing monitoring system. It should be noted that information from records may be reviewed periodically. For example, clinic records contain test results of those participants who are suspected of having anaemia. A periodic review of these records may offer information on the incidence of iron deficiency. Other records may also be reviewed regularly, for example, children’s malnutrition status.

Another type of record is secondary data, i.e. statistics and information originally collected for purposes other than a particular programme, or for the routine Health Information System. Examples of secondary data in the nutrition context include national or regional surveys of nutritional status, dietary intake, micronutrient deficiencies, and household income and expenditure. Such information is often available from government offices, donor agencies, non-governmental organisations or research institutions.

While using records can be a fast, inexpensive and convenient way to obtain information, it requires careful inspection of the original collection process, keeping in mind that the validity and reliability of the present findings will rest upon the quality of another’s collection methods. In examining a programme’s conceptual framework and the indicators selected, an initial step is to determine which of these indicators will be available, or can be calculated from records. It will then be necessary to make choices about optimal data collection methods for the remaining indicators. These may be amongst the other data collection methods that follow.

3.2 Surveys

Surveys can cover an entire project population or a representative sample. They can include open or closed ended questions. Open ended questions allow respondents to answer in their own words while closed ended questions require the respondent to select from a set of possible answers or to answer yes or no. Survey data that is derived from open ended questions can give both quantitative and qualitative data. Surveys are a very popular method of collecting data; the reliability of such data depends on the sampling frame, quality of questions and care taken in administering questionnaires.

3.3 Direct Measurements

Direct measurements enable the evaluator to ascertain changes in, for example, nutrition status resulting from a programme. They are often included in surveys and, when used programmatically, are likely to be included in project records. A number of specific methods are often employed in direct measurements of nutrition levels, including:
. Anthropometry: ht/age, wt/age, wt/ht, BMI;
. Biochemical indices: blood analysis, urine and breast milk;
. Clinical signs of micronutrient deficiencies, e.g. goitre and night blindness.

The accuracy of direct measurements depends on good measurement practice; for example, periodic calibration of scales is necessary for anthropometric measurements.
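As one worked example of a direct measurement, Body Mass Index (BMI) is weight in kilograms divided by the square of height in metres. The short Python sketch below uses hypothetical measurements.

def bmi(weight_kg, height_m):
    # Body Mass Index = weight (kg) divided by height (m) squared.
    return weight_kg / (height_m ** 2)

# Hypothetical adult measurement taken during a survey.
print(f"BMI: {bmi(68.0, 1.62):.1f}")  # roughly 25.9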

3.4 Observation

Observation is one method used to assess patterns of time usage or behaviours related to desired outcomes. Observation can also offer valuable insights into the social and physical context of the problem being addressed, or into the use of project inputs. To gather data through observation, the observer needs to develop a good relationship with those being observed, if possible by spending time living with them, so that they are no longer conscious of the observer's physical presence. When this happens, the process is called participant observation. Although it is desirable, it is rarely achieved.

3.5 Key Informant Interviews and Focus Group Discussions

Key informant interviews and focus groups with stakeholders are among the fastest and least expensive of data collection techniques. Focus group participants and key informants should be selected to represent the entire range of stakeholders, in order to yield clear and candid insights. These techniques are particularly effective for projects that attempt to change behaviours, such as health communication campaigns.

A key informant interview involves a face-to-face meeting between a trained interviewer and a person who can provide an overview or big picture of knowledge, attitudes or practices of the target group being monitored or evaluated, e.g. project staff, mothers, school children, or mothers-in-law. However, key informants are not necessarily part of the target population, e.g. a medical specialist presenting views on a community’s disease patterns.

Key informants provide a broad view of the situation and this overview is critical in data collection. Similarly, a focus group discussion involves group dynamics that allow participants to respond to one another’s perceptions, generating new ideas and highlighting conflicting attitudes that might otherwise be inaccessible to an outsider.

Focus group discussions (FGDs) are a qualitative data collection method designed to use group dynamics and the flow of discussion to probe deeply into the images, beliefs and concepts that people have about a particular subject. It is not a group interview but a group discussion focused on a topic. These guided discussions are held with small groups of people who have similar characteristics, for example low-income fathers, mothers of children under two years or traditional healers. The discussions are led by a trained moderator who uses a question guide to introduce the topics of interest and probe for deeper discussion. Focus group discussions can be used to:
. focus the evaluation and develop relevant evaluation questions by exploring in greater depth the problem to be investigated and its possible causes;
. formulate appropriate questions for more structured larger scale surveys;
. supplement information on community knowledge, beliefs, attitudes and behaviour already available but incomplete or unclear;
. develop appropriate messages for health education programmes;
. explore controversial topics and issues.

Focus group discussions (FGDs) can provide valuable information when they are properly planned and executed. Creating an ideal environment for a focus group discussion requires an appropriate setting which is free of distractions and allows interaction between the interviewer and stakeholders. The following points should be considered when planning a focus group discussion:
. Present the discussion topic to the group in writing as a series of open-ended questions.
. Ensure that the seating arrangement facilitates communication, e.g. a circle.
. Encourage respondents to interact and express their views freely.
. Include a maximum of 8-10 persons per focus group.
. Conduct the focus group discussion for no more than an hour and a half.
. One person should serve as the facilitator and the other as a recorder.
. The facilitator should stimulate and support the discussion. Facilitators must discuss tape recording and note-taking with participants in advance, making sure that there is agreement. The facilitator should: introduce the session, encourage discussion, encourage involvement, build rapport and empathise, avoid being placed in the role of expert, control the rhythm of the meeting in an unobtrusive way, take time at the end of the meeting to summarise, check for agreement and thank the participants.
. The recorder should keep a record of the content of the discussion and note the reaction and interaction within the group.
. The deliberations of the discussion should either be tape recorded or noted down by the recorder.
. Discussions should be continued until saturation is reached (no new information is raised). This means you may need one, two or six focus group discussions, as long as participants have new information to share.

. The recorder should note the following: the date, time, venue, names and characteristics of respondents, and a description of the group's dynamics, vocabulary and language use.
. At the end of each discussion, the facilitator should play back the recorded information to the participants to confirm that it reflects what they meant (confirmability).
. Following the discussion, the facilitator and recorder should sit together to review and complete the notes taken during the discussion.

Although key informant interviews and focus groups can provide important contextual information, certain difficulties should be anticipated:
. Open-ended questions are difficult to code and analyse, but can provide valuable information which cannot be provided by any other method.
. It is not always possible to compare this kind of information statistically within and between projects because there are no standard response categories. However, qualitative comparisons such as perceived effectiveness or likely sustainability can still be made.
. Interviewers and focus group facilitators need to be experienced staff, who are capable of probing with follow-up questions and eliciting and recording adequately detailed information.

You will find an example of a set of guiding questions for a focus group in section 4.4 later in this session.

3.6 Interviewing

Interviews are probably the most commonly used data collection method for evaluation. Interviewing involves oral questioning of respondents, either individually or as a group. It is suitable for use with illiterate people, and it also permits clarification of questions through probing. Good interviewers should have the following skills and characteristics; they should be:
. friendly and warm;
. hardworking and reliable;
. able to speak the local language;
. able to ask questions in a neutral way.

The following two readings discuss some of the practices that could make interviews more successful.

READINGS

CACE, UWC. (2000). What is an Interview? In Research Methods for Adult Educators. Cape Town: UWC: 44-53.

SCHRU (1991). Ch 4 - Group Techniques: Focus Groups. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 231-238.

To enable interviewers to develop these skills and characteristics, they need to be trained in all of the following:
. How to put the interviewee at ease;
. How to raise sensitive issues;
. How to probe;
. How to accurately record responses, particularly for open-ended questions;
. How to edit and check for errors and omissions.

Evaluators should supervise interviewers in order to ensure quality data from interviews.

4 EVALUATE DATA COLLECTION INSTRUMENTS

What do we mean by the term data collection instrument? “An instrument is a means used to measure or study a person, event, or other object[s] of interest.” (Weiss, 1998: 332) The types of instruments we will be discussing are questionnaires, interview schedules, focus group discussion guidelines and observation checklists, although tests are also data collection instruments.

To develop good questions for any data collection instrument, you need to consider:

. The objectives of the study;
. The information that you need from the indicators you have identified;
. The information you need to answer the evaluation questions;
. The wording or phrasing of the questions.

There are three types of questions that can be asked in an instrument: open ended or unstructured questions, closed ended or structured questions, and semi-structured questions. As we explained above, open ended questions allow respondents to answer in their own words, while closed ended questions require the respondent to select from a given set of possible answers or to answer yes or no. Semi-structured questions fall between the two: they offer a set of response options but leave room for answers that the options do not cover.

TASK 2 – DEVELOP DIFFERENT TYPES OF QUESTIONS

Develop three questions: closed, open and semi-structured, for your own instrument.

FEEDBACK

Compare your questions with the ones given in the table.

Here are three examples.

OPEN ENDED QUESTIONS
1. What is good about being overweight?
2. What do you like about red meat?

CLOSED ENDED QUESTIONS
1. Do you have any children? Yes --- No ---
2. Do you eat red meat? Yes --- No ---

SEMI-STRUCTURED QUESTIONS
1. Is the respondent:
a. The mother?
b. The caregiver?
c. Other?
2. How many times a week do you eat red meat?
a. I do not eat red meat
b. 1-2 times a week
c. 3-4 times a week
d. > 5 times a week

A data collection instrument needs to be designed to fulfil its particular task. Certain features can lead to an effective, or a poor instrument. Take a look at the list below:

AN EFFECTIVE INSTRUMENT:

. Is not too long;
. Is appropriate for the target group;
. Has focused questions;
. Avoids leading questions*;
. Ensures that all questions are relevant to the study;
. Measures only one idea per question;
. Is user-friendly and gender friendly;
. Is well formatted;
. Uses simple language.

A POOR INSTRUMENT:

. Is too long;
. Is inappropriate for the target group;
. Has vague questions;
. Contains leading questions;
. Has questions which are irrelevant to the study;
. Measures more than one idea per question;
. Is not user or gender friendly;
. Is not well formatted;
. Uses difficult terms.

* Do you understand the concept of a leading question? It is one which pre-empts the respondent’s answer, e.g. Do you think exercise is good for adults? Rather ask: What do you feel about exercise for adults?

Here is one of the data collection tools for Assignment 2. You will evaluate it using the criteria for an effective data collection tool listed above.

4.1 DATA COLLECTION TOOL FOR UTILISATION OF GROWTH MONITORING SERVICES

Here is a data collection tool you may wish to use for your assignment (Option 1). You are expected to collect data with it from at least 20 suitable respondents, present your data using diagrams, report your findings, interpret and evaluate them and make recommendations. Please be aware that you may adapt or improve on this data collection tool, but please present your revised tool in the assignment.

Introduction

I am ………………, a student studying for the Postgraduate Diploma/Masters in Public Health at the University of the Western Cape. I am gathering information from mothers of children between the ages of 0-2 years. I am trying to find out what issues are associated with bringing children to the clinic for growth monitoring and promotion. I would like to ask you some questions which will take about 20 minutes of your time. You may choose not to participate in this study. Whatever information you give me will not affect the care you receive from the clinic, and will not be given to anyone else except for improving the programme. The information collected will help us improve the utilisation of growth monitoring services in this district.

Do I have your permission to continue with questions? Yes No

Signature: ______

Date:______

1. Date of Interview: ______ Respondent Number: ______

2. Is the respondent a. The mother? b. The caregiver? c. Other? Specify: ______

3. Is the caregiver/mother employed? a. Yes b. No

4. How much schooling has the mother had?

a) No schooling b) < Grade 3 c) Grade 3 – Grade 6 d) Grade 7 – Grade 9 e) > Grade 10

5. Age of the child a) 0-6 months b) 7-12 months c) 13-18 months d) < 18 months

6. Did anyone explain to you the importance of bringing the child for growth monitoring and promotion? a) Yes b) No

7. Do you know what the horizontal axis of The Road to Health card means? a) Yes b) No

8. Do you know what the vertical axis of The Road To Health card means? a) Yes b) No

9. Did the child have all required immunisations according to age? a) Yes b) No

10. How long does it take you to get to the clinic? a) 30 minutes or less b) More than 30 minutes but less than one hour c) At least one hour but less than two hours d) Two hours or more

11. How do you come to the clinic? a) By walking b) By bus/taxi c) Other (specify): ______

12. If you travel by bus, how much do you pay? ______

TASK 3 – EVALUATE A QUESTIONNAIRE

Evaluate the questionnaire above in terms of the features mentioned before, and improve it where you can.

FEEDBACK

Here are some suggested improvements:

The respondents may not understand the terms vertical and horizontal axis. So you could say:

7. Do you know what this information (point to the horizontal axis of The Road to Health card) means?

The same applies to question 8.

In question 9 (Did the child have all required immunisations according to age?), the mother may not know which immunisations are required for the child's age.

The question could be rephrased as follows:

9. Have you been bringing the child to the clinic for the required immunisations?
a) Yes
b) No

10. If no, why not?

Question 12 could be rephrased to:

12. If you travel by bus or taxi, how much do you pay?

Here is the second tool, which you may choose to improve and use in Assignment 2.

4.2 DATA COLLECTION TOOL ON KNOWLEDGE AND ATTITUDES OF CHWS ABOUT HYPERTENSION AND DIABETES

Here is the alternative data collection tool, which you may wish to use for your assignment (Option 2). You are expected to collect data with it from at least 20 suitable respondents, present your data using diagrams, report your findings, interpret and evaluate them and make recommendations. Please be aware that you may adapt or improve on this data collection tool, but please present your revised tool in the assignment.

Introduction

I am ………………, a student studying for the Postgraduate Diploma/Masters in Public Health at the University of the Western Cape. I am gathering information from community health workers about their knowledge of diabetes and hypertension. I would like to ask you some questions, which will take about 20 minutes of your time. You may choose not to participate in this study if you prefer, and this will not change your status in the community. The information you give will not be given to anyone else except for planning interventions for the prevention of cardiovascular diseases.

Do I have your permission to continue with questions? Yes No

Signature: ______ Date: ______

KNOWLEDGE AND ATTITUDES OF CHWs ABOUT DIABETES AND HYPERTENSION

Study number: 4

1. Name of the respondent: ………………………………………….
2. Date of interview: D D M M Y Y Y Y
3. Sex: 1 Male 2 Female
4. Age at last birthday: …………
5. What is the highest standard that you have passed at school?
No Schooling 1
Standard 5 and below 2
Standard 6 up to standard 8 3
Standard 9 up to standard 10 4
Tertiary education/diploma 5

6. Are there any soccer fields or playing grounds in your area?

1= Yes 2 = No

7. Have you ever received training regarding:
Diabetes 1 Yes 2 No
High blood pressure 1 Yes 2 No

8. Do you have any close relatives such as your (father, mother, sister, aunt) suffering from the following diseases:

Diabetes 1 Yes 2 No
High blood pressure 1 Yes 2 No

9. Have you ever been told that you suffer from the following diseases?

High blood pressure 1 Yes 2 No 3 Don’t know
Diabetes 1 Yes 2 No 3 Don’t know
Stroke 1 Yes 2 No 3 Don’t know
Heart disease 1 Yes 2 No 3 Don’t know
Arthritis 1 Yes 2 No 3 Don’t know
Kidney diseases 1 Yes 2 No 3 Don’t know
Osteoporosis 1 Yes 2 No 3 Don’t know

10. What do you think is the cause of diabetes?

Too much sugar 1 Mentioned 2 Not mentioned 3 Don’t know
Too much starch 1 Mentioned 2 Not mentioned 3 Don’t know
Too much fat in the diet 1 Mentioned 2 Not mentioned 3 Don’t know
Physical inactivity 1 Mentioned 2 Not mentioned 3 Don’t know
To be rich 1 Mentioned 2 Not mentioned 3 Don’t know
Inheritance 1 Mentioned 2 Not mentioned 3 Don’t know
Malfunctioning pancreas 1 Mentioned 2 Not mentioned 3 Don’t know
Other

11. What do you think are the causes of high blood pressure?
Too much salt in the diet 1 Mentioned 2 Not mentioned 3 Don’t know
Too much starch 1 Mentioned 2 Not mentioned 3 Don’t know
Too much fat in the diet 1 Mentioned 2 Not mentioned 3 Don’t know
Physical inactivity 1 Mentioned 2 Not mentioned 3 Don’t know
To be rich 1 Mentioned 2 Not mentioned 3 Don’t know
Inheritance 1 Mentioned 2 Not mentioned 3 Don’t know
Other

12. What parts of the body are affected by diabetes?
Heart 1 Mentioned 2 Not mentioned 3 Don’t know
Lungs 1 Mentioned 2 Not mentioned 3 Don’t know
Liver 1 Mentioned 2 Not mentioned 3 Don’t know
Kidneys 1 Mentioned 2 Not mentioned 3 Don’t know
Pancreas 1 Mentioned 2 Not mentioned 3 Don’t know
Other

13. What parts of the body are affected by high blood pressure?
Heart 1 Mentioned 2 Not mentioned 3 Don’t know
Lungs 1 Mentioned 2 Not mentioned 3 Don’t know
Liver 1 Mentioned 2 Not mentioned 3 Don’t know
Kidneys 1 Mentioned 2 Not mentioned 3 Don’t know
High blood pressure 1 Mentioned 2 Not mentioned 3 Don’t know
Other

14. What lifestyle advice do you give to people with high blood pressure?

Reduce salt in your diet 1 Mentioned 2 Not mentioned 3 Don’t know
Reduce fat intake 1 Mentioned 2 Not mentioned 3 Don’t know
Do not smoke 1 Mentioned 2 Not mentioned 3 Don’t know
Exercise regularly 1 Mentioned 2 Not mentioned 3 Don’t know
Take time to relax 1 Mentioned 2 Not mentioned 3 Don’t know
Other

15. What healthy advice do you give to people suffering from diabetes?

Do not eat sugar 1 Mentioned 2 Not mentioned 3 Don’t know
Reduce fat intake 1 Mentioned 2 Not mentioned 3 Don’t know
Reduce the amount of food intake 1 Mentioned 2 Not mentioned 3 Don’t know
Eat lots of vegetables 1 Mentioned 2 Not mentioned 3 Don’t know
Reduce the amount of starch in your food 1 Mentioned 2 Not mentioned 3 Don’t know
Do not drink alcohol 1 Mentioned 2 Not mentioned 3 Don’t know
Other

16. According to your knowledge, what should people with diabetes do to keep healthy?

(Do not probe.)

Be of the correct weight 1 Mentioned 2 Not mentioned
Do not use sugar 1 Mentioned 2 Not mentioned
Reduce fat intake in diet 1 Mentioned 2 Not mentioned
Eat lots of vegetables 1 Mentioned 2 Not mentioned
Reduce alcohol intake 1 Mentioned 2 Not mentioned
Use traditional medicine 1 Mentioned 2 Not mentioned
Use herbal mixtures 1 Mentioned 2 Not mentioned
Engage in exercises 1 Mentioned 2 Not mentioned
Other ---- Explain

17. According to your knowledge, what should people with high blood pressure do to stay healthy?

(Do not probe answers.)

Be of correct weight 1 Mentioned 2 Not mentioned
Eat less fats 1 Mentioned 2 Not mentioned
Reduce alcohol intake 1 Mentioned 2 Not mentioned
Drink traditional medicine 1 Mentioned 2 Not mentioned
Take herbal mixtures 1 Mentioned 2 Not mentioned
Take home brewed beer 1 Mentioned 2 Not mentioned
Regular exercises 1 Mentioned 2 Not mentioned
Stop smoking if they smoke 1 Mentioned 2 Not mentioned
They do not do anything 1 Mentioned 2 Not mentioned
Other

4.3 DATA COLLECTION TOOL FOR IMPROVING PRIMARY PREVENTION OF CARDIOVASCULAR DISEASES

Here is the alternative data collection tool, which you may wish to use for your assignment. You are expected to collect data with it from at least 20 suitable respondents, present your data using diagrams, report your findings, interpret and evaluate them and make recommendations. Please be aware that you may adapt or improve on this data collection tool, but please present your revised tool in the assignment.

Introduction

I am ………………, a student studying for the Postgraduate Diploma/Masters in Public Health at the University of the Western Cape. I am gathering information from community members between the ages of 15 and 65 years. I am trying to find out what people in this community do to avoid cardiovascular diseases. I would like to ask you some questions, which will take about 20 minutes of your time. You may choose not to participate in this study if you prefer, and this will not change your status in the community. The information you give will not be given to anyone else except for planning interventions for the prevention of cardiovascular diseases.

Do I have your permission to continue with questions? Yes No

Signature: ______ Date: ______

1. Date of Interview: ______ Respondent Number: ______

2. Age at your last birthday ______

3. Sex

a) Male

b) Female

4. What is the highest standard that you have passed at school?

a) No schooling

b) Standard 5 and below

c) Standard 6 up to standard 8

d) Standard 9 up to standard 10

e) Tertiary education/diploma

5. Are there any soccer fields or playing grounds in your area?

a) Yes

b) No

6. If yes, who uses them? ______

7. How many times per week do you take part in exercises (in which you find yourself running out of breath and sweating)?

a) Never

b) Once a week

c) 2-3 times a week

d) 4-5 times a week

e) More than 5 times a week

8. What are some of the things that prevent you from exercising or doing physical activity?

a) Lack of exercise facilities

b) Lack of time

c) Laziness

d) Pains associated with exercise

e) I work very hard and need to rest after work

f) Fear of losing weight

9. Do you smoke? a) Yes b) No

10. Is alcohol a problem in your community? a) Yes b) No

11. If yes, what efforts are taken to prevent abuse of alcohol?

12. Who are the people who drink alcohol the most in your community? a) Young people b) Unemployed people c) Women d) Men e) Both men and women

13. When you have chicken, is it fried? a) Sometimes b) Always c) Not at all d) I never have chicken

14. When you have other meat, is it fried? a) Sometimes b) Always c) Not at all d) I never have other meats

15. When you have food such as potatoes, are they fried? a) Sometimes b) Always c) Not at all d) I never have potatoes

16. Do you add salt to cooked food? a) Yes, lots of salt b) Yes, a small amount of salt c) No d) Yes, even before having tasted food

Lastly, we provide an example of a set of guiding questions for a focus group discussion.

4.4 GUIDING QUESTIONS FOR A FOCUS GROUP DISCUSSION ON THE MEANING OF FOOD

A focus group discussion, held with 8-10 men and women grouped by sex, discussed the meaning of food and explored controversial issues.

The main question discussed was: What does food mean to you? For what purposes is food used in your family and in your community?

A checklist of semi-structured questions for prompting the discussion was used to make sure that all participants covered the following information at some stage:

. Meaning of food in families.
. Meaning of food in the community.
. Values attached to food.
. Situations in which food is used, both in families and in the community.

In the next section, we focus on the design of data collection instruments.

5 DESIGNING DATA COLLECTION INSTRUMENTS

What steps should you follow in designing a questionnaire or a focus group discussion guide? Here is a set of guidelines:

. Think about the contents of the questionnaire by considering the objectives and indicators.
. Formulate one or more questions that will provide the information needed for each indicator.
. Check whether each question measures one thing at a time.
. Avoid leading questions.
. Formulate control questions to cross-check responses to difficult questions.
. Avoid words with double or vaguely defined meanings, and emotionally leading words.
. Sequence the questions.
. Use respondent-friendly, simple language and make the questionnaire as short as possible.
. Include an introduction, which explains the focus of the instrument.
. Format the questionnaire layout and its spacing, and make it easy to code later.
. Translate it if necessary.

If you plan to capture your questionnaire electronically, a brief illustrative sketch of such a structured, pre-coded questionnaire follows below.
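To make the last two guidelines more concrete, here is a minimal sketch, in Python, of how a questionnaire could be written down as a structured, pre-coded list of questions. It is purely illustrative and is not required for the assignment: the question wording, indicator names and codes below are hypothetical assumptions, not part of the module's tools.

```python
# Illustrative sketch only: a questionnaire captured as data, using hypothetical
# questions. Each entry names the indicator it serves, measures one idea,
# and pre-codes the closed response options so answers are easy to code later.
questionnaire = [
    {
        "number": 1,
        "indicator": "caregiver employment status",  # hypothetical indicator
        "text": "Is the caregiver/mother employed?",
        "type": "closed",
        "options": {1: "Yes", 2: "No"},
    },
    {
        "number": 2,
        "indicator": "barriers to clinic attendance",  # hypothetical indicator
        "text": "What prevents you from bringing the child to the clinic?",
        "type": "open",
        "options": None,  # answered in the respondent's own words
    },
]

def looks_single_idea(text: str) -> bool:
    """Very crude check that a question does not obviously ask two things at once."""
    return " and " not in text.lower()

for q in questionnaire:
    if not looks_single_idea(q["text"]):
        print(f"Review question {q['number']}: it may measure more than one idea.")
```

Writing the instrument down in this way also makes it straightforward to keep question numbers, indicators and response codes aligned as the tool is revised or translated.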

Study these readings and use them to complete Task 4.

READINGS

SCHRU. (1997). Questionnaire Design Principles. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 19-38.

SCHRU. (1991). Ch 5 - Introduction to Questionnaire. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 123-133.

SCHRU. (1991). Ch 8 - Putting the Questionnaire Together. In Planning Healthy Communities. Bedford Park, Australia: SCHRU: 143-150.

TASK 4 – DEVELOP YOUR OWN QUESTIONNAIRE OR SET OF FOCUS GROUP QUESTIONS

Develop a data collection instrument for your own evaluation programme.

FEEDBACK

We cannot give you feedback here, but send it in for feedback by email or fax with the following title:

Draft M & E Assignment 2 - Data Collection Instrument for Feedback (Att: T. Puoane). We will try to give you feedback as soon as possible.

Before you try out your instrument, do Task 5.

TASK 5 – EMPATHISE WITH YOUR RESPONDENTS

a) How can one make interviews less threatening for the respondents?

FEEDBACK

Firstly, interviews need to be sensitively handled to avoid doing anything culturally unacceptable or embarrassing. Secondly, they need to be gender sensitive; for example, in certain cultures it may be improper for a man to interview a married woman on his own. Furthermore, people are usually uncomfortable giving personal details to strangers. It therefore helps to use local interviewers, or to be accompanied by someone who is well known and respected in the community, and to assure respondents of confidentiality.

Note also that if you plan to write answers on a questionnaire or use a tape recorder during an interview or focus group discussion, you should first explain this to the respondent/s and ask for permission to do so.

5.1 Pre-Testing Your Instrument

Once the instrument has been developed, there is a need to pre-test and revise it. The purpose is to identify any problems or weaknesses in the instrument or in the wording. Pre-tests should be conducted in a population similar to that of the programme and under similar field conditions. In pre-testing, your aim is to check on the following:

. The level of understanding of the questions by the respondent.
. Understanding of the language of the questionnaire.
. The ease with which the instrument is administered.
. The adequacy of instructions for the interviewer.
. The level of clarity for training interviewers.
. The sensitivity of questions for either the interviewers or respondents.
. The adequacy of recording space for open-ended questions.
. The length of time it takes to administer the instrument.

Once you have collected your data, you will need to code it.

6 CODING YOUR DATA

After you have collected your data, you will need to code it, or assign a tag or identifier to responses of both closed and open ended questions.

Weiss (1998) explains it as follows: “[Coding is t]he process of organizing data into sets of categories to capture the meaning or main themes in the data. Coding is mainly done in the analysis of quantitative data, but qualitative data can also be grouped into code categories.” (Weiss, 1998: 328)

In the case of questionnaire 4.2 above, you will need to count how many males and how many females responded to the questionnaire. You could code using numeric values, such as 1 for male and 2 for female, or alternatively use M for males and F for females and count them up afterwards.

Each code should be specific to that question and remain the same for all respondents for that question. It is essential, both for data entry and interpretation, that codes be kept consistent throughout the data collection instrument and across the set of collection instruments. For example, if Yes = 1 and No = 2 for question number 1, then Yes should remain 1 for all remaining questions and in the other instruments which are part of the evaluation. It is important that the meanings of all codes be recorded in a single place, such as a data coding book. It is also always important to have a backup copy.
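As an illustration of these points, here is a minimal sketch, in Python, of a data coding book and a simple tally. The variable names, codes and sample responses are hypothetical assumptions, used only to show the idea of keeping codes consistent and recorded in one place.

```python
# Minimal sketch: a data coding book kept in one place, plus a simple tally.
# Codes follow the convention described above (e.g. 1 = Male, 2 = Female;
# Yes = 1 and No = 2 for every yes/no question in the evaluation).
from collections import Counter

CODEBOOK = {
    "sex": {1: "Male", 2: "Female"},
    "trained_diabetes": {1: "Yes", 2: "No"},        # hypothetical variable names
    "trained_hypertension": {1: "Yes", 2: "No"},
}

# Hypothetical coded responses, one dictionary per respondent.
responses = [
    {"sex": 1, "trained_diabetes": 1, "trained_hypertension": 2},
    {"sex": 2, "trained_diabetes": 2, "trained_hypertension": 2},
    {"sex": 2, "trained_diabetes": 1, "trained_hypertension": 1},
]

def tally(question: str) -> dict:
    """Count how many respondents gave each coded answer to one question."""
    counts = Counter(r[question] for r in responses)
    # Translate codes back into labels using the coding book.
    return {CODEBOOK[question][code]: n for code, n in counts.items()}

print(tally("sex"))               # e.g. {'Male': 1, 'Female': 2}
print(tally("trained_diabetes"))  # e.g. {'Yes': 2, 'No': 1}
```

Keeping the code meanings in one structure like this mirrors the data coding book described above, and backing up that file gives you the backup copy of the code definitions.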

7 SESSION SUMMARY

In this session, we have gone through a range of techniques that can be used to collect monitoring and evaluation data. We have pointed out some important points in the development of data collection instruments, with particular attention to developing a questionnaire and focus group discussion guidelines.

You are now ready to go to the field to collect your data. We hope that this session will assist you in completing your second assignment.

This is the end of Unit 4. In the course of this unit you have been guided through the process of selecting indicators for a health/development programme. You have hopefully realised that the process of selecting indicators begins with identifying programme objectives and the activities needed to achieve the objectives. The next step is to decide what measurable results will indicate that the programme is effective in achieving the desired outcome. These are your indicators.

8 REFERENCES

. Weiss, C. H. (1998). Evaluation. New Jersey: Prentice Hall.

