INCOMPLETE DRAFT FOR DISCUSSION ONLY; COMMENTS WELCOME. NOT FOR QUOTATION OR GENERAL CIRCULATION.

Stakeholder engagement for development impact and learning

Sandy Oliver,1 Nancy Cartwright,2,7 Mukdarut Bangpan,1 Kelly Dickson,1 David Gough,1,7 James Hargreaves,3 Kirrily Pells,4 Chris Roche,5 Ruth Stewart1,6

1 EPPI-Centre, UCL Department of Social Science; 2 University of Durham; 3 LSHTM; 4 Social Science Research Unit, UCL Department of Social Science; 5 La Trobe University, Melbourne; 6 Africa Centre for Evidence, University of Johannesburg; 7 CEDIL ILT.

This is an interim version of a paper in progress with gaps and limitations in the content and interpretations. It is made publicly available in order to invite feedback, either via the CEDIL website or via discussions at the CEDIL Inception Paper Launch on Thursday 25 January 2018.


Table of Contents

Abstract ...... 3
1) Introduction ...... 4
2) Methods of enquiry ...... 5
3) Differing perspectives on research and social development ...... 7
4) Models of stakeholder engagement with research findings ...... 10
   a. Making decisions with global evidence ...... 11
   b. Making decisions with global and local evidence ...... 13
5) Models of stakeholder engagement with research production ...... 14
   a. Producing generalisable evidence for common problems ...... 15
   b. Producing tailored evidence for specific decisions ...... 16
6) Lessons from political science, behaviour change theory and social psychology ...... 18
   a. The politics of engagement ...... 18
   b. Behavioural change and engagement ...... 20
   c. Social psychology of engagement ...... 21
   d. Summary of findings ...... 23
   e. Strengths and limitations ...... 23
   f. Recommendations for impact evaluations ...... 23
   g. Recommendations for systematic reviews ...... 23
   h. Recommendations for development programmes ...... 23
   i. Research priorities addressing stakeholder engagement ...... 23
Acknowledgements ...... 23
References ...... 23
Appendix 1: Research Contributions Frameworks outcomes chain (Morton, 2015a) ...... 29
Appendix 2: Guidance for stakeholder engagement ...... 30


Abstract

This paper explores stakeholder engagement with research evidence. A theoretically driven search strategy identified an extensive literature on stakeholder engagement with both the research process (to make research more relevant to stakeholders) and research findings. Methods and models for stakeholder engagement were analysed within a framework that resonated with what we know about evidence synthesis and adaptive decision-making, and with wider theory and empirical evidence.

Effective stakeholder engagement with research findings includes: facilitating access to research evidence, building decision-makers’ skills to access and make sense of evidence, and fostering changes to decision-making structures and processes. Further understanding of stakeholder engagement and promising ways forward also come from collective reflection, synthesising qualitative research, political science, the science of behaviour change and social psychology.

Overall, methods and tools are more developed for engaging stakeholders with research findings than with the research process. They are better developed for stakeholders making sense of evidence addressing relatively simple questions than for making sense of more complex questions. Nor are methods and tools available to support stakeholders making sense of the findings from more than one source at a time.


1) Introduction

The primary purpose of this paper is to help research teams, particularly those commissioned within the CEDIL programme, to choose appropriate models for engaging stakeholders with research, in order both to encourage the use of research findings by policy makers, programme managers and practitioners and to help them use research evidence more effectively. It does this by collating the relevant literature on engagement for impact evaluations and evidence synthesis for international development, explaining how to choose among existing models of engagement, and signposting appropriate guidance.

This paper adopts a fundamental principle that sound use of research findings involves, wherever possible, drawing on the findings of multiple studies brought together systematically in an evidence synthesis. The potential benefits of such an approach come from drawing on more evidence than a single study and on evidence that allows greater consideration of variation in populations, interventions and contexts, as well as variation in the quality of studies. The confidence that can be placed in that evidence depends upon the rigour and relevance of the evidence synthesis and the rigour and relevance of its included studies. Opportunities to judge or enhance this rigour and relevance therefore include: stakeholder engagement with the focus and questions addressed by evidence syntheses, their framing, methods and findings; and stakeholder engagement with the focus and questions addressed by primary impact evaluations, their framing, methods and findings.

This paper therefore considers engagement with stakeholders (anyone who contributes to or may be affected by the work of an organisation) throughout the life cycle of impact evaluations and evidence synthesis, from deciding the issue that needs addressing and what research might help to making use of the findings. At any of these stages, the purpose of stakeholder engagement within this scope is to promote the better use of study findings either by (a) delivering research to meet the needs of policy makers, programme managers or practitioners both in terms of content and format; or by (b) providing the means for policy makers, programme managers or practitioners to find research that meets their needs.

Making sense of the diversity of stakeholder engagement within this scope is essential for choosing appropriate models for particular circumstances. How stakeholder engagement happens differs depending on: the nature of the task; which researchers and stakeholders are engaged, and their motivations; how they are brought together; how their discussions and decision-making are supported; and how they interact with the research and each other (Oliver et al 2014). This raises the question of which models and tools for engaging stakeholders for impact evaluations and evidence synthesis are widely applicable, and which suit particular tasks, particular stakeholder groups and their motivations. This paper therefore collates and enhances existing guidance for study teams on engaging stakeholders in their on-going work, guidance for stakeholders on what they should be requesting of study teams, and guidance on disseminating study outputs in ways that make their findings readily useable by policy makers, programme managers or practitioners.

The backdrop to this paper is the interface between research knowledge and knowledge held by policy makers, programme managers and practitioners. In reality there are multiple interfaces and multiple forms of knowledge flowing through multiple routes, but to initiate our analysis we consider this area as having a single interface that is approached from different perspectives. Researchers approach this interface with their roles, responsibilities and skills framed largely in terms of theories, research methodologies and empirical findings. Policy makers, programme managers and practitioners approach this interface with their disparate roles, responsibilities and skills framed largely in terms of decisions they face about how to improve lives in terms of health,
welfare, education, infrastructure, governance or many other ways. Knowledge may flow from research findings, and judgements about the confidence that can be placed in them, to inform decisions, or from policy teams, programme managers or practitioners to inform the research agenda.

We assume that fundamental to these knowledge flows is the formal synthesis of research findings from multiple studies. The purpose is to avoid the risks of basing decisions on single studies which might have been too small or set in a context dissimilar to the context of subsequent decision-making, or on studies that by chance offer spurious findings. Considerable enthusiasm for aggregating the findings of similar studies to inform decisions about professional practice emerged first in the 1980s in medicine, in the global north. This movement spread from medicine to other professions and policy sectors, and from the global north to the global south. As it spread it met new challenges arising from differences in the types of decisions being made (policy and personal decisions, not just professional practice) and from differences in the literature available (more social issues, not just physiological or technical issues). Nursing emphasised the value of qualitative research for learning about ‘caring’, not only ‘curing’. Health promotion emphasised the value of process evaluations for learning about the feasibility, acceptability and fidelity of programmes. International development emphasised the analysis of long causal pathways. Although the knowledge needs varied, and the synthesis methods evolved to meet these needs, the underpinning principle of taking into account variability in the content and quality of studies was unchanged.

Commonly noted barriers to the flow of knowledge from research to stakeholders are research being inaccessible, irrelevant or untimely for decision-makers (Lavis et al 2003; Oliver K et al 2014; Langer et al 2016). Making research accessible, relevant and timely requires knowledge to flow from stakeholders to researchers too. The focus of this paper is models of stakeholder engagement to facilitate these complementary knowledge flows: the models available, their suitability in different circumstances, and the guidance available for their practical application. Given the infinite circumstances in which impact evaluations or evidence syntheses are produced or their findings used, and the vast array of stakeholder engagement models available, this paper describes the key dimensions that distinguish the perspectives of research and other professional groups on social development or aid programmes in low and middle income countries. It distinguishes: research production and research use; and knowledge available or required for widespread issues and for local issues. It then considers various models of stakeholder engagement, making clear where and why they suit particular organisational or on-the-ground contexts, before highlighting gaps in current knowledge about stakeholder engagement for development impact and learning. Finally it signposts guidance publicly available for different ways of engaging stakeholders.

2) Methods of enquiry

Given the breadth of decision-making by policy makers, programme managers and practitioners across social development and aid programmes, the methods and authorship for this paper were chosen to span a broad range of academic disciplines.

Between them the authors brought experience in childhood studies, crime, education, epidemiology, geography, global health, health systems, philosophy, politics, psychology, social care and sociology. Each author brought experience of working in two or more academic disciplines. Their current interests include: children’s rights, conflict and peacebuilding, gender and rights, health inequalities, HIV/AIDS, mental health, philosophy of science, research methodology for impact evaluations in developing countries and evidence synthesis, and spanning boundaries between research and policy or practice, or between academic disciplines. They also brought a track
record in different traditions of stakeholder engagement spanning: action research with donors, field research in Africa, policy engagement in Africa and Asia, capacity building for research and research use, participatory impact evaluation, politics of evidence and results, politics of gender, and research priority setting.

Bearing in mind the broad range of stakeholder traditions and the extensive literature available in many of these areas, the initial focus was on gaining an overview by drawing on existing evidence syntheses. Searching began with sources rich in evidence syntheses for developing countries: the 3ie database of systematic reviews; 3ie Evidence Gap Maps; and International Rescue Committee Evidence Maps.

To complement the research literatures we sought guidance available from leading organisations. We crowdsourced evidence and guidance through two series of events in the week of 6-12 November 2017, convened as Humanitarian Evidence Week by Evidence Aid1 and as London Evidence Week by 3ie.2 The results of these searches were supplemented by evidence syntheses and seminal papers identified by the authors in their areas of expertise. We also crowdsourced guidance through Twitter using #GlobalDev and a handle with a track record of tweeting about evidence, engagement and international development. We worked closely with DFID’s Evidence and Learning Facility for South Sudan to consider the applicability of models for fragile and post-conflict states, as well as for developing countries more widely.

Early searches identified existing literatures about models and tools for stakeholder engagement that have been synthesised in systematic or critical reviews. These focus on: setting priorities for research (McGregor et al 2014); knowledge brokering for guiding studies and for making use of their findings (Bornbaum et al 2015; Kislov et al 2017); managers and practitioners as partners in research (Olivier et al 2016); disseminating research findings (Wilson et al 2010); the science of science use (Langer et al 2016); combining evidence for decisions from mixed sources (Koon et al 2013); and syntheses that underline the importance of politics, and the political and institutional context, in contributing to the likelihood of research uptake, for example in health policy (Liverani et al 2013), nutrition policy (Cullerton et al 2016), transport policy (Sager 2007), and low carbon technology policy (Auld et al 2014). These reviews variously focused on the whole field (the science of science use), specific parts of the process (from agenda setting to use), or various actors (knowledge brokers, managers and practitioners). Between them, they included research addressing (inter)personal or organisational engagement with evidence, and drew on behavioural science, communication science and political science to understand or influence the actions of individuals and organisations, and to understand the significance of the wider context.

Recognising this diversity of focus prompted the need for an overarching framework that would accommodate various models, perspectives and contexts to make sense of the available literature. Such a framework was constructed by combining two frameworks: one addressing evidence synthesis (Oliver and Dickson 2016) and the other addressing adaptive decision-making (Institute of Development Studies 2015). The adoption and adaptation of these frameworks is described in section 3.

Research findings and guidance for stakeholder engagement were then presented within this framework.

1 http://www.evidenceaid.org/events-and-training/hew/
2 http://www.3ieimpact.org/en/events/3ie-conferences-and-workshops/london-evidence-week-2017/


3) Differing perspectives on research and social development

Researchers and stakeholders working for social development and aid programmes differ in terms of their priorities, values, language and time scales. Impact researchers, whether conducting primary impact evaluations or systematic reviews, tend to emphasise rigour in study design and, when aggregating the findings of similar evaluations, the generalisability of study findings. In contrast, stakeholders with an interest in interventions tend to emphasise the relevance and appropriateness of potential solutions to their local problems. This paper is based on the assumption that better research and better decisions emerge when these different perspectives are brought closer together to enhance both evidence-informed policy and policy-informed evidence.

Early considerations of getting research into practice saw this as a linear process, with knowledge flowing from primary studies to syntheses of effects to inform policy and practice. More realistic is a system where knowledge flows in both directions (evidence-informed policy and policy-informed evidence), and may involve some co-construction of ideas and evidence. Knowledge flows in both directions when researchers ask potential decision-makers to comment on their work in progress; equally, if new evidence is not readily available as a synthesis, decision-makers may demand it and, in doing so, have a significant say in the form in which it is synthesised. A simple, linear model is also necessarily expanded when problems and possible responses are not clearly understood before the evidence is synthesised, or when problems and appropriate responses vary across different contexts. Expanding the linear model to encompass a two-way flow of knowledge and take into account variations in understanding and context remains a fairly instrumentalist approach.
Here we describe how this expanded model offers a useful starting point for investigating stakeholder engagement to promote the use of study findings even though others offer more complex and nuanced accounts from political science, policy studies, and sociology of knowledge (Boswell and Smith 2017).

Policy makers, programme managers and practitioners in social development work in a context that is subject to considerable variation and change. They are required to make decisions, whether research findings are available or not, and their decisions may be informed by politics, economics, social norms, organisational structures and procedures, workforce skills and practical experience. Top-down standards and accountability systems need to be balanced with bottom-up local knowledge and experimentation. Organisational management has traditionally been portrayed as a top-down, linear process characterised by standardisation and control to manage the planning and execution of repeatable tasks (Taylor 1911). Natsios (2010) noted the tensions that arise between the compliance this requires of aid programmes and the knowledge held by programme implementers. These tensions are less apparent where a less structured approach has evolved, in business (Sullivan 2011) and natural resource management (Plummer et al 2012; Hasselman 2017), of adaptive management through iteration and change, where change is emergent and contextual and relies on organisations having the capacities and processes to generate novelty in day-to-day performance. This comparison was made by Ben Ramalingam at a workshop convened by USAID, mSTAR and the Institute of Development Studies (IDS) in London on 27-28 October 2015. Workshop discussion developed a framework, later refined by Green (2015), to accommodate both approaches. The left hand matrix of Figure 1 illustrates how this framework relates global knowledge offering evidence of causality to the knowledge of local policy makers or practitioners who are familiar with local contexts and trends.

Decisions are often made on the basis of global knowledge: for instance, when producing vaccines, bednets to protect against malaria, or cookstoves to prevent accidents and indoor pollution, linear programming is appropriate (cell A). Indeed such evidence underpins international guidance addressing many infectious diseases and other problems. However, even when global evidence is
available, effects may be less predictable because of the inherent variability of social interventions (e.g. cash transfers or the delivery of technical or medical interventions such as vaccines, bednets or cookstoves), because the context is not sufficiently understood, or because it is not sufficiently stable (e.g. following humanitarian crises) (cell B). Some decisions are made in the absence of global evidence about causality. When the context is familiar and understood, this understanding can inform learning by doing, trial and error, or problem-driven iterative adaptation, alternative approaches that may vary in their formality (cell C). When both causality and context are unknown (cell D), the first step is to learn about both from local populations, by capturing their insights about factors influencing their lives and recognising exceptional instances of coping with challenging circumstances (positive deviance). The term originally used for a similar matrix, both in the workshop where it emerged and in Green’s blog (2015), was ‘adaptive management’. We have chosen the term ‘adaptive decisions’ to explicitly broaden the scope to encompass policy, management and practice decisions. Thus the left hand matrix of Figure 1 represents the varied landscape of decision-making for social development or aid programmes and, within this, cells A and B provide the backdrop to diverse models for stakeholders to engage with research findings. Cells C and D, where global evidence is absent, are beyond the scope of this paper, although there is systematic review evidence addressing community engagement in the governance of public services (Lynch et al 2013; Westhorp et al 2014) and capturing local knowledge in humanitarian settings (Humanitarian Leadership Academy 2017).

The adaptive decisions matrix complements well an existing framework that accommodates the varied landscape of evidence synthesis (Oliver and Dickson 2016), presented on the right hand side of Figure 1. This second framework distinguishes how different approaches to synthesising evidence systematically depend on whether the purpose is to produce evidence for multiple audiences and whether the literature available already offers key concepts that are clear and widely agreed. Where key concepts are clear and widely agreed before preparing a synthesis as a public good, considerable standardisation underpins syntheses that aggregate the findings of similar studies (Model 1, e.g. reviews of vaccines). Even when key concepts are clear and widely agreed in advance, some syntheses are rapidly tailored to specific settings for a particular policy team (Model 2, e.g. medical malpractice in obstetrics). If a policy team requires evidence urgently, but key concepts are not clear and widely agreed in advance, the policy and research teams develop a consensus together, sometimes with the support of a knowledge broker (Model 3, e.g. whether public reporting of health system performance improves care). If key concepts are not clear and widely agreed in advance, and evidence is required for global audiences, consensus can be developed among a much broader set of stakeholders (Model 4, e.g. shifting tasks to less qualified staff). Thus the right hand matrix provides the backdrop to diverse models of stakeholder engagement with systematic review production.

The two matrices, one of evidence synthesis and the other of adaptive decisions, are linked by researchers offering evidence (where it is available) for social development and international aid and decision-makers demanding evidence and making decisions whether or not it is available. This framework recognises that researchers and decision-makers encounter uncertainty. Researchers (right hand matrix) encounter uncertainty when required to systematically review literatures where key concepts are not clear or widely agreed in advance. They respond by constructing concepts and definitions, sometimes engaging stakeholders in the process.


Figure 1: Generating and sharing knowledge for social development and aid programmes


The appropriate choice of engagement model depends upon the organisational context and the methodological approach to systematic reviewing (Oliver and Dickson 2016). Decision-makers (in the left hand matrix) encounter uncertainty when available evidence was generated in other contexts; implementation, if considered appropriate, therefore involves a degree of adaptation and monitoring, based on tacit or experiential knowledge, experimentation and feedback loops. They also encounter uncertainty if global evidence is absent. In such cases, decisions are informed by engaging local stakeholders to capture their knowledge of context or coping strategies (which is beyond the scope of this paper). The appropriate choice of engagement model depends upon the knowledge already available, the knowledge that is lacking, and local opportunities and constraints. These are considered in more detail in section 2. Although evidence syntheses rather than primary studies are emphasised in this framework, as the relevance and rigour of systematic review evidence depends upon the relevance and rigour of its constituent studies, it is important to consider models of stakeholder engagement with primary impact evaluations too.

Overall, the tasks of evidence production and evidence use, and the stakeholder engagement with both of them, sit within a wider system of individual and collective behaviour. What we know about this system comes from behavioural science, political science, social psychology and the sharing of organisational knowledge. This paper therefore presents what we know about specific models of stakeholder engagement and then what we know from wider relevant literatures.

4) Models of stakeholder engagement with research findings

This section considers ways to enhance stakeholders’ engagement with global evidence (section 4a) and methods for engaging with global and local evidence simultaneously (section 4b). Figure 2 presents some methods suited to the types of decisions being made. Some decisions are informed primarily by global evidence, especially for international guidance (cell A of Figure 2), and some are informed by both global and local evidence, especially for national or local guidance (cell B of Figure 2). Stakeholder roles in decisions made in the absence of global evidence, whether already available or commissioned for the purpose (e.g. public consultations, social learning or community-based participatory research, in cells C and D), are beyond the scope of this paper.


Figure 2: Decision-making models to suit the availability of global and local knowledge (NB decision-making in the absence of global evidence of causality (left hand column) is beyond the scope of this paper)

a. Making decisions with global evidence

Traditional models of stakeholder engagement fit within the ‘push’ paradigm of moving research findings to decision-making settings. These initiatives have focussed on engagement with completed research, emphasising understanding and making use of the research findings. Where researchers are working to put research findings on the desks of decision-makers, there is a need to support decision-makers in making sense of the findings. Conversely, other models aim to increase the usefulness of research to decision-making. These range from engagement with setting research agendas and guiding research through project advisory groups, through to making sense of the findings of completed research, and indeed co-producing research evidence.

As with all models, this picture is a simplification. More realistic is the concept of knowledge mobilisation, where knowledge flows are multidirectional and meanings change along the way (Wieringa and Greenhalgh 2015). Similarly complex is the notion of Mode 2 research where, instead of the production and use of research knowledge being discrete processes, they are intertwined as research is produced within the context of its use (Nowotny et al 2001; Greenhalgh et al 2016). Some order is applied to the messiness of context by thinking in terms of successive stages of change that involve changes in nested levels of context. Cooke (2005) offers a theoretical understanding and empirical evidence of research use developing hand-in-hand with research production, with synergistic changes happening at the levels of individuals, teams, organisations and wider systems. This understanding applies well to the production of evidence syntheses (Oliver et al 2015). Morton (2015a) distinguishes ‘research uptake’ (where research evidence is considered) from ‘research use’ (where something has changed such as increased knowledge and skills about an issue) and from ‘research impact’ (where something in the world has changed). Symbolic use of research is not
research impact as research is simply being used to support a decision that has already been made rather than influencing that decision.

Langer et al (2016) combined these ideas in a multi-stage, multi-level framework that accommodated behavioural theory to review the literature on the effectiveness of different strategies to increase research use. They examined the levels at which interventions were applied or had their effect, the mechanisms by which the effect was produced, and the behavioural components that may apply in the application of the mechanisms. They identified 36 existing reviews assessing what interventions work to increase research use. Box 1 summarises the findings from synthesising the 23 reviews rated moderate to high trustworthiness and relevance. (Additional findings from their second review of the wider social science literature are considered in section 6.)

Box 1: What works to increase research use by decision-makers? (Langer et al 2016)

Evidence of effects (evidence use outcome)

• Interventions facilitating access to research evidence, for example through communication strategies and evidence repositories, conditional on the intervention design simultaneously trying to enhance decision-makers’ opportunity and motivation to use evidence (reliable evidence).1
• Interventions building decision-makers’ skills to access and make sense of evidence (such as critical appraisal training programmes), conditional on the intervention design simultaneously trying to enhance both capability and motivation to use research evidence (reliable evidence).
• Interventions fostering changes to decision-making structures and processes by formalising and embedding one or more of the other mechanisms of change within existing structures and processes (such as evidence-on-demand services integrating push, user-pull and exchange approaches) (cautious evidence).2
• There is reliable evidence that some individual interventions characterised by a highly intense and complex programme design lead to an increase in evidence use. Overall, however, and based solely on observation, simpler and more defined interventions appear to have a better likelihood of success.

Evidence of no effects (evidence use outcome)

• Interventions that take a passive approach to communicating evidence and only provide opportunities to use evidence (such as simple dissemination tools) (reliable evidence).
• Multi-component interventions that take a passive approach to building EIDM skills (such as seminars and ‘communities of practice’ without active educational components) (cautious evidence).
• Skill-building interventions applied at a low intensity (such as a once-off, half-day capacity-building programme) (cautious evidence).
• Overall, unstructured interaction and collaboration between decision-makers and researchers tended to have a lower likelihood of success. However, clearly defined, light-touch approaches to facilitating interaction between researchers and decision-makers, engagement in particular, were effective in increasing intermediate CMO outcomes (cautious evidence).

Absence of evidence

• Interventions building awareness of, and positive attitudes towards, EIDM.
• Interventions building agreement on policy-relevant questions and what constitutes fit-for-purpose evidence.


1 ‘Reliable’ refers to evidence based on reviews rated high trustworthiness and relevance in the weight of evidence assessment.
2 ‘Cautious’ refers to evidence based on reviews rated moderate trustworthiness and relevance.

NB We’re not yet clear which of these findings may be rooted in or transferable to developing countries.

Between them, effective interventions facilitate access to research evidence, build decision-makers’ skills to access and make sense of the evidence, and embed evidence-informed decision-making in structures and processes. The vision is to create a culture of critique, in which potential users of research are encouraged to critically appraise the research findings that cross their desks, whether in the media, in published research, or in the outputs of their own commissioned evidence.

Langer et al (2016) acknowledge the reliable evidence that some individual interventions characterised by a highly intense and complex programme design lead to an increase in evidence use. Comprehensive training approaches have emerged which support decision-makers to access, appraise and integrate evidence. There is emerging evidence that training is more successful when conducted as a workshop in which decision-makers are able to share their own priorities and challenges, and in which support for making sense of research is then tailored to these needs (Stewart et al 2017). Tailoring the training in this way may well enhance participants’ motivation and opportunities, as well as their capacity, a combination shown to increase research use (Langer et al 2016). We also know that on-going support that allows the building of trust between researchers and decision-makers is important for increasing evidence use (Jordaan et al in press). This might be via mentorships, secondments and other longer-term arrangements. In some cases, where trusting relationships exist between the producers and potential users of research, we have seen a range of stakeholders working together to co-produce evidence (see the discussion of collaborative research partnerships above).

At the core of many skill-building interventions are tools to support critical appraisal of research, a necessary but not sufficient condition for enhancing research use (Horsley et al 2011, Langer et al 2016). Many of these tools (see Appendix) guide decision-makers to consider both the rigour and the relevance of the available evidence. Although they typically ask whether the population or intervention of the study being considered is relevant to the reader’s context of interest, they assume that readers are familiar with that context and offer little or no guidance on how to make such judgements.

b. Making decisions with global and local evidence
The tools supporting decision-makers’ critical engagement with research typically address the appraisal of individual research reports (systematic reviews or primary studies) addressing one issue at a time. In reality, policy decision makers and other stakeholders are interested in gathering and synthesising evidence on many issues at a time, in order to develop, modify and implement interventions. They struggle to use evidence syntheses that do not cross the same boundaries they are required to cross in their decision-making, for instance combining basic sciences, operational sciences and systems thinking (Oliver and Dickson 2016). This raises the challenge of just what policy decision makers are to do with the plethora of information available in order to make credible predictions about how to implement an intervention and what its outcomes will be in a particular individual setting. Here there is a lacuna that could be filled by a CEDIL programme – little guidance is available on what to do and how to do it.


Ideally, decision makers need to be able to consult a local causal model of what they think will or can happen in the specific context in view. They can be aided in this if a good ‘theory of change’ of what is supposed to happen if the intervention is to be successful is already available. They can then look at the steps in the theory of change, one by one, and think through what is necessary to have in place, or to get into place, if those steps are to carry through in this individual setting.

But at this stage there is a special problem in synthesising the evidence. Most evidence synthesis methods, including systematic reviews, synthesise evidence about roughly the same claim. Making predictions about a specific individual case involves synthesising evidence about a vast variety of different things. In nice cases there will be some evidence about what has happened on average across the sites studied. Beyond this, we can hope there will be evidence about how one stage is supposed to lead to another and about what the moderator variables for these might be in the target setting, some evidence on what might derail the whole thing, and so on. Some of this will be very trustworthy, some less clear, and for some things that are necessary for the process to carry through there will be no evidence at all. How are decision makers to think about all that?

Of course decision makers may not think about any of these things. But whether they do or not, these are the kinds of factors likely to affect success, so it may be a good idea to think them through as well as one can with the time and resources available. This means juggling many different pieces of evidence of differing quality, bearing on different aspects, and dealing with big gaps. How are decision makers to manage this? This is an area ripe for research in a CEDIL programme.

One place to look for help is in the legal realm. Synthesising a variety of evidence about a variety of different aspects is a lot like what a jury does in deciding guilt or innocence. There are already tools available to help in representing legal evidence and the relations among the various pieces available, for instance Wigmore charts (Helper et al 2007). While these are not immediately usable for DFID purposes, they could supply a good starting model that could be adapted to the kinds of information, and the evidence for it, that bear on the success of a development intervention in a specific setting.

Once a policy decision has been made, how implementation will fare in different contexts may be anticipated by gathering and applying local knowledge. There is a growing literature on discrete choice experiments with hypothetical scenarios to enhance stakeholder engagement as a strategy to improve implementation (Salloum et al 2017).

5) Models of stakeholder engagement with research production
The rationale for focusing on stakeholder engagement with research production is to produce research that is more relevant and therefore more likely to be considered by stakeholders. The methods available may be applied to the production of evidence for global use, or to the production of evidence for local use when facing specific decisions.


Figure 3: Evidence synthesis models to suit common problems and specific decisions

a. Producing generalisable evidence for common problems
Enthusiasm for evidence-informed decisions, led by medicine in the 1980s, was followed within a decade by ‘needs-led’ research agendas where policy makers, managers, practitioners and service users played a role in setting priorities for commissioned research. In the published literature about comparative effectiveness research in health, stakeholders were most often patients and then clinicians, who were more commonly involved in the earlier (prioritization) than in the later (implementation and dissemination) stages of research (Concannon et al 2014).

Establishing ‘needs-led’ research agendas prompted consensus development methods for setting priorities and developing research questions. Given the time these methods take, they are best suited to generating evidence addressing common problems (cell 1 in Figure 3), rather than specific urgent decisions. There is a growing literature on priority setting for health research in developing countries, where lessons have been collated in a systematic review (McGregor 2014). This review found stakeholder engagement to be a common challenge, with little information provided on how stakeholders (most often researchers and government) were engaged. ‘Some reports did note that participant confidentiality was essential to ensure unbiased opinions were provided’ (McGregor 2014).

Moving from whole research agendas to individual studies, stakeholder involvement is key to making research relevant to policy and practice decision-makers and the populations they serve. Leading examples are in the area of HIV where patient activists influenced the designs of early drug trials. Now patient advocacy is on a global scale (see, for instance Global Advocacy for HIV Prevention, https://www.avac.org/). Such organisations explicitly aim to drive the research agenda and have patient / client / user perspectives at the forefront (https://www.avac.org/what-we-do).


Taking HIV in Africa as a focus, we see all types of stakeholders engaging with on-going studies through various structures (Box 2).

Box 2: Stakeholder engagement in African HIV trials

Community advisory boards are a common mechanism for stakeholder engagement and feedback with local policy officials, front line staff, and clients. Lessons from the field reveal early and continuous engagement as critical to the acceptance of the large, community-randomized studies required for prevention research, using methods adapted to local contexts, and ‘building trusting relationships between study stakeholders as necessary precursors to the instrumental goals which strengthen the research quality’ (Simwinga et al 2016).

Co-researchers exemplify the “nothing about me, without me” idea that is strongly rooted in the HIV world and elsewhere. Some advocacy groups, particularly those representing the interests of men who have sex with men in the US, and increasingly unions of female sex workers in developing countries, have been influential in mainstreaming this approach, leading to many projects now including representatives from such groups within the research staff (Blumenthal and DiClemente 2004; Shen et al 2017).

High level advisory boards for large trials include members from organisations that set standards internationally, eg UNAIDS, WHO etc.

The World Health Organisation, the normative body for public health policy, commissions both primary studies and evidence syntheses to inform its work and influence others.

Specific processes for stakeholder engagement apply at different chronological stages of a study. Formally co-developing Theories of Change for primary studies as part of the research process helps establish a shared understanding of the aims of an intervention and the intended pathways of change, and then of the interpretation of the findings. Theories of Change workshops have been successfully integrated into the MRC framework for complex interventions (De Silva et al 2014).

Stakeholder engagement with the process of evidence synthesis has lagged behind engagement with primary impact evaluations. Notwithstanding early examples from the Cochrane Collaboration and the EPPI-Centre, critical reflection on the process and its impact in health and social care (leading sectors for stakeholder engagement) has been rare (Boote et al 2011, 2012), particularly for stakeholder engagement with syntheses of qualitative research (Rees 2017). Reflective accounts note concerns about the most appropriate stakeholders to involve, the implications for time and resources, challenges imposed by research ethics systems, group dynamics, sensitive topics, and episodes of illness (Rees 2017).

b. Producing tailored evidence for specific decisions
Research produced for specific decisions is typically informed and shaped by the policy makers commissioning it and the people most likely to be affected in their working or personal lives by any findings if subsequently implemented. For primary research, this has been distinguished from researcher-led, disciplinary-based knowledge production (Mode 1) as research shaped by its context of use (Mode 2) (Gibbons et al 1994). A critical literature review and a case study of studies co-creating knowledge with local stakeholders found that:


Key success principles included (1) a systems perspective (assuming emergence, local adaptation, and nonlinearity); (2) the framing of research as a creative enterprise with human experience at its core; and (3) an emphasis on process (the framing of the program, the nature of relationships, and governance and facilitation arrangements, especially the style of leadership and how conflict is managed). (Greenhalgh et al 2016).

A more applied document, developed from academic and iNGO expertise, offers practical guidance for developing such partnerships and engaging with the evidence (Cornish et al 2017).

For the purpose of conducting evidence syntheses for specific decisions, stakeholder engagement is vital to ensure that evidence is tailored and relevant to the needs of local policymakers. The applicability of evidence can be enhanced by engaging with local stakeholders who help shape and contextualise the review findings and support the identification of gaps in the evidence base to be addressed by future knowledge production. These processes may be supported by a broad set of institutional mechanisms, standardised procedures, interpersonal relationships and communication frameworks (Oliver and Dickson, 2016).

Stakeholder engagement processes are supported by institutional structures, which include mechanisms for effective communication, fostering the mutual understanding needed to inform the design and shape of a review and supporting collaborative working practices. Establishing responsive review facilities with stable funding could provide a platform to facilitate the knowledge translation process, linking researchers with specific policy teams and, when appropriate, making use of experienced knowledge brokers to mediate policy input directly as part of responding to urgent needs for evidence (Bornbaum et al., 2015; Ward et al., 2009). Interactions between researchers and policy teams at different points in the review process would enhance the applicability of the knowledge produced to local policy demands. Indeed, when such an institutional structure is in place, it can help mitigate some of the barriers to research use by producing timely evidence and providing access to high quality research (Oliver et al., 2014).

The nature and extent of engagement are influenced by the type of evidence synthesis product and the policy landscape, as well as the time and resources available (Dickson, 2017). Standardised procedures for evidence production and guidelines for engaging with stakeholders may include processes for setting up advisory groups and managing peer review. These processes are essential for providing additional input into the scope of the review, identifying relevant evidence, and preparing feedback on the structure and content of the findings. However, when producing evidence to inform local policy decisions, some methodological aspects and stakeholder engagement processes ‘may need to be prioritised over others, and standardised procedures may need to be adapted, to ensure timeliness and relevance of evidence’ (Dickson, 2017, p. 5).

Effective stakeholder engagement requires the interpersonal skills to work creatively within a research team and alongside policy teams, making constructive enquiries and responding to criticism (Edmondson, 1999). Working in an environment of safety and trust is particularly beneficial for producing timely and high quality evidence outputs. To ensure that the review remains closely aligned with policy needs, and to gain policy teams’ input in shaping the review, stakeholder consultation needs to take place regularly, as the policy agenda can change rapidly. Direct engagement and face-to-face communication between researchers and policy teams is a key feature of producing evidence for local policy teams. The style of discussion is exploratory in nature: through social interaction, participants learn more about the issues they might encounter and support a productive conversation between stakeholders (Lomas 2007). It builds a ‘community in practice’ and fosters the ‘collective learning’ needed to apply knowledge effectively in practice and decisions (Wieringa and Greenhalgh 2015).

6) Lessons from political science, behaviour change theory and social psychology

a. The politics of engagement
As mentioned above, a number of synthesis reviews in different sectors underline the importance of politics, and of the political and institutional context, in contributing to the likelihood of research uptake. These include, for example, reviews of health policy (Liverani et al, 2013), nutrition policy (Cullerton et al, 2016), transport policy (Sager, 2007), and low carbon technology policy (Auld et al 2014).

There are also some substantive explorations of this issue in relation to evidence (Parkhurst, 2017), results and evidence in international development (Eyben et al, 2015), and evaluation (Taylor and Balloch, 2005).

Amongst other things these studies note:

• Despite the recognition that politics is important, it is often underexplored in research and reviews of evidence-based policy processes, as well as in the design of research outreach processes, evaluation and stakeholder engagement.
• There are tried and tested approaches to exploring these issues from political science, organisational studies and related fields which could be better drawn on, and which might help explain why certain policy decisions were taken and how they might be influenced in the future.
• There is a tendency to see politics as a problem to be got round or bypassed, rather than as an inevitable and important part of policy and decision making, which usually also involves weighing up trade-offs, opportunity costs and moral or ethical questions related to equity, justice, sustainability and so on. This often leads to a search for ‘technical’ solutions around, for example, better communication and framing of findings, or more effective engagement with stakeholders, rather than deliberately factoring in politics.
• Alternatively, there is a tendency simply to blame a lack of ‘political will’ (or, as some have called it, political won’t!) for the lack of research uptake or implementation of policy, without any attempt to unpack why that is the case, what the interests are in maintaining the status quo, or what underpinning values, norms or ideas might be at play.
• Despite a growing understanding of the non-linear, chaotic nature of policy making, there is still a tendency for mental models of the policy and decision making process to assume certain sequences and rational processes based on evidence, which ignore the ongoing interactions of multiple stakeholders and interest groups, as well as values and norms.

If we accept that this is the case, then much of the work that has been done in the development sector on ‘thinking and working politically’ (https://twpcommunity.org/) and on ‘knowledge, power and politics’ (Jones et al 2013), and the tools and methods for doing this, might be relevant for this paper. An example of politically sensitive research is described in Box 3.

Box 3: Example of politically sensitive research in Peru

Here we explore in more depth the importance of considering contextual factors, especially the politics of evidence and policy processes, and how this can assist in addressing the barriers to engagement between the worlds of research and policy outlined earlier in the paper. To illustrate these considerations, we draw on an independent impact assessment of UNICEF and partners’ Multi-Country Study on the Drivers of Violence Affecting Children in Peru (Morton and Casey, 2017). The Multi-Country Study involves evidence synthesis, secondary data analysis and primary research in Italy, Peru, Viet Nam and Zimbabwe, with the aim of increasing understanding of what drives violence affecting children and what are effective strategies for prevention (Maternowska, Potts and Fry, 2016).

Globally, violence affecting children is often a sensitive issue. In Peru, violence was reported by policy makers as not being a priority, and nationally representative data on violence against children collected in 2013 had not been analysed because it was deemed too politically sensitive (Morton and Casey, 2017). UNICEF acted as a “knowledge broker” and supported the Ministry of Women and Vulnerable Populations (MIMP) to establish a committee to lead the research in Peru. This involved a systematic review of the literature on violence prevention in Peru, secondary data analysis conducted by the Young Lives research team in Peru and Oxford, and analysis of previously unanalysed data on violence affecting children in Peru by the Peruvian National Institute of Statistics and Informatics (INEI), with training and technical assistance from the University of Edinburgh. Aside from the research outputs (see Morton and Casey, 2017 for full details), key outcomes have included the findings being used to influence the passage of new legislation prohibiting corporal punishment of children in all settings in December 2015, accompanied by a national action plan, and a new collaboration between the MIMP and the Ministry of Finance, with increased resources for tackling and researching violence affecting children.

An impact assessment was conducted by independent researchers using the Research Contributions Framework (Morton, 2015a) and focused on how the research was used and the impact on national legislation and policies. The Framework (see appendix) captures both the research processes and outcomes and includes contextual factors that may support or impede research impact, using the ISM (Individual, Social, Material) Model (Darnton and Horne, 2013).

The impact assessment drew out several factors that contributed to impact. First, engagement and plans for impact were embedded from the outset and informed the focus of the research, going beyond the creation of new knowledge. Second, this informed the building of a partnership approach to the research, recognizing the different roles needed to bring change. Third, this included “knowledge-brokering roles”, with UNICEF playing a central role, and fourth, local research and analysis capacity building was built in as a core component of the approach, developing local ownership (Morton and Casey, 2017: 29). This combination of actors was important in keeping attention on the contextual and political dynamics, particularly following the change of government in 2016. The partnership decided to launch the national report after the Presidential election so that it would not be rejected as belonging to a previous administration and would have more traction with the new government. The impact assessment found that “the specific combination of research outputs, awareness-raising, capacity-building and knowledge brokering activities, built on this partnership approach, and maximised impact” (ibid., 5).

Whereas the four factors outlined above have similarities with findings from other studies (Oliver et al. 2014), a fifth critical factor was identified: the value brought by the research being part of a multi-country study. Given the sensitivities of violence affecting children, this was important politically in avoiding appearing to suggest that this was a phenomenon unique to Peru, instead enabling stakeholders to situate findings and discussions in the wider regional and international context (Morton and Casey, 2017: 30).

The political science analysis of engagement with evidence, illustrated by the study of children and violence in Peru, prompts some practical ideas which might be explored by a CEDIL research agenda:

• What might everyday political analysis look like when it comes to this agenda? See Hudson et al (2016) for an attempt to do this for staff in development programs.
• How can the knowledge-policy interface and its actors be mapped? See Jones et al (2013b) for an attempt to do this in development programs.
• Could partnership brokering early in the research process assist in developing mutual understanding of the political pressures and incentives at play for both researchers and policy makers? See http://www.partnershipbrokers.org/ as an example of emerging experience in the art of building partnerships which includes an assessment of the politics and interests of all parties.
• Would immersive or ‘rapid ethnography’ type processes help uncover some of the political and institutional dynamics which shape policy and decision-making? Could one imagine a variant of the Reality-Check Approach or other approaches designed to explore the social and psychological drivers of organisational behaviour (WDR 2015, Chapter 11)?
• Could one explore models for the governance of evidence systems as suggested by Justin Parkhurst (2017)? See Table 7.1 in his recent book, which describes a ‘legitimacy framework for evidence informed policy processes’ designed to balance scientific and social/political dimensions.

Lastly, there is the possibility of assessing tools and methods to support evidence use in terms of the degree to which they do, or could, help to unpack the ‘black box’ of political will in helpful ways.

b. Behavioural change and engagement
A scoping review of the broad social science literature identified 67 interventions of potential relevance to evidence-informed decision-making (Langer et al. 2016) (see Box 4).

Box 4: Insights from social science knowledge to support research use (Langer et al 2016)

Promote and market behavioural norms
• Social science knowledge on the creation of behavioural norms could be used to support the formation of social or professional evidence use norms. Effective social science interventions to build such norms include social marketing, social incentives, and identity cues.

Engage in advocacy and awareness raising for the concept of EIDM
• Social science research suggests that advocacy and awareness-raising campaigns are effective in supporting behavioural change. These strategies could be applied to communicate and popularise the concept of EIDM, increasing awareness of the benefits of using evidence during decision-making as well as the risks of not doing so.

Effectively frame and formulate communicated messages
• Social science literature on effective communication suggests many techniques and strategies that can be used to enhance the communication of research evidence. Framing of messages, tailoring communication including audience segmentation, and regular use of reminders are examples of effective communication techniques.

Design appealing and user-friendly access platforms and resources
• The social science literature features a rapidly growing body of knowledge on information design. Interventions aiming to improve decision-makers’ access to evidence could draw directly from this knowledge to enhance the design of evidence repositories and other resources, and to develop evidence-informed decision-making apps.

Build a professional identity with common practices and standards of conduct
• Social science insights on social influence, collaboration, relationship building, and group interaction could be used to improve the design and outcomes of interaction interventions. Interaction among professionals can build a professional identity with common practices and standards of conduct (through, for example, communities of practice, mentoring, and inter-professional education). Making the building of a professional identity relating to evidence use a key objective of future interaction interventions would, in turn, entail a greater emphasis on facilitating interactions between different decision-makers to fully harness the power of social influence and peer-to-peer interaction.

Foster adult learning
• Social science knowledge on adult learning theories and principles is of direct use and relevance to evidence-informed decision-making capacity-building. Integrating this body of knowledge more closely with evidence-informed decision-making is likely to enhance the long-term performance of interventions supporting decision-makers’ skills.

Build organisational capacities and support organisational change
• Social science research on organisational learning and cultures, management and leadership techniques, and other changes to organisational processes and structures (for example, facilitation) is of direct benefit to interventions aiming to increase the receptivity of decision-making processes and structures to evidence use. A closer integration of this body of knowledge could enhance the appetite and readiness of organisations to use evidence.

Use behavioural techniques, including nudges
• A developing body of social science knowledge focuses on the influence of behavioural factors (such as cognitive loads) on individual decision-making processes. It has also developed effective techniques to reduce cognitive biases and enhance decision-makers’ choice architectures. Supporting the use of evidence during decision-making could similarly be subject to these techniques, and the design of evidence use nudges could provide a valuable tool. Behavioural sciences stress the importance of salience in the design of interventions, which could be applied directly to support the practice of EIDM.

Exploit the potential of online and mobile technologies
• The application of online and mobile technologies is suggested in the social science literature to increase the reach, convenience, and appeal of interventions. A range of EIDM interventions (e.g. communication, capacity-building, decision aids) could benefit from the integration and regular use of online and mobile technologies.

Institutional frameworks and mechanisms
• Institutional frameworks and mechanisms can advocate and nurture structural changes at all levels of decision-making. In the context of EIDM, effective examples include accreditation processes, clearinghouses such as the National Institute for Health and Care Excellence (NICE), and government ministries. Overall, however, not enough rigorous evaluation in this area is taking place.

c. Social psychology of engagement
Many of the methods for stakeholder engagement involve collective deliberation and decision-making informed by highly technical issues. A key example is guidance development panels. Internationally recognised standards for appraising the outputs of panels (Brouwers et al 2010) emphasise stakeholder membership but have little to say about how stakeholders engage with the evidence or with each other. More can be learnt about these challenges by turning to the social psychology literature on small group decision making. A rapid review of reviews covering this literature identified key factors in the performance of decision-making committees (see Box 5).

Box 5: Empirical evidence about group decision-making (Oliver et al 2015).

Groups with six to twelve members tend to perform better than those either smaller or larger. Groups with diverse backgrounds and specialities take account of a range of opinions, with each member likely to advocate familiar options. Larger groups reflect a broader range of outside interests and enhance the credibility, widespread acceptance and implementation of decisions, but present difficulties for encouraging equal participation. However, if conflict is managed constructively, more varied membership leads to better performance and more reliable judgements. Corporate board leaders are important for establishing inclusive working procedures and an atmosphere of openness, dialogue and trust. Chairs facilitate groups to generate more ideas by encouraging members to express diverse opinions. Little is known about the impact of training for committee work or of the physical environment.

How the factors in Box 5 influence committee performance is explained by theoretical models (Hopthrow et al. 2011). Our understanding of various combined models is described briefly here.

Committee performance depends upon the individuals involved, their attributes and their relationships: specifically, members who are aware of their tasks, roles and responsibilities; understand the wider context and culture; bring analytical and political competence, interest and willingness; offer time and commitment; actively participate; and behave appropriately over external relationships, confidentiality and conflicts of interest.

An important resource is the knowledge that individual members bring, which is unevenly distributed across the group, or that is presented to them in committee papers or presentations. Demographic diversity is valued for bringing different perspectives and a wider variety of alternatives for consideration. Educational and functional diversity has given teams greater strategic clarity.

In addition to knowledge and skills, a committee needs time to explore that knowledge in order to make choices or solve problems. Time for information processing during decision-making allows more sharing of knowledge; the more knowledge is shared during discussion, the more it is subject to evaluation by group members. When time is limited, less knowledge is shared and decisions result more from negotiation between prior preferences than from evaluation of shared knowledge. When tasks involve judgements (rather than intellectual problem-solving), status within the group influences decisions.

With more time, greater facilitation skills to maximise sharing of knowledge, and greater mutual trust developed as committees mature and members get to know each other, more information about all options is revealed and available for evaluation. The result is more sharing of ideas and individual learning, better quality decisions, more commitment to decisions by group members and wider acceptability of decisions within the group’s wider networks.


7) Discussion and ways forward

This paper explores stakeholder engagement with research evidence. A theoretically driven search strategy identified an extensive literature on stakeholder engagement with both the research process (to make research more relevant to stakeholders) and research findings. Both literatures are mature, with many existing syntheses readily available; the analysis therefore focused primarily on existing evidence syntheses, supplemented by key papers identified by experts in stakeholder engagement.

Methods and models for stakeholder engagement were analysed within a framework that combines existing frameworks from evidence synthesis and adaptive decision-making. These resonate with what we know from theory and empirical evidence about generating relevant evidence, evidence-informed decision-making, capacity strengthening for research and its use, and behaviour change.

Effective stakeholder engagement with research findings includes: facilitating access to research evidence, building decision-makers’ skills to access and make sense of evidence, and fostering changes to decision-making structures and processes. Evidence syntheses of stakeholder engagement with the research process tend to advance our understanding rather than offer evidence of effective intervention. Further understanding comes from political science, the science of behaviour change and social psychology.

Overall, methods and tools are more developed for engaging stakeholders with research findings than with the research process. They are better developed for stakeholders making sense of evidence addressing relatively simple questions, such as choosing between practices or programmes, than for making sense of more complex questions such as those about implementation or systems. Nor are methods and tools available to support stakeholders making sense of the findings of more than one study (evidence synthesis or primary study) at a time, or to combine learning from evidence syntheses with local knowledge or political context. At the same time, syntheses are better developed and more readily available to offer answers to the relatively simple questions than the more complex questions, and are largely insensitive to contextual issues.

In summary, available evidence about stakeholders engaging with global and local evidence is in practice more about engaging with global evidence than local evidence; and there is a lack of tools to support decision-makers combining evidence from different sources.

Stakeholder engagement would benefit from greater clarity about: the meaning of stakeholder engagement; the meaning of adaptive decisions; how to choose between the stakeholder engagement methods and tools currently available; and the development of methods and tools for engaging stakeholders with research about the complex issues that feature in their own decision-making.

Acknowledgements

We are grateful to: Paul Garner for discussions about evidence and adaptive programming; and Kate Conroy, Charlotte Maugham and Catherine Renshaw for discussions about evidence in humanitarian settings.

References

Auld G, Mallett A, Burlica B, Nolan-Poupart F, Slater R (2014) Evaluating the effects of policy innovations: lessons from a systematic review of policies promoting low-carbon technology. Global Environmental Change 29: 444-458.


Blumenthal D, DiClemente R. Community-based health research: issues and methods. New York: Springer Publishing Company, 2004.

Boote J, Baird W, Sutton A. Public involvement in the systematic review process in health and social care: a narrative review of case examples. Health Policy. 2011;102:105-16.

Boote J, Baird W, Sutton A. Involving the public in systematic reviews: a narrative review of organizational approaches and eight case examples. Journal of comparative effectiveness research. 2012;1(5):409-20.

Bornbaum CC, Kornas K, Peirson L, Rosella LC (2015) Exploring the function and effectiveness of knowledge brokers as facilitators of knowledge translation in health-related settings: a systematic review and thematic analysis. Implementation Science 10: 162.

Boswell C, Smith K (2017) Rethinking policy ‘impact’: four models of research-policy relations. Palgrave Communications 3, 44. doi:10.1057/s41599-017-0042-z

Brouwers M, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Grimshaw J, Hanna S, Littlejohns P, Makarski J, Zitzelsberger L, for the AGREE Next Steps Consortium (2010) AGREE II: advancing guideline development, reporting and evaluation in health care. Can Med Assoc J 182: E839-E842. doi:10.1503/cmaj.090449

Concannon TW, Fuster M, Saunders T, Patel K, Wong JB, Leslie LK, Lau J. (2014). A Systematic Review of Stakeholder Engagement in Comparative Effectiveness and Patient-Centered Outcomes Research. Journal of General Internal Medicine, 29(12), 1692–1701. http://doi.org/10.1007/s11606-014-2878-x

Cornish H, Fransman J, Newman K et al (2017) Rethinking research partnerships: discussion guide and toolkit. Christian Aid, ESRC and The Open University. https://www.christianaid.org.uk/resources/about-us/rethinking-research-partnerships

Cullerton, K. Donnet, T. Lee, A., & Gallegos, D. (2016) ‘Using political science to progress public health nutrition: a systematic review’. Public Health Nutrition, 19(11), pp. 2070-2078

Darnton, A. and Horne, J., 2013. Influencing behaviours: moving beyond the individual: a user guide to the ISM Tool. Scottish Government.

Davies HTO, Powell AE, and Nutley SM (2015) Mobilising knowledge to improve UK health care: learning from other countries and other sectors. NIHR Library.

De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, Patel V (2014) Theory of Change: a theory-driven approach to enhance the Medical Research Council's framework for complex interventions. Trials 15:267 https://doi.org/10.1186/1745-6215-15-267

Dickson K (2017) Systematic reviews to inform policy: institutional mechanisms and social interactions to support their production, Unpublished, UCL Dissertation.

Edmondson A (1999) Psychological safety and learning behavior in work teams. Administrative science quarterly, 44, 350-383.

Eyben, R. Guijt, I. Roche, C. and Shutt, C. (2015) ‘The Politics of Evidence and Results in International Development: Playing the game to change the rules?’. Practical Action, Rugby, UK.


Flodgren G, Parmelli E, Doumit G, Gattellari M, O'Brien MA, Grimshaw J, Eccles MP. (2011) Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 8. Art. No.: CD000125. DOI: 10.1002/14651858.CD000125.pub4

Gibbons M, Limoges C, Nowotny H, Schwartzman S, Scott P, Trow M (1994) The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage. ISBN 0-8039-7794-8.

Green D (2015) Doing Development Differently: a great discussion on Adaptive Management (no, really). Blog: https://oxfamblogs.org/fp2p/doing-development-differently-a-great-discussion-on-adaptive-management-no-really/

Greenhalgh T, Jackson C, Shaw S, Janamian T (2016) Achieving Research Impact Through Co-creation in Community-Based Health Services: Literature Review and Case Study. Milbank Quarterly 94(2) 392-429.

Hasselman L. (2017) Adaptive management; adaptive co-management; adaptive governance: what’s the difference?, Australasian Journal of Environmental Management, 24:1, 31-46.

Hepler A, Dawid AP, Leucari V (2007) Object-oriented graphical representations of complex patterns of evidence. Law, Probability and Risk 6, 275−293

Horsley T, Hyde C, Santesso N, Parkes J, Milne R, Stewart R. Teaching critical appraisal skills in healthcare settings. Cochrane Database of Systematic Reviews 2011, Issue 11. Art. No.: CD001270. DOI: 10.1002/14651858.CD001270.pub2.

Hudson, D. Marquette, H, Waldock, S (2016) Everyday Political Analysis, Developmental Leadership Program, University of Birmingham http://www.dlprog.org/research/everyday-political-analysis.php

Humanitarian Leadership Academy (2017) Local knowledge in humanitarian response.

Institute of Development Studies (2015) Learning to Adapt: Exploring Knowledge, Information and Data for Adaptive Programmes and Policies. Workshop Summary Report.

Jones H, Jones N, Shaxson L, Walker D (2013a) Knowledge, policy and power in international development: a practical framework for improving policy. ODI. https://www.odi.org/publications/7214-knowledge-policy-power-international-development-practical-framework-improving-policy

Jones H, Jones N, Shaxson L, Walker D (2013b) Providing practical guidance for in-country programming: the value of analysing knowledge, policy and power. ODI. https://www.odi.org/publications/7208-providing-practical-guidance-country-programming-value-analysing-knowledge-policy-and-power

Jordaan S, Stewart R, Erasmus Y, Maluwa L, Mitchell J, Langer L, Wildeman R, Tannous N, Koch J (in press) Reflections on mentoring experiences for evidence-informed decision-making in South Africa and Malawi. Development in Practice. ISSN 0961-4524 (article in press).

Kislov, R, Wilson, P, Boaden R, (2017) The ‘dark side’ of knowledge brokering. Journal of Health Services Research & Policy 22(2) 107–112.


Koon A, Rao K, Tran N, Ghaffar A (2013) Embedding health policy and systems research into decision-making processes in low- and middle-income countries. Health Research Policy and Systems 11(30).

Langer L, Tripney J, Gough D (2016). The Science of Using Science: Researching the Use of Research Evidence in Decision-Making. London: EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London.

Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J (2003) How can research organizations more effectively transfer research knowledge to decision makers? The Milbank Quarterly 81: 221-248.

Lomas J (2007) The in-between world of knowledge brokering, BMJ, 334:129.

Liverani M, Hawkins B, Parkhurst JO (2013) ‘Political and Institutional Influences on the Use of Evidence in Public Health Policy. A Systematic Review’. PLoS ONE 8(10): e77404. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0077404

Lynch U, McGrellis S, Dutschke M, Anderson M, Arnsberger P, Macdonald G (2013) What is the evidence that the establishment or use of community accountability mechanisms and processes improves inclusive service delivery by governments, donors and NGOs to communities? London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London. ISBN: 978-1-907345-60-9

McGregor S, Henderson KJ, Kaldor JM (2014) How Are Health Research Priorities Set in Low and Middle Income Countries? A Systematic Review of Published Reports. PLoS ONE 9(10): e108787. https://doi.org/10.1371/journal.pone.0108787

Mitton C, Adair C, Mckenzie E, Patten S, Wayne B (2007) Knowledge Transfer and Exchange: Review and Synthesis of the Literature. Milbank Quarterly 85: 729-768.

Maternowska MC, Potts A, Fry D (2016) The multi-country study on the drivers of violence affecting children: a cross-country snapshot of findings. UNICEF Office of Research - Innocenti, Florence.

Morton, S., 2015a. Progressing research impact assessment: A ‘contributions’ approach. Research Evaluation, 24(4), pp. 405-19.

Morton, S., 2015b. Creating research impact: the roles of research users in interactive research mobilisation. Evidence & Policy: A Journal of Research, Debate and Practice, 11(1), pp.35-55.

Morton, S and Casey, T 2017 Changing National Policy on Violence affecting Children: An impact assessment of UNICEF and partners’ Multi-Country Study on the Drivers of Violence affecting Children in Peru. University of Edinburgh.

Oliver K, Innvar S, Lorenc T, Woodman J, Thomas, J (2014). A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC health services research, 14, 2.

Oliver S, Bangpan M, Stansfield, Stewart R (2015) Capacity for conducting systematic reviews in Low and Middle Income Countries: a rapid appraisal. Health Research Policy and Systems 13:23 doi:10.1186/s12961-015-0012-0

Oliver S, Hollingworth K, Briner R (2015) Effectiveness and efficiency of committee work: a rapid systematic review for NICE by its Research Support Unit. Report of the Research Support Unit for the National Institute for Health and Care Excellence. London: EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London. ISBN: 978-1-907345-82-1

Oliver S, Dickson K (2016) Policy-relevant systematic reviews to strengthen health systems: models and mechanisms to support their production. Evidence and Policy 12(2): 235-259.

Oliver S, Liabo K, Stewart R and Rees R (2014) Public involvement in research: making sense of the diversity. J Health Serv Res Policy 20(1):45-51.

Olivier C, Hunt MR, Ridde V (2016) NGO–researcher partnerships in global health research: benefits, challenges, and approaches that promote success. Dev Pract 26: 444-455.

Parkhurst J (2017) The Politics of Evidence: From Evidence-Based Policy to the Good Governance of Evidence. Routledge, Abingdon, Oxford.

Plummer, R., B. Crona, D. R. Armitage, P. Olsson, M. Tengö, and O. Yudina. (2012) Adaptive comanagement: a systematic review and analysis. Ecology and Society 17(3): 11.

Rees R (2017) Inclusive approaches to the synthesis of qualitative research: harnessing perspectives and participation to inform equitable public health policy. Unpublished PhD thesis, UCL Institute of Education, London.

Sager, F. (2007) 'Making transport policy work: polity, policy, politics and systematic review', Policy & Politics, vol. 35, no. 2, pp. 269-288

Salloum RG, Shenkman EA, Louviere JJ, Chambers DA (2017) Application of discrete choice experiments to enhance stakeholder engagement as a strategy for advancing implementation: a systematic review. Implement Sci 12(1): 140. doi:10.1186/s13012-017-0675-8

Shen S, Doyle-Thomas KAR, Beesley L, Karmali A, Williams L, Tanel N, McPherson AC. (2017) How and why should we engage parents as co-researchers in health research? A scoping review of current practices. Health Expect. 20(4):543-554. doi: 10.1111/hex.12490. Epub 2016 Aug 12.

Simwinga M, Bond V, Makola N, Hoddinott G, Belemu S, White R, Shanaube K, Seeley J, Moore A; HPTN 071 (PopART) study team (2016) Implementing community engagement for combination prevention: lessons learnt from the first year of the HPTN 071 (PopART) community-randomized study. Curr HIV/AIDS Rep 4: 194-201. doi:10.1007/s11904-016-0322-z

Stewart R, Langer L, Wildeman R, Maluwa L, Erasmus Y, et al. (2017) Building capacity for evidence-informed decision-making: an example from South Africa. Evidence & Policy. https://doi.org/10.1332/174426417X14890741484716

Sullivan T (2011) Embracing complexity. Harvard Business Review, September.

Taylor FW (1911) The principles of scientific management. New York, London, Harper and Brothers.

Taylor D, Balloch S. (Eds) (2005) The Politics of Evaluation: Participation and Policy Implementation. Bristol: Policy Press.

Ward V, House A, Hamer S (2009) Knowledge brokering: the missing link in the evidence to action chain? Evidence & policy: a journal of research, debate and practice, 5, 267-279


Waterman H, Tillen D, Dickson R, de Koning K (2001) Action research: a systematic review and guidance for assessment. Health Technol Assess 5(23).

Westhorp G, Walker DW, Rogers P, Overbeeke N, Ball D, Brice G (2014) Enhancing community accountability, empowerment and education outcomes in low and middle-income countries: a realist review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London. ISBN: 978-1-907345-72-2

Wilson PM, Petticrew M, Calnan MW, Nazareth I. (2010) Disseminating research findings: what should researchers do? A systematic scoping review of conceptual frameworks. Implementation Science 5:91.

Wieringa S and Greenhalgh T (2015) 10 years of mindlines: a systematic review and commentary. Implementation Science.10:45

World Bank (2015) Adaptive design, adaptive interventions. Chapter 11, World Development Report 2015. World Bank, Washington DC. http://www.worldbank.org/en/publication/wdr2015


Appendix 1: Research Contributions Frameworks outcomes chain (Morton, 2015a)



Appendix 2: Guidance for stakeholder engagement

To ADD
