Evaluation Units as Knowledge Brokers: Testing and Calibrating an Innovative Framework
Article, Evaluation, 2016, Vol. 22(2), 168–189
© The Author(s) 2016. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1356389016638752
evi.sagepub.com

Karol Olejniczak, Centre for European Regional and Local Studies (EUROREG), University of Warsaw, Poland
Estelle Raimondo, George Washington University, USA
Tomasz Kupiec, EGO s.c. – Evaluation for Government Organizations, Poland

Abstract

Evaluation units, located within public institutions, are important actors responsible for the production and dissemination of evaluative knowledge in complex programming and institutional settings. The current evaluation literature does not adequately explain their role in fostering better evaluation use. The article offers an empirically tested framework for the analysis of the role of evaluation units as knowledge brokers. It is based on a systematic, interdisciplinary literature review and empirical research on evaluation units in Poland within the context of the European Union Cohesion Policy, with complementary evidence from the US federal government and international organizations. In the proposed framework, evaluation units are to perform six types of brokering activities: identifying knowledge users' needs, acquiring credible knowledge, feeding it to users, building networks between producers and users, accumulating knowledge over time and promoting an evidence-based culture. This framework transforms evaluation units from mere buyers of expertise and producers of isolated reports into animators of reflexive social learning that steer streams of knowledge to decision makers.
Keywords: evaluation use, evidence-based policies, knowledge broker, know–do gap, organizational learning

Corresponding author: Karol Olejniczak, Centre for European Regional and Local Studies (EUROREG), University of Warsaw, Krakowskie Przedmiescie 30, 00-927 Warsaw, Poland. Email: [email protected]

Downloaded from evi.sagepub.com by guest on April 20, 2016

Introduction

One of the key functions of evaluation is to build a stronger knowledge base for policy making. In principle, evaluation results should support a decision-making process with useful insight and understanding of factors underlying public problems, possible change mechanisms and the fit between those factors and the policy context (Chelimsky, 2006; Mark et al., 2000; Stern, 2002). In practice, the conceptual and instrumental use of evaluation has been quite limited, and evaluation is often used symbolically (e.g. McNulty, 2012).

In complex policy and organizational settings, evaluation tasks are assigned to specifically designated teams of public professionals called 'evaluation units'. For example, in the case of the Cohesion Policy – one of the most intensively evaluated European Union policies (Bachtler and Wren, 2006) – there are over 200 evaluation units across 28 EU member states.1 The units commission studies and convey their findings to institutions responsible for planning and implementing Cohesion Policy interventions. They tend to be small (one to seven people) and share their responsibilities between evaluation and monitoring assignments as well as, less commonly, strategic planning.2 Both research and practice indicate that despite the extensive production of evaluation reports, the utilization of knowledge by decision makers remains limited, and the commitment of top management to evaluation activities remains low (De Laat and Williams, 2014; EGO, 2010; Kupiec, 2014; Olejniczak, 2013).
Decision makers often bemoan that evaluation studies do not meet their information needs, which raises a practical question for evaluation units: how can they combine the inflow of single reports into streams of knowledge that, in turn, address the knowledge needs of decision makers and managers? This practical question is connected with the more general issue of finding effective mechanisms to ensure that evaluations are used and influential in complex institutional environments (Højlund, 2014a; Raimondo, 2015; Rist and Stame, 2011).

The current evaluation literature does not explain well the reality of evaluation use in complex programme and institutional settings, such as the Cohesion Policy. First, the dominant approach is to analyse use from the perspective of single reports, underestimating that change in decision making is cumulative, triggered by evidence streams trickling down into programmes or organizations (e.g. Ferry and Olejniczak, 2008; Højlund, 2014a, 2014b; Leeuw and Furubo, 2008). Second, the evaluation literature explores relations between evaluators (producers) and direct users. It overlooks the fact that government units that contract out evaluations are usually not the final users of the reports but rather intermediaries between knowledge producers (evaluators) and actors involved in policy decisions (Johnson et al., 2009). Finally, there is little literature on the role of evaluation units in general, and a dearth of empirical evidence on their work as intermediaries between producers and users in particular (Läubli Loud and Mayne, 2014).

A new conceptual framework is therefore needed to fully explain the role of evaluation units in steering knowledge flows from producers to users. This article proposes such a framework for analysing and reshaping evaluation units' role by taking a new perspective. First, the use of evaluation is analysed from a systems perspective – as flows of information and knowledge streams (Rist and Stame, 2011).
Evaluation units are in turn portrayed as knowledge brokers (KB), playing the role of intermediaries between producers and users, in charge of accumulating and steering knowledge flows. Moreover, it is assumed that the current challenge faced by evaluation units in enhancing evaluation use is a particular manifestation of a more general problem: the know–do gap, which is quite common across different disciplines and policy fields (e.g. the gap between university research and its business applications, between laboratory discoveries and general health-care practice, or between scientific evidence on environmental change and actual policy decisions). The article thus seeks to bring insight from organizational theory, as well as a wide literature on knowledge transfer and management, into the evaluation literature, with a view to better understanding how different disciplines deal with know–do gaps through knowledge brokering. We argue that a key missing piece in evidence-based decision-making theory is the idea that bringing credible and rigorous evidence to decision makers is not sufficient; the evidence needs to be 'brokered'.

This research sets out to define what brokering knowledge means for evaluation units and what strategic changes they need in order to be successful at promoting knowledge use. The article contributes to evaluation practice in complex institutional settings. Experts who are architects of evaluation systems will gain understanding of inter-institutional design flaws that hamper systemic knowledge use. Personnel of evaluation units and evaluators can draw lessons on effective knowledge brokerage strategies.

The article consists of four sections. In the first section, research questions and methods are presented. Next, we discuss the main findings from the systematic literature review on the definition, activities and roles of KB.
In the third section, the article presents a framework of evaluation units as KB. Each part of the framework is explicated and illustrated with findings emanating from the empirical research. We conclude with a discussion of the main strengths and limitations of the framework, perspectives for its practical application and directions for future research.

The research approach

The article addresses two overarching research questions: (1) What can we learn about effective 'knowledge brokering' from the literature across different disciplines and policy fields? (2) How can the concept of knowledge brokering be adapted to evaluation units and help them tackle the challenge of evaluation under-utilization?

In this research we followed the spirit of realist evaluation (Pawson, 2006, 2013; Pawson and Tilley, 1997), which consists of iteratively building, testing and calibrating a theory of change of an intervention – here, knowledge brokering. As is suitable within an under-theorized area, we aimed for breadth more than depth of understanding, and we engaged in both theory-building and theory-testing. The research moved through four iterative components of theory development and testing using a mixed-methods approach:

1. Development of an initial conceptual framework, built on the basis of a literature review – later revised into a full-fledged theory of change;
2. Initial empirical test of the conceptual framework in the context of Polish evaluation units – development of an initial typology of brokering activities of evaluation units;
3. Development of a full-fledged theory of change, including: operationalization of the key term knowledge brokers, refinement of the typology of activities, identification of causal
