Re-Examining the Implications of Systems Thinking for Evaluation
RE-EXAMINING THE IMPLICATIONS OF SYSTEMS THINKING FOR EVALUATION

BY

EMILY F. GATES

DISSERTATION

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Educational Psychology in the Graduate College of the University of Illinois at Urbana-Champaign, 2016

Urbana, Illinois

Doctoral Committee:
Professor Jennifer C. Greene, Chair
Professor Thomas A. Schwandt, Director of Research
Professor Lizanne DeStefano
Professor Martin Reynolds, Open University

ABSTRACT

Over the last twenty years, many prominent evaluators have been borrowing and using ideas, theories, and methods from the systems and complexity fields with little research on the implications of this trend for the evaluation field. This thesis examines this borrowing to identify over-arching implications for evaluation theory and practice. The first paper reviews interdisciplinary literature on systems thinking and complexity science with regard to evaluating social policies and programs and identifies major implications for how evaluators theorize evaluation practice. The second paper reports on an analysis of eight cases of evaluation practice that use systems and complexity ideas and techniques and presents findings regarding how evaluators conceive of and practice evaluation. The third paper advances an argument for how evaluators can use critical systems heuristics to surface, reflect on, and make explicit the values that influence and should influence an evaluation. Collectively, these papers support the potential of borrowing from the systems and complexity fields to expand on and re-define evaluation theory and practice.

ACKNOWLEDGMENTS

This dissertation would not have been possible without the mentorship of Dr. Thomas Schwandt and Dr. Jennifer Greene. I am thankful to Tom for overseeing this dissertation, sharing his wisdom, providing generous editorial comments, and continuously challenging and expanding my thinking. I am thankful to Jennifer for her early and ongoing encouragement of this research, for being a supportive advisor throughout the years, and for continuously inspiring me to be a more thoughtful and compassionate scholar and evaluator. I would also like to thank Dr. Lizanne DeStefano for fostering my development as an evaluator, Dr. Martin Reynolds for paving a way in critical systems and evaluation, and Bob Williams for sharing his brilliant enthusiasm for evaluation and systems thinking – and to all five for investing significant time and energy in this project as committee members. I want to express my gratitude to the eight evaluators I interviewed for their time, insights, and leadership in the field. Additionally, I want to acknowledge the University of Illinois Graduate College for providing funding to complete this dissertation.

I could not have accomplished this project without the support of my colleagues, friends, and family. My thanks to my QUERIES, I-STEM, WIDER, and ISSS Ph.D. colleagues for providing motivation, feedback, and a sense of community, to Carolina (for friendship from day one), Ayesha (for nudging me to come to UIUC and stick with it), Gabi (for sharing the ups and downs), Sarai (for reminding me to smile), Megan-Brette (for reminding me of the reasons we do this), Lorna (for providing a space to unwind), Priya (for your refreshing visits), Molly (for your intellectual companionship), Lisa (for giving me candles to light the way), and to Mbikayi (for listening to my ramblings and providing constant encouragement).
Last but not least, I want to thank my parents, Gerry and Cindy, for their unconditional emotional and financial support and Tyler, Austin, and especially Carlye for being there in more ways than I can name.

TABLE OF CONTENTS

INTRODUCTION
PAPER 1: IMPLICATIONS OF SYSTEMS THINKING AND COMPLEXITY SCIENCE FOR EVALUATION
PAPER 2: BORROWING KNOWLEDGE FROM THE SYSTEMS AND COMPLEXITY FIELDS: IMPLICATIONS FOR EVALUATION PRACTICE
PAPER 3: VALUING IN EVALUATION WITH CRITICAL SYSTEMS HEURISTICS
CONCLUSION
REFERENCES
APPENDICES
Appendix A: Interview Guide
Appendix B: Case Materials

INTRODUCTION

Throughout the development of the evaluation field, evaluators have borrowed knowledge from other fields and disciplines. Consider, for example, the intersection of anthropology and evaluation to better understand the contexts of evaluands (Fetterman, 1984; Butler & Copeland-Carson, 2005); contributions of organizational theories of learning to strengthen evaluation use (Russ-Eft & Preskill, 2009); political philosophy for envisioning the roles of evaluation in democracy (House & Howe, 1999); and econometric methods for assessing resource use and value (Yates, 2012). In the last fifteen years, many prominent evaluators have been borrowing ways of thinking (i.e., ideas, concepts, theories) and techniques (i.e., methods, tools, approaches) from the systems and complexity fields--an interdisciplinary area of inquiry including operational research, systems science, complexity science, and cybernetics that broadly focuses on the study of systems and the design of interventions to address problematic situations (Eoyang, 1999; Sanderson, 2000; Barnes et al., 2003; Williams & Imam, 2007; Callaghan, 2008; Rogers, 2008; Morell, 2010; Forss, Marra, & Schwartz, 2011; Patton, 2011; Westhorp, 2012; Byrne, 2013). One significant motivation for this borrowing arises from criticisms of longstanding tendencies to define the objects of evaluation by pre-determined objectives and rather linear and mechanistic processes, and to study effects primarily as intended changes in individuals.
Many evaluators, as well as evaluation commissioners, argue to the contrary that initiatives addressing persistent, interconnected issues (often called wicked problems) operate within continually changing circumstances influenced by many factors at multiple scales and amidst differing perspectives, values, and interests. For the evaluation field to remain relevant and useful both to stakeholders and to the wider public affected by such initiatives, a turn to the systems and complexity fields is needed (Stern, Saunders, & Stame, 2015).

However, at present, the significance and implications of this turn are not yet clear (Mowles, 2014; Stern et al., 2015). The literature includes examinations of various theories and ideas for evaluation (Sanderson, 2000; Barnes et al., 2003; Davies, 2004; Rogers, 2008; Callaghan, 2008; Hummelbrunner, 2011; Byrne, 2013; Marra, 2015); new ways to conduct evaluations using systems and complexity concepts (Morell, 2010; Patton, 2011; Forss, Marra, & Schwartz, 2011; Levin-Rozalis, 2014); cases of evaluations that used systems and complexity thinking and techniques (Williams & Imam, 2007; Patton, McKegg, & Wehipeihana, 2015; Fredericks, Deegan, & Carman, 2008; Dyehouse, Bennett, Harbor, & Childress, 2009; Morell, Hilscher, Magura, & Ford, 2010; Durland & Fredericks, 2005; Reynolds & Williams, 2011); and literature reviews (Mowles, 2014; Walton, 2014). Additionally, there has been considerable research in neighboring fields, including public health, social services, community psychology, and international aid and development, that addresses the implications of systems and complexity thinking and techniques for designing, implementing, and evaluating policies and programs (e.g., Milstein, 2008; Ramalingam, 2013; Foster-Fishman, Yang, & Nowell, 2007). Much of this research has focused on ‘hard’ (i.e., quantitative, realist) traditions and largely overlooked ‘soft’ (i.e., qualitative, dialogic) and ‘critical’ (i.e., mixed methods, pragmatic) traditions within the systems and complexity fields. There is a need for broad reviews of this research to identify both current implications for evaluation theory and practice and future directions, and to explore potential contributions of these lesser-examined traditions.

Overview of the Three Papers

The three papers in this thesis conceptually and empirically examine the implications of borrowing from the systems and complexity fields for evaluation theory and practice. The first paper reviews recent literature on systems thinking and complexity science in evaluation and related