White Paper #3
Evaluating the Evaluator: Unpacking Charity Navigator’s Rating System

Summary

Charity Navigator (CN) operates a system for evaluating U.S. public charities, with the purpose of helping donors make informed decisions. In this paper we summarize the organization’s rating process and dissect its evaluation methodology. We find that while CN offers a needed standard for nonprofit accountability and has the potential to fulfill its mission in the future, its current system has substantial deficiencies that can misinform donors and harm nonprofits. Chief among these is the composition of the Financial Health Rating, which is not a valid gauge of nonprofits’ financial health. We offer recommendations to donors on alternative research methods, to nonprofits and management consultants on presenting informative data to the public, and to CN on constructive improvements to its system. CN issued revisions to its methodology in June 2016, subsequent to the first edition of this paper. We address these changes in an addendum on page 14. The revisions do not substantively alter this paper’s findings or recommendations.

pimgconsulting.com | (206) 282-7464

Introduction

How can we determine if a public charity is effective and responsible in carrying out its work? This is an important question. There are over one million public charities in the U.S.1 These organizations are authorized by the federal government, serve a wide range of functions in our society and account for the bulk of the nonprofit sector, which constitutes over 5% of the U.S. economy.2 Taxpayers subsidize charitable organizations in two ways: public charities are generally exempt from federal taxation, and many receive charitable contributions, which are often tax-deductible for donors. The public has a clear interest in ensuring that nonprofits fulfill their purposes.
Charity Navigator (CN) is a nonprofit organization that describes itself as “America’s leading independent charity evaluator.”3 Founded in 2001, the organization states its mission as: “Charity Navigator works to guide intelligent giving. By guiding intelligent giving, we aim to advance a more efficient and responsive philanthropic marketplace, in which givers and the charities they support work in tandem to overcome our nation’s and the world’s most persistent challenges.”4 CN uses a set of calculations to rate nonprofits on several criteria, culminating with familiar “star” ratings on aspects it has identified as keys to donor understanding. To the extent that individuals and institutions use CN’s ratings to influence their charitable donation decisions, the organization’s methods can influence resource distribution. As donors become increasingly savvy about available research tools, more nonprofit leaders are concerned about their CN ratings and the perceptions these ratings may fuel among potential supporters. It is therefore worthwhile to examine CN’s methodology and inquire about its validity in addressing its objectives.

In this paper, we evaluate the evaluator. We examine CN’s rating system and assess its strengths, weaknesses and overall relevance to the cause of promoting an effective nonprofit sector. On pages 2-3, we establish context for CN’s work with brief groundwork on performance measurement. On pages 4-6 we summarize CN’s evaluation system. We then present detailed analysis of the system’s strengths and weaknesses on pages 7-11. Finally, on pages 12-13 we offer recommendations on actions CN and its users can take to improve data and communication in the nonprofit sector, looking forward.

Public Interest Management Group © 2016
1 National Center for Charitable Statistics, http://nccs.urban.org/statistics/quickfacts.cfm
2 Independent Sector, http://www.independentsector.org/economic_role, 2014
3 We use “CN” for brevity; Charity Navigator does not generally use this acronym in its communications.
4 Charity Navigator, http://www.charitynavigator.org/index.cfm?bay=content.view&cpid=17

Performance Metrics from a Donor’s Perspective

In recent decades, the use of “measurable outcomes” has become a pronounced trend in the nonprofit sector, driven largely by funders interested in seeing the results of their support. To understand where to best direct funds, a donor may logically want to know both whether a nonprofit is producing good outcomes and how well it is run. Once an individual or institutional donor has established interest in a nonprofit’s work, there are three basic questions they should answer affirmatively before writing a check. Building on our opening query on page one:

1. Is the organization effective in conducting its work?
2. Is it managed with integrity and competence?
3. Should I direct money to this particular entity, given other alternatives?

Outcome measurement in the nonprofit sector has traditionally focused on program evaluation: assessment of organizations’ mission-related work. Program evaluation methods have varied widely, without clear standards in many fields of nonprofit work. The difficulty of measuring mission impact is one complicating factor for many nonprofits. The availability of funding for evaluation of short- and long-term outcomes can be another obstacle. Thus, answering Question 1 is not always straightforward. Management practices of nonprofits can also vary widely from one organization to another. Donors support specific organizations, not causes or social needs in the abstract, so understanding management performance is crucial. This endeavor brings its own challenges.
In contrast to businesses, which are typically judged on their profitability relative to competitors, nonprofits exist to deliver societal value, and face a higher level of public accountability. While financial results are important, profitability is inadequate as a catch-all metric for nonprofit management strength, and answering Question 2 is also a nuanced process. Addressing the first two questions requires standardized information on organizational performance and clear methodology to produce this data. Question 3 is comparative, and may involve consideration of data on multiple nonprofits and/or other possible uses of funds. Statisticians and researchers have tackled many similarly challenging tasks.

Consider, for example, the medical problem of determining, “Is this person healthy?” To investigate, health professionals will take various measurements. Some metrics, such as age, height and weight, are descriptive, and give context to an evaluator. Many metrics, such as cholesterol or insulin levels in a patient’s blood, are tests for specific problems. Others, such as blood pressure, heart rate and temperature, are general indicators of either the presence or absence of any of a wide range of potential concerns. In statistics, the latter set of metrics are proxy variables – they’re not especially interesting in and of themselves, but are assumed to correlate highly with other characteristics that are important. For example, if the victim of an accident has normal and stable vital signs and shows no clear indications of serious injury, she may be sent home by medics rather than hospitalized. Proxy variables are employed widely in evaluation—we use data that are readily available (e.g. vital signs), under the presumption that they will, in essence, tell us what we really want to know (i.e. the presence or absence of an injury).
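The proxy logic above can be made concrete with a small simulation. The sketch below is illustrative only: the injury base rate and the proxy’s error rates are hypothetical numbers chosen for the example, not figures from this paper or any real dataset. It simulates medics sending home patients whose vital signs (the proxy) look normal, then counts how many injured patients the proxy misses.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

N = 10_000                      # simulated accident patients
P_INJURY = 0.05                 # hypothetical base rate of serious injury
P_ABNORMAL_IF_INJURED = 0.90    # assumed chance the proxy flags a real injury
P_ABNORMAL_IF_HEALTHY = 0.10    # assumed false-alarm rate of the proxy

sent_home = 0   # patients whose vitals looked normal
missed = 0      # injured patients sent home anyway (proxy failures)

for _ in range(N):
    injured = random.random() < P_INJURY
    p_abnormal = P_ABNORMAL_IF_INJURED if injured else P_ABNORMAL_IF_HEALTHY
    abnormal_vitals = random.random() < p_abnormal
    if not abnormal_vitals:     # proxy says "healthy"
        sent_home += 1
        if injured:
            missed += 1

miss_rate = missed / sent_home
print(f"sent home: {sent_home}, injured among them: {missed}")
print(f"miss rate among those sent home: {miss_rate:.3%}")
```

Even a reasonably accurate proxy sends a small fraction of injured patients home, which is exactly the residual risk the next section describes: the proxy is useful precisely because it is cheap and available, not because it is infallible.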
The potentially messy problem of assessing the health of a person or a nonprofit organization requires the use of proxies. There is always risk in using proxies; the patient in our example above may have a significant problem that was not detected by measurements at the scene of the accident. And so performance measurement becomes an inexact science. But it is science nonetheless—that is, it must be if we want a systematic way to answer a donor’s basic questions. Performance measurement has several potential pitfalls:

Miscasting the problem
If an evaluator asks the wrong fundamental questions or designs the task in ways that don’t truly address the right questions, evaluation will fail, regardless of how well it is executed.

Choosing the wrong metrics
Even with sound design, the mechanics of evaluation can go awry if the chosen metrics are not accurate proxies for the information we really want to know.

Making the task overly simple or complex
An overly simple measurement system can miss important nuances. A complex system can become burdensome to operate, and may produce unintended consequences. Sound evaluation design resides in between—it should be agile to operate and straightforward to interpret, yet nuanced enough to provide depth of understanding.

Using inaccurate, incomplete or inconsistent data
Even well-conceived evaluation systems will be derailed by unreliable data. The data used to analyze performance should be unbiased, collected through a clear and consistent process, and directly comparable within the studied population.

In the context of multiple challenges and potential pitfalls, Charity Navigator has undertaken the task