
November 2017

Does Money Matter? Hanushek, Jackson, and the Questions We Ask

Alanna Bjorklund-Young, Senior Research and Policy Analyst

Does an increase in education funding yield better educational outcomes? This seemingly straightforward question has produced considerable controversy. Prominent researchers, including the sociologist James Coleman and the economist Eric Hanushek, find no significant relationship between funding and educational outcomes, usually defined as student achievement on standardized tests.1 More recent research, however, shows that increased funding can improve educational outcomes when that funding is spent on certain kinds of programs or improvements.

Specifically, “The Effects of School Spending on Educational and Economic Outcomes: Evidence from School Finance Reforms” by C. Kirabo Jackson, Rucker Johnson, and Claudia Persico, exemplifies this second line of work.2 How does this new study fit within the existing literature on the returns to school spending? What were its precise findings? What policy implications follow?

Previous Findings

The effect of school funding on student outcomes has been contested since the publication of Equality of Educational Opportunity (1966), more commonly known as The Coleman Report.3 The Coleman Report concluded that families are the most important determinant of student achievement and that variation in school inputs accounts for very little of the variation in student achievement. The conclusion that increased spending in general seems to have little impact on academic achievement has led some scholars to argue that the structure, or incentives, of the education system needs to change in order for money to really matter for student outcomes.4

Eric Hanushek is a prominent economist whose publications over the past several decades have bolstered the view that spending plays a negligible role in determining academic outcomes (such as student test scores). Hanushek argues that spending on inputs (such as lower student-teacher ratios, the percentage of teachers with a master’s degree, and per-pupil expenditure) has risen for the past five decades without commensurate improvements in student achievement, as shown by stubbornly stable average student-achievement scores on the National Assessment of Educational Progress (NAEP) over this same period (see, for example: Hanushek, 2006; 2003).5 Hanushek has also undertaken meta-analyses that examine the relationship between school inputs and student achievement, while taking family inputs

1 More specifically, The Coleman Report found that school-based factors, including resources, explained very little variation in student outcomes once students’ family characteristics were accounted for.
2 Jackson, Johnson, and Persico, “The Effects of School Spending on Educational and Economic Outcomes.”
3 Coleman and others, “Equality of Educational Opportunity.”
4 See, for example: Hanushek and Lindseth, Schoolhouses, Courthouses, and Statehouses; Roza, Educational Economics.
5 Hanushek, “The Failure of Input-Based Schooling Policies”; Hanushek, “School Resources.”


into account.6 In one such review of roughly 400 “studies,”7 he finds that “there is no strong or consistent relationship between variations in school resources and student performance.”8 Hanushek concludes that “input policies have been vigorously pursued over a long period of time, but there is no evidence that the added resources have improved student performance, at least for the most recent three decades when it has been possible to compare quantitative outcomes directly.”9

But other researchers have looked at similar research and drawn different conclusions. For example, Hedges, Laine, and Greenwald10 analyze the same studies as Hanushek11 and conclude, using different methods, that there are “systematic positive relations between resource inputs and school outcomes.”12

How can these two seemingly contradictory conclusions be reconciled? To be sure, the studies deploy different methodologies.13 However, the difference in methodology appears to be more fundamentally driven by the different questions the researchers are asking.

Hanushek’s analysis aims to answer the following question: “If education spending is increased, should we expect to see a statistically significant increase in student achievement?” In other words, Hanushek is looking for evidence that increased educational spending usually improves educational outcomes.14

In contrast, Hedges, Laine, and Greenwald’s analysis aims to answer a different question: “Is there evidence that at least one study shows a positive relationship between a given educational resource and educational outcomes?” Hedges, Laine, and Greenwald find that sometimes increased funding improves educational outcomes, whereas sometimes it diminishes them or has no effect whatsoever.15 One benefit of this question is that it prompts researchers to subsequently investigate when increased spending is associated with improved academic outcomes. We explore some of these circumstances below.

Other researchers working within this second research question have also found differentiated results. For example, spending money on dual enrollment programs, which allow high school students to take college courses and earn college credits while still in high school, has significant positive effects on educational outcomes. Multiple rigorous studies16 have found that dual enrollment programs have positive effects on both

6 See, for example: Hanushek, “Throwing Money at Schools”; Hanushek, “The Economics of Schooling”; Hanushek, “The Impact of Differential Expenditures on School Performance.”
7 Hanushek defines a “study” as a “separate estimate of an educational production function found in the literature” (p. 142).
8 Hanushek, “Assessing the Effects of School Resources on Student Performance.”
9 Hanushek, “The Failure of Input-Based Schooling Policies.”
10 Hedges, Laine, and Greenwald, “An Exchange.”
11 Hanushek, “The Impact of Differential Expenditures on School Performance.”
12 These authors also expanded the set of studies they analyzed in Greenwald, Hedges, and Laine (1996) and reached similar conclusions.
13 For example, Hedges, Laine, and Greenwald use a more conservative sample of estimates and combine the findings from studies differently. A more detailed account of the differences between these works can be found in Hedges, Laine, and Greenwald, “Money Does Matter Somewhere”; Hanushek, “Money Might Matter Somewhere”; and Greenwald, Hedges, and Laine, “Interpreting Research on School Resources and Student Achievement.”
14 That is, that estimates from this research are usually statistically significant and positive.
15 These authors also argue that the magnitudes of the estimates are important gauges of effect sizes, regardless of their statistical significance.
16 What Works Clearinghouse has identified 5 studies—2 without reservations and 3 with reservations—that meet their group design standards. See U.S. Department of Education, “Transition to College Intervention Report: Dual Enrollment Programs.”


high school—such as increased attendance, academic achievement, and completion—and college outcomes—such as increased readiness, access and enrollment, credit accumulation, and attainment.17 Spending on early childhood programs also has important benefits for disadvantaged children. For example, four early childhood education programs—the Perry Preschool Project, the Carolina Abecedarian Project, the Infant Health and Development Program, and the Early Training Project—seem to have had positive effects on students’ IQ scores at an early18 age.19 The Perry Preschool Project and the Carolina Abecedarian Project caused increased high school graduation among females, higher adult income, increased employment among males, and a reduced number of arrests.20 In addition, Head Start21 has been shown to increase students’ reading skills.22 Such research suggests that if increased funding is spent effectively, such as on smaller class sizes in early grades or on a high-quality early childhood program for disadvantaged students, the funding can and often does influence student achievement positively.23

New Evidence

New research by C. Kirabo Jackson, Rucker Johnson, and Claudia Persico, entitled “The Effects of School Spending on Educational and Economic Outcomes: Evidence from School Finance Reforms,” offers another example of research that suggests that an increase in funding can improve student outcomes.

This study uses a nationally representative sample of students to examine the effect of increased school spending on long-term student outcomes, including educational attainment,24 high-school graduation, adult wages, adult family income, and incidence of poverty as an adult. In order to isolate the relationship between outcomes and funding, the authors specifically look at the relationship between a certain kind of funding—court-mandated spending—and student outcomes. Their quasi-experimental methods produce plausibly causal estimates.25 These two features of the research—long-run student outcomes and quasi-experimental methods—set it apart from earlier studies.

In contrast to the most common measure of student outcomes, standardized test scores, Jackson et al. choose to measure the impact of funding on student learning by using long-run student outcomes. While many long-term outcomes are associated with test scores—for example, students with higher test scores are more likely to

17 An, “The Impact of Dual Enrollment on College Degree Attainment”; Giani, Alexander, and Reyes, “Exploring Variation in the Impact of Dual-Credit Coursework on Postsecondary Outcomes”; Struhl and Vargas, “Taking College Courses in High School”; Berger et al., “Early College, Early Success”; Edmunds et al., “Smoothing the Transition to Postsecondary Education.”
18 IQ was measured from the ages of 3 to 7, depending on the program.
19 Elango et al., “Early Childhood Education.”
20 Elango et al.
21 Head Start is a federally funded national program designed to promote school readiness, mainly in low-income children from birth to 5 years old.
22 Puma et al., Head Start Impact Study. Final Report.
23 Note that how a program is implemented is incredibly important for its success. See Jepsen and Rivkin, Class Size Reduction, Teacher Quality, and Academic Achievement in Public Elementary Schools, for a discussion of the positive and negative effects of California’s sweeping class-size reduction efforts.
24 I.e., the number of years of schooling.
25 Note that in the absence of a randomized control trial, quasi-experimental methods provide the highest level of research evidence possible. Both methods used in the research, difference-in-differences and instrumental variables, are rigorous quasi-experimental methods. For example, see Schneider et al., Estimating Causal Effects Using Experimental and Observational Design, and Meyer, “Natural and Quasi-Experiments in Economics.”


graduate from high school—the long-term outcomes themselves are often of most interest for many policy applications. Further, test scores are imperfect measures of students’ true knowledge, and research suggests that they can fail to predict later-life outcomes, such as adult wages.26

In contrast to many earlier papers in this literature, which looked at the correlation between student outcomes and spending,27 this research estimates plausibly causal relationships by using two methods: difference-in-differences (DID) and instrumental variables (IV), which together mimic a randomized control trial (RCT). The DID method relies on the following logic: in order to measure the effect of money on student outcomes, researchers can look at the difference between student outcomes before and after an increase in funding. However, this difference is likely due to both the change in funding (i.e., the “treatment”) and other (unobserved) changes, such as changes in curriculum or other policy changes. Therefore, the change in outcomes over a given period of time can be compared between two different groups of students: those who received the increase in funding (the treatment group) and a group of students who did not experience funding changes (which serves as a control group). This difference between the two groups’ changes in test scores, which is actually a difference-in-differences,28 therefore gives a more accurate estimate of the effect of the treatment, additional funding, on student outcomes.
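The DID logic can be illustrated with a toy simulation. All numbers below are invented for illustration and are not from Jackson et al.; the point is only that subtracting the control group’s change removes the shared trend and isolates the treatment effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Baseline outcomes for districts that will (treatment) and will not (control)
# receive a funding increase.
pre_treat = rng.normal(50, 5, n)
pre_control = rng.normal(52, 5, n)

# Both groups drift upward over time (a shared, unobserved trend, e.g., a
# curriculum change); only the treatment group also gets the funding effect.
trend, true_effect = 3.0, 2.0
post_treat = pre_treat + trend + true_effect + rng.normal(0, 1, n)
post_control = pre_control + trend + rng.normal(0, 1, n)

# A naive before/after comparison conflates the trend with the treatment:
naive = post_treat.mean() - pre_treat.mean()   # ≈ trend + effect ≈ 5.0

# Difference-in-differences subtracts the control group's change, removing
# the shared trend and recovering the treatment effect:
did = (post_treat.mean() - pre_treat.mean()) - (
    post_control.mean() - pre_control.mean())  # ≈ 2.0, the true effect
print(f"naive: {naive:.1f}, DID: {did:.1f}")
```

Note that the estimate is only credible if the control group would have followed the same trend as the treatment group absent the funding change (the “parallel trends” assumption).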

The second method, IV, is employed because additional funding (i.e., the treatment) is not typically given to students randomly. Randomization is needed in order to credibly ensure that estimates are causal, because funding changes for many reasons that can also affect student outcomes. Therefore, the researchers use funding changes that, they argue, are unrelated to the specific characteristics of individual students or school districts: court-mandated school finance reforms. Using court-mandated funding plausibly creates random variation in funding. The combination of these two methods therefore mimics a randomized experiment—by identifying a treatment and control group and introducing random variation in funding.
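The IV idea can be sketched the same way, again with invented numbers. Here a confounder (“disadvantage”) raises funding while lowering outcomes, so the naive regression slope is biased (it even turns negative, as in the foster-care example of footnote 27), while a simple Wald/IV estimator that exploits a hypothetical as-good-as-random court order recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical confounder: disadvantaged districts receive more funding AND
# have worse outcomes for reasons unrelated to spending.
disadvantage = rng.normal(0, 1, n)
court_order = rng.integers(0, 2, n)  # instrument: assumed as-good-as-random
funding = 10 + 2 * court_order + 1.5 * disadvantage + rng.normal(0, 1, n)

true_effect = 0.5
outcome = true_effect * funding - 2.0 * disadvantage + rng.normal(0, 1, n)

# Naive OLS slope of outcome on funding is biased by the confounder:
naive = np.polyfit(funding, outcome, 1)[0]

# Wald/IV estimator: ratio of the outcome gap to the funding gap across
# values of the instrument. Valid only if the court order affects outcomes
# solely through funding.
iv = (outcome[court_order == 1].mean() - outcome[court_order == 0].mean()) / (
    funding[court_order == 1].mean() - funding[court_order == 0].mean())
print(f"naive: {naive:.2f}, IV: {iv:.2f}")  # IV ≈ 0.5; naive is biased downward
```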

Using these quasi-experimental methods, Jackson and his coauthors find that, under these circumstances, funding has a significant and positive impact on important long-term student outcomes. For example, their study finds that increasing per-pupil spending by 10% in each of the 12 years a student is in school increases the number of years students attend school by 0.3 years, increases adult wages by 7%, and decreases the annual incidence of adult poverty by 3.2 percentage points, on average. Further, the study finds that these effects are larger for students from low-income backgrounds. For example, increasing per-pupil funding by 10% for each of the 12 years a student is in school increases educational attainment by 0.5 years for poor students but by a statistically insignificant 0.1 years for non-poor students, and it increases high-school-graduation rates by approximately 10 percentage points for poor students but by only 2.5 percentage points for non-poor students. These findings suggest, for example, that if school districts spent money on equally effective initiatives, then a 22.7% increase

26 Deming, “Early Childhood Intervention and Life-Cycle Skill Development”; Chetty et al., “How Does Your Kindergarten Classroom Affect Your Earnings?”; Heckman, Pinto, and Savelyev, “Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes.”
27 Note that the estimated correlation between spending and student outcomes, which is primarily what the earlier literature on this topic estimated, can be misleading. One reason is confounding factors. For example, a district may receive more funding because of an influx of students in foster care (e.g., students affected by the opioid crisis). If these students’ outcomes are worse than those of other students in the district, then the increased funding might appear to decrease student outcomes, and the negative correlation between funding and student outcomes might lead policymakers to conclude that money hurts (or does not help) students.
28 This is a “difference-in-differences” because researchers look at (1) the change over time (i.e., difference) in student test scores in both groups and then (2) the difference between the two groups.


in spending would eliminate the average difference in the number of years poor and non-poor students attend school.
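As a rough back-of-the-envelope check (my arithmetic, not the paper’s), assuming the estimated effects scale linearly and the increase is directed at poor students:

```latex
22.7\% \times \frac{0.5\ \text{years}}{10\%} \approx 1.1\ \text{years}
```

That is, the 22.7% figure implies a poor/non-poor gap in years of schooling on the order of one year.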

Critically, this research reinforces the point that increases in school funding can matter – under particular conditions. What more might be said about these conditions? Jackson et al. (2016) suggest that increased court-mandated spending is more likely to be spent on instruction and school support services than on other, presumably less effective, reforms. The authors draw this conclusion because they find that a 10% spending increase is associated with roughly 1.4 additional school days, a 4% increase in teachers’ base salaries, and a 5.7% reduction in student-teacher ratios. Therefore, spending for these purposes, such as reduced class size, might be more effective for improving student outcomes than spending for other purposes.

In addition to how money is spent, district circumstances may also be important to these findings. The districts that received additional funding due to the court-mandated reforms had common characteristics: they were in relatively poorer areas compared to other districts in their state, for example, and had subsisted on lower per-pupil spending29 before the reforms. This specific context—a certain kind of district receiving a permanent, unforeseen infusion of funds that did not have to go toward necessities—might create special circumstances in which districts choose to spend money more productively.

Jackson et al. have thus used a novel context and rigorous, well-applied methods to advance what we know about school funding. Namely, the authors have shown that increased school funding can meaningfully improve students’ educational outcomes. Further, they find suggestive evidence that funding specific items, such as increased instruction, better support services, lower student-teacher ratios, more school days, and higher teachers’ salaries, can be an effective way to boost student achievement under certain circumstances.

Policy Implications

In light of what we know from previous literature and the new results in Jackson et al., what are the policy implications of this work? There has been some debate about this. Hanushek argues that the estimates in Jackson et al. are unrealistic, given the roughly 150% increase in education funding from 1970 to 2010 without large changes in student achievement.30 Jackson et al. counter that there have been important changes in student outcomes during this time, notably large improvements in NAEP scores among disadvantaged children from poor urban areas, increased high school completion rates among Hispanic and black students, and decreased dropout rates. The authors also point out that the increase in education funding did not occur within a vacuum; many other factors changed during this period, including an increase in the concentration of poverty, the rise of single-parent families, and mass incarceration.

It is perhaps more important to focus on methods in this debate, however.31 While Hanushek notes that the methods used in this paper include many details that could go awry, he does not point to any specific methodological problems in the Jackson paper. To be sure, this is a complex paper.32 However, the methodology used, while cleverly applied, is well known and stands up to the highest standards of peer-reviewed scrutiny. Therefore, instead of questioning whether the results appear accurate, policymakers should perhaps turn their attention towards better understanding what specific educational interventions matter, in

29 In comparison to other districts in their state.
30 Hanushek, “Money Matters After All?”; Hanushek, “Not in the Right Ballpark.”
31 Jackson, Johnson, and Persico, “Money Does Matter After All.”
32 Hanushek, “Money Matters After All?”


which circumstances, and at what cost. In short, the Jackson et al. paper, while important in showing that additional school funding can be productive, does not tell us why the money mattered in the cases they studied – only that it did. Put differently, Jackson et al.’s research offers suggestive evidence on the productive use of school funding under certain circumstances. Whether their specific findings are transferable to other contexts – i.e., to the creation or revision of a state funding formula – is as yet unclear.

Instead, the most important implication may well be the additional evidence the paper provides that money can be spent effectively to produce stronger student outcomes. This research demonstrates the need for additional studies that identify the efficacy (or lack thereof) and cost-effectiveness of increased funds across a wide range of educational circumstances. Such research would help answer the question we ultimately want answered: What is the best use of educational resources?

Citations:

An, Brian P. “The Impact of Dual Enrollment on College Degree Attainment: Do Low-SES Students Benefit?” Educational Evaluation and Policy Analysis 35, no. 1 (2013): 57–75.

Berger, Andrea, Lori Turk-Bicakci, Mich Garet, M. Song, J. Knudson, C. Haxton, K. Zeiser, G. Hoshen, J. Ford, and J. Stephan. “Early College, Early Success: Early College High School Initiative Impact Study.” Washington, DC: American Institutes for Research, 2013.

Chetty, Raj, John N. Friedman, Nathaniel Hilger, Emmanuel Saez, Diane Whitmore Schanzenbach, and Danny Yagan. “How Does Your Kindergarten Classroom Affect Your Earnings? Evidence from Project STAR.” The Quarterly Journal of Economics 126, no. 4 (2011): 1593–1660.

Coleman, James S., and others. “Equality of Educational Opportunity,” 1966. https://eric.ed.gov/?id=ED012275.

Deming, David. “Early Childhood Intervention and Life-Cycle Skill Development: Evidence from Head Start.” American Economic Journal: Applied Economics 1, no. 3 (2009): 111–134.

Edmunds, Julie A., Fatih Unlu, Elizabeth Glennie, Lawrence Bernstein, Lily Fesler, Jane Furey, and Nina Arshavsky. “Smoothing the Transition to Postsecondary Education: The Impact of the Early College Model.” Journal of Research on Educational Effectiveness 10, no. 2 (2017): 297–325.

Elango, Sneha, Jorge Luis García, James J. Heckman, and Andrés Hojman. “Early Childhood Education.” National Bureau of Economic Research, 2015. http://www.nber.org/papers/w21766.

Giani, Matthew, Celeste Alexander, and Pedro Reyes. “Exploring Variation in the Impact of Dual-Credit Coursework on Postsecondary Outcomes: A Quasi-Experimental Analysis of Texas Students.” The High School Journal 97, no. 4 (2014): 200–218.

Greenwald, Rob, Larry V. Hedges, and Richard D. Laine. “Interpreting Research on School Resources and Student Achievement: A Rejoinder to Hanushek.” Review of Educational Research 66, no. 3 (1996): 411–416.

Hanushek, Eric A. “Assessing the Effects of School Resources on Student Performance: An Update.” Educational Evaluation and Policy Analysis 19, no. 2 (1997): 141–164.


———. “Money Matters After All?” Education Next, July 17, 2015. http://educationnext.org/money-matters-after-all/.

———. “Money Might Matter Somewhere: A Response to Hedges, Laine, and Greenwald.” Educational Researcher 23, no. 4 (1994): 5–8.

———. “Not in the Right Ballpark.” Education Next, July 20, 2015. http://educationnext.org/not-right-ballpark/.

———. “School Resources.” Handbook of the Economics of Education 2 (2006): 865–908.

———. “The Economics of Schooling: Production and Efficiency in Public Schools.” Journal of Economic Literature 24, no. 3 (1986): 1141–1177.

———. “The Failure of Input-Based Schooling Policies.” The Economic Journal 113, no. 485 (2003). http://onlinelibrary.wiley.com/doi/10.1111/1468-0297.00099/full.

———. “The Impact of Differential Expenditures on School Performance.” Educational Researcher 18, no. 4 (1989): 45–62.

———. “Throwing Money at Schools.” Journal of Policy Analysis and Management 1, no. 1 (1981): 19–41.

Hanushek, Eric A., and Alfred A. Lindseth. Schoolhouses, Courthouses, and Statehouses: Solving the Funding-Achievement Puzzle in America’s Public Schools. Princeton University Press, 2009. https://books.google.com/books?hl=en&lr=&id=vNSO-6aePGIC&oi=fnd&pg=PR1&dq=hanushek+school+houses&ots=mh2EiKngKq&sig=M6MJ4QhEXMHEwsPhBUXUXiEbdmM.

Heckman, James, Rodrigo Pinto, and Peter Savelyev. “Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes.” The American Economic Review 103, no. 6 (2013): 2052–2086.

Hedges, Larry V., Richard D. Laine, and Rob Greenwald. “An Exchange: Part I*: Does Money Matter? A Meta-Analysis of Studies of the Effects of Differential School Inputs on Student Outcomes.” Educational Researcher 23, no. 3 (1994): 5–14.

———. “Money Does Matter Somewhere: A Reply to Hanushek.” Educational Researcher 23, no. 4 (1994): 9– 10.

Jackson, C. Kirabo, Rucker C. Johnson, and Claudia Persico. “Money Does Matter After All.” Education Next, July 17, 2015. http://educationnext.org/money-matter/.

———. “The Effects of School Spending on Educational and Economic Outcomes: Evidence from School Finance Reforms.” The Quarterly Journal of Economics 131, no. 1 (2016): 157–218.

Jepsen, Christopher, and Steven Gary Rivkin. Class Size Reduction, Teacher Quality, and Academic Achievement in California Public Elementary Schools. Public Policy Institute of California, 2002. http://www.ppic.org/content/pubs/report/R_602CJR.pdf.

Meyer, Bruce D. “Natural and Quasi-Experiments in Economics.” Journal of Business & Economic Statistics 13, no. 2 (1995): 151–161.


Puma, Michael, Stephen Bell, Ronna Cook, Camilla Heid, Gary Shapiro, Pam Broene, Frank Jenkins, et al. Head Start Impact Study. Final Report. Administration for Children & Families, 2010. https://eric.ed.gov/?id=ED507845.

Roza, Marguerite. Educational Economics: Where Do School Funds Go? ERIC, 2010. https://eric.ed.gov/?id=ED511714.

Schneider, Barbara, Martin Carnoy, Jeremy Kilpatrick, William H. Schmidt, and Richard J. Shavelson. Estimating Causal Effects Using Experimental and Observational Design. American Educational Research Association, 2007.

Struhl, Ben, and Joel Vargas. “Taking College Courses in High School: A Strategy Guide for College Readiness–The College Outcomes of Dual Enrollment in Texas.” Jobs for the Future, 2012.

U.S. Department of Education, Institute of Education Sciences. “Transition to College Intervention Report: Dual Enrollment Programs.” What Works Clearinghouse, February 2017. https://ies.ed.gov/ncee/wwc/Intervention/1043.
