
Journal of Economic Perspectives—Volume 30, Number 3—Summer 2016—Pages 57–84

What Can We Learn from Charter School Lotteries?

Julia Chabrier, Sarah Cohodes, and Philip Oreopoulos

Publicly funded charter schools, which set their own curriculum, financial management, and staffing, were originally designed as testing grounds for trying out new and innovative approaches for improving academic achievement. From the first few charter schools, started in Minnesota in 1993 with a few dozen students, enrollment has increased to about three million across 7,000 schools (National Center for Education Statistics 2015), which is more than 5 percent of all public elementary and secondary students in the country. In some large urban districts, like Indianapolis, Philadelphia, Detroit, and Washington, DC, more than 30 percent of students attend charter schools. In the 2014–2015 school year, the New Orleans Recovery School District became the first US district composed entirely of charter schools (National Alliance for Public Charter Schools 2015a; Abdulkadiroglu, Angrist, Hull, and Pathak 2016).

All charter schools are free to students. Anyone residing in a given geography (which, depending on state law, would be the district, region, or state where the charter school is located) is eligible to attend. Increasingly, however, applicants

■ Julia Chabrier is a Policy Manager, Abdul Latif Jameel Poverty Action Lab (J-PAL), Massachusetts Institute of Technology, Cambridge, Massachusetts. Sarah Cohodes is Assistant Professor of Education and Public Policy, Teachers College, Columbia University, New York City, New York. Philip Oreopoulos is Professor of Economics, University of Toronto, Toronto, Canada. He is also Faculty Research Associate, National Bureau of Economic Research, Cambridge, Massachusetts, and Co-director, Canadian Institute for Advanced Research, Toronto, Canada. Their email addresses are [email protected], [email protected], and [email protected].
† For supplementary materials such as appendices, datasets, and author disclosure statements, see the article page at http://dx.doi.org/10.1257/jep.30.3.57

exceed the spots available. When faced with too many applicants, charters must admit students by lottery. Systematic evidence on what share of charters are oversubscribed is scant, but the authors of a national evaluation of charter school impacts estimated that about 26 percent of charter middle schools were likely to be oversubscribed in the 2006–2007 school year (Gleason, Clark, Clark Tuttle, Dwoyer, and Silverberg 2010; see also Clark Tuttle, Gleason, and Clark 2012). However, in disadvantaged urban neighborhoods, some charter schools admit fewer than 20 percent of their applicants. Lotteries are sometimes held in large auditoriums in front of anxious parents and children, leading to heartbreaking scenes of disappointment like those in the 2010 documentary, Waiting for Superman. Lottery losers often must default back to attending some of the worst performing schools in the country.1

To remove the incentive for parents to apply separately to multiple schools and to maximize the number of students who get into at least one school, a few school districts now centralize the lottery process, often using mechanisms that draw upon 2012 Nobel prize-winner Alvin Roth's work on market design. Results from the District of Columbia's most recent common lottery provide an indicator of oversubscribed demand: of the 17,000 students who entered the unified lottery, 71 percent received an offer from at least one school on their list, but only 60 percent received an offer from one of their top three choices (as reported in Brown 2014).

Charter school authorizers, as designated by state law, choose which charters to grant, provide ongoing oversight of charter schools, and make renewal decisions at the end of the charter contract term (typically every five years). Charter schools are allowed to operate with a degree of autonomy from some of the rules and regulations governing traditional public schools, and so those who want to start a charter school typically must submit a lengthy application, including a mission statement about what will differentiate their proposed school. Decisions about whether to renew are often based on relative test score measures or financial health (including enrollment). Schools do close, sometimes suddenly, compelling students to find another charter school option or revert back to their local traditional public school. For example, about 3 percent of all charter schools closed in 2014 (National Alliance for Public Charter Schools 2015b, p. 2). In Texas and North Carolina, respectively, Baude, Casey, Hanushek, and Rivkin (2014) and Ladd, Clotfelter, and Holbein (2015) conclude that charters that close are disproportionately less effective, while those that remain open improve in value-added over time.

The required process of random assignment for charter schools with too many applicants can bring worry and letdown for lottery participants, but it also generates an opportunity for research. Over the past decade, a number of studies have been able to gather data from lottery results and match them to administrative records to allow for rigorous evaluation of the impact of charter school attendance on student outcomes. Most of these studies look at 3 to 30 schools at a time. The results show wide dispersion. Some charter schools are estimated to increase performance on

1 For examples of oversubscribed demand at popular charter schools in Baltimore, see Wiltenburg (2015); for examples in New York City, Chapman and Brown (2014); for examples in Massachusetts, Pisano (2015); for examples in Houston, Rahman (2015).

state-required tests (especially math scores) by more than half a standard deviation per year of attendance, while others are estimated to have substantial negative effects. The estimates are often imprecise, with large standard errors.

In this paper, we look at the results from the research on charter schools that has taken advantage of evidence from lotteries and also take a more in-depth look at school-level differences. We do not attempt to answer the controversial question of whether more (or fewer) charter schools would benefit students, on average, since lottery studies are limited by the fact that they examine only schools that are oversubscribed and do not examine impacts for students who do not apply (for a discussion of different sides of this debate, see the website “Charter Schools in Perspective”). Rather, our intent is to ask which charter schools benefit which kinds of students. In so doing, we hope to learn what sorts of activities happening at successful charters might be worth expanding into other schools.

A general conclusion emerging from the previous literature, which we will discuss more in this paper, is that the distinguishing feature of the charter schools with the largest positive effects is their adoption of an intensive “No Excuses” approach with strict and clear disciplinary policies, mandated intensive tutoring, longer instruction times, frequent teacher feedback, and a relentless effort to help all students. These factors need not be exclusive to charter schools: for example, Fryer (2014, 2016) offers evidence that reinventing traditional public schools in urban settings to have these characteristics can lead to similarly large performance improvements.

In line with the earlier literature, we also find that adoption of a No Excuses approach is correlated with large positive effects on academic performance. However, we find that No Excuses schools are concentrated in urban neighborhoods with very poor-performing schools and are scarce in nonurban areas. Thus one reason for the large effects achieved by No Excuses urban schools is that fallback public schools for urban students have such poor performance. Neal (2009) makes a similar point that private school returns are largest for urban minority students. Once the performance levels of fallback schools are taken into account, and we look at the individual components of a No Excuses approach using charter school level data, we find that intensive tutoring is the only characteristic that remains significant in improving student performance. Tutoring offered at charter schools is typically more intense than tutoring offered at traditional public schools. Charter schools often use paid tutors, add tutoring on top of already long school days, and require all students to participate. This finding about the importance of tutoring is in line with other recent evidence pointing to dramatic effects from intensive tutoring on its own, suggesting a good place to start for effective and practical reform at traditional public schools.

Lottery Studies of Charter Schools

When the first charter school legislation was enacted in 1991 in Minnesota, the law specified that oversubscribed schools would be filled by lottery (Junge 2014), although some states allow charters to give preference to certain students, such as siblings, children of employees, or educationally disadvantaged students (National

Alliance for Public Charter Schools 2015c). We know of 16 studies of charter schools that have used lotteries as a way to draw conclusions about their efficacy. Some of these studies also include results using a matching-on-observables approach, which we consider less convincing; for the purpose of this paper, we focus on the lottery-based findings. First, we sketch how such lottery studies are conducted and then review the results.

The Methodology of Lottery Studies
In broad terms, the methodology of these studies is to compare those who won a charter school lottery with those who did not. Of course, complexities arise. One challenge is that researchers must take into account that not all winners attend charter schools and not all losers end up at traditional public schools. In Boston, for example, Abdulkadiroglu, Angrist, Dynarski, Kane, and Pathak (2011) find that one-fifth of lottery winners never attend a charter school and some lottery losers eventually end up in one (by moving off a waitlist, entering a future admissions lottery, or gaining sibling preference when a sibling wins the lottery). Therefore, in most studies of how charter schools affect test scores, researchers measure the effects in two stages, first estimating how winning a lottery predicts increased attendance at charter schools and, second, estimating how this predicted increased attendance affects achievement.2 Because effects of attending a charter school are identified based on differences between initial lottery winners and losers, selection in who enrolls or persists in charter schools does not bias the causal estimates. While this approach addresses internal validity, external validity concerns may arise if the potential impact of charters is weaker for those who do not apply (but would have gotten in had they done so).

Fixed effects are usually added to the estimating equation for each group of students that applied to the same set of school lotteries, to ensure that winner–loser comparisons are made between students who had an equal chance of being selected to the set of schools to which they applied. In many cases, test score data from different grade levels are stacked together, implicitly assuming that the effect of attendance grows by the same amount for each additional year spent in a charter school. Pooling multiple test results per student, with standard errors clustered at the student level, can also help increase precision.
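To make the two-stage procedure concrete, the sketch below shows one way such an estimate could be coded. It is a minimal illustration in Python using the linearmodels package, assuming a stacked student-by-test data set; the data frame and column names (score, years_in_charter, offer, risk_set, student_id) are hypothetical stand-ins, not variables from any of the studies discussed here.

```python
# Minimal sketch of the two-stage (2SLS) lottery design described above.
# Assumes a stacked student-by-test DataFrame `df` with hypothetical columns:
#   score            standardized test score (one row per student per test)
#   years_in_charter years of charter attendance at test time (endogenous)
#   offer            1 if the student won a charter lottery offer (instrument)
#   risk_set         identifier for applicants to the same set of lotteries
#   student_id       student identifier, used to cluster standard errors
import pandas as pd
from linearmodels.iv import IV2SLS

def charter_2sls(df: pd.DataFrame):
    # Risk-set fixed effects keep winner-loser comparisons within groups of
    # applicants who faced the same admission odds.
    risk_dummies = pd.get_dummies(df["risk_set"], prefix="rs", drop_first=True).astype(float)
    exog = risk_dummies.assign(const=1.0)

    model = IV2SLS(
        dependent=df["score"],
        exog=exog,
        endog=df["years_in_charter"],
        instruments=df["offer"],
    )
    # Clustering by student accounts for multiple stacked scores per student.
    return model.fit(cov_type="clustered", clusters=df["student_id"])

# results = charter_2sls(df)
# results.params["years_in_charter"] is the per-year effect in standard deviations.
```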

An Overview of the Studies
We summarize lottery-based charter school research in Table 1. The studies described in Table 1 do not include all charter schools that have held lotteries. To do research on outcomes of winners and losers in a charter school lottery:

2 In other words, winning a charter school lottery is used as an instrumental variable for charter school attendance. Conceptually, researchers estimate the “intention-to-treat” (ITT) effect of winning a lottery for a charter school seat on the outcome of interest (for example, student test scores) by calculating the difference in average outcomes between lottery winners and losers. The “local average treatment effect” (LATE) of charter school attendance on the outcome of interest is calculated by scaling up the ITT estimate by the difference in charter school attendance between lottery winners and lottery losers (this is sometimes called the treatment on the treated (TOT) when no or few lottery losers gain entry to charter schools).
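As a purely illustrative piece of arithmetic (the numbers below are invented, not drawn from any study), the scaling described in footnote 2 works as follows:

```python
# Illustrative, made-up numbers showing how a LATE is recovered from an ITT.
itt = 0.12          # lottery winners outscore losers by 0.12 sd (intention-to-treat)
first_stage = 0.60  # winning the lottery raises charter attendance by 0.60

late = itt / first_stage  # effect of charter attendance itself, for compliers
print(f"LATE = {late:.2f} standard deviations")  # 0.20 sd
```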

Table 1 Summary of Lottery-Based Charter School Estimates of Reading and Math Test Score Impacts

Setting | Sample | Paper | Two-stage least squares impacts of per-year charter attendance (all effects significant at 5% level unless otherwise noted)
Massachusetts | Boston (8 schools) | Abdulkadiroglu, Angrist, Dynarski, Kane, and Pathak (QJE, 2011) | MS: 0.198 sd ELA, 0.359 sd math; HS: 0.265 sd ELA, 0.364 sd math
Massachusetts | Boston (13 schools) | Cohodes, Setren, Walters, Angrist, and Pathak (Boston Foundation, 2013) | MS: 0.138 sd ELA, 0.256 sd math; HS: 0.271 sd ELA, 0.354 sd math
Massachusetts | Massachusetts (26 schools) | Angrist, Pathak, and Walters (AEJ: Applied Economics, 2013) | MS: 0.075 sd ELA, 0.213 sd math; HS: 0.206 sd ELA, 0.273 sd math
Massachusetts | KIPP Lynn | Angrist, Dynarski, Kane, Pathak, and Walters (JPAM, 2012) | MS: 0.133 sd ELA, 0.352 sd math
Massachusetts | UP Academy Charter School of Boston | Abdulkadiroglu, Angrist, Hull, and Pathak (NBER Working Paper, 2014) | MS: 0.118 sd ELA, 0.270 sd math
National | 15 states (36 schools) | Gleason, Clark, Clark Tuttle, Dwoyer, and Silverberg (2010) | MS: –0.04 sd reading, –0.04 sd math (not significant). † Year 2 impacts divided by 2 to get per-year estimates
National | KIPP schools (24 schools) | Clark Tuttle, Gleason, Knechtel, Nichols-Barrer, Booker, Chojnacki, Coen, and Goble (Mathematica Policy Research, 2015) | ES: 0.11 sd on letter-word identification and 0.10 sd on passage comprehension test in reading, 0.14 sd on calculation, 0.02 sd (not significant) on applied problems in math. From study-administered Woodcock-Johnson exam. † Year 3 impacts divided by 3 to get a per-year estimate. MS: 0.08 sd reading, 0.12 sd math. † Year 2 impacts divided by 2 to get a per-year estimate
National | KIPP middle schools (12 schools) | Clark Tuttle, Gill, Gleason, Knechtel, Nichols-Barrer, Resch (Mathematica Policy Research, 2013) | 0.08 reading (not significant), 0.18 math. † Year 2 impacts divided by 2 to get a per-year estimate
National | Charter schools that were members of charter management organizations in 14 states (16 schools in 6 sites; estimates aggregated by site) | Furgeson, Gill, Haimson, Killewald, McCullough, Nichols-Barrer, Teh, Verbitsky-Savitz, Bowen, Demeritt, Hill, and Lake (Mathematica Policy Research, 2012) | Intention-to-treat estimates: MS/HS: –0.02 reading (not significant), –0.05 math (not significant)
New York City | New York City (42 schools) | Hoxby, Murarka, and Kang (2009) | ES/MS: 0.09 sd ELA, 0.12 sd math; HS: 0.18 sd ELA, 0.19 sd math
New York City | New York City (29 schools) | Dobbie and Fryer (AEJ: Applied Economics, 2013) | ES: 0.058 sd ELA, 0.113 sd math; MS: 0.048 ELA (not significant), 0.126 math
New York City | Harlem Children's Zone Promise Academy middle school | Dobbie and Fryer (JPE, 2015) | 0.031 sd (not significant) reading, 0.075 sd math. From study-administered Woodcock-Johnson exam
New York City | Harlem Children's Zone Promise Academy middle and elementary schools | Dobbie and Fryer (AEJ: Applied Economics, 2011) | ES: 0.114 sd ELA (not significant), 0.191 sd math (not significant); MS: 0.047 sd ELA (not significant), 0.229 sd math
Chicago | Chicago International Charter School schools (3 schools) | Hoxby and Rockoff (Unpublished paper, 2004) | No significant impacts on math or reading (dependent variable is percentile score on Iowa Test of Basic Skills)
Unknown | Anonymous No Excuses charter schools run by prominent CMO in mid-sized urban school district (4 schools) | Hastings, Nielson, Zimmerman (NBER Working Paper, 2012) | 0.346 sd reading, –0.092 sd math (not significant); estimates are a mix of different years
Washington, DC | SEED School | Curto and Fryer (JLE, 2014) | 0.211 sd reading, 0.229 sd math

Notes: This table only includes studies that use charter school lotteries to estimate effects on test scores. Some of these studies also include or focus on observational results, which are not reported here. In some cases where there are multiple studies of the same setting, we focus on published academic studies, adding studies when it appears that a substantial number of additional schools have been added. All impacts are second stage estimates reported in standard deviations and are statistically significant unless noted otherwise. Citations in boldface type indicate that the study contributes to the analyses presented in this paper. See Appendix Table 1 for more details on the studies indicated in boldface. ES = elementary school, MS = middle school, HS = high school, sd = standard deviation, ELA = English/language arts, CMO = charter management organization.

records must be in suitable condition; enough time must elapse to observe student outcomes of interest; researchers must obtain permission from schools to work with their lottery records; and, because of federal privacy law, the matching of lottery records to student test scores often requires either individual consent from study participants or collaboration with state or school district administrators who can conduct or supervise the match. In cases of multiple studies working with the same data or location, we focus here on the most recent published academic study or report or, if that is not available, the most recent unpublished study. In some cases in the discussion that follows, we will rescale the estimates of charter school effects to be comparable across studies.3

Hoxby and Rockoff (2004) collected admissions lottery data from three No Excuses–style Chicago International Charter Schools (CICS), which deliberately

3 More specifically, in cases where a study reported only the intention-to-treat effect (the outcome effect from winning a lottery) and no first stage estimate (the effect of winning a lottery on attendance), we noted this in Table 1. If the first stage and intention-to-treat are reported but a local average treatment effect is not, we divide by the best estimate of the first stage. In cases where a study reported only cumulative estimates, we divided the final year estimate by the number of years observed to obtain a per-year estimate. When we convert estimates to per-year or second stage estimates, we also divide the standard errors by the same factors by which we divide the coefficients. In the cases where we are converting intention-to-treat estimates to second stage estimates, this will not correct the standard errors as a typical two-stage least squares procedure would in a statistical software program. Thus our standard errors are likely slightly too small for a subset of the charter school impact estimates that are based on intention-to-treat estimates—those from the Knowledge is Power Program (KIPP) (Clark Tuttle et al. 2013) and charter management organization (Furgeson et al. 2012) studies. We follow these conventions in our data analysis as well. Means and standard deviations are weighted by the inverse of the standard error of the relevant point estimates, both here and throughout our study.
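The conversions described in footnote 3 amount to simple rescaling and weighting. The sketch below, with invented numbers and helper names of our own, is only meant to illustrate that arithmetic, not to reproduce any study's code:

```python
import numpy as np

def per_year(cumulative_estimate, cumulative_se, years):
    """Convert a cumulative (e.g., year-2) impact into a per-year impact.
    The standard error is divided by the same factor as the coefficient,
    which, as footnote 3 notes, skips the usual 2SLS adjustment."""
    return cumulative_estimate / years, cumulative_se / years

def inverse_se_weighted_mean(estimates, standard_errors):
    """Average of school-level effects, weighted by the inverse of each
    estimate's standard error (the weighting used throughout the paper)."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(standard_errors, dtype=float)
    return float(np.sum(weights * estimates) / np.sum(weights))

# Made-up example: a year-2 impact of 0.16 sd (se 0.08) becomes 0.08 sd per year (se 0.04).
print(per_year(0.16, 0.08, years=2))
print(inverse_se_weighted_mean([0.35, 0.10, -0.05], [0.05, 0.10, 0.20]))
```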

locate in disadvantaged urban communities to target low-income families. Hoxby and Rockoff had admissions lottery data matched to Chicago Public School administrative data on test score outcomes. They find small positive effects of charter school attendance that are not statistically significant at conventional levels.

Around the same time as Hoxby and Rockoff's study, another team of economists began collecting charter school lottery data from Massachusetts and, with support from state officials, obtained access to administrative public school data for matching. Abdulkadiroglu et al. (2011) focus on students residing in Boston prior to applying to at least one of five charter middle schools or one of three charter high schools where high demand caused the schools to be oversubscribed. They find very large average effects: charter school attendance increases state-level English/language arts and math performance test scores by 0.2 and 0.35 standard deviations per year, respectively. Given that the achievement gap between black and white students in Massachusetts is about 0.7 to 0.8 standard deviations, these estimates suggest that three years of charter school attendance for black students would eliminate the black-white performance gap. Angrist, Pathak, and Walters (2013) update this analysis to include urban and nonurban schools across Massachusetts, along with additional years of test score data. They continue to find positive average charter school effects on test scores, but these effects appear in urban schools only and with wide variance across schools—a finding we revisit later in this paper.

The New York City Department of Education also facilitated the matching of charter school lottery data with standardized test scores in English/language arts and math. Dobbie and Fryer (2013) collected data from 19 elementary and 10 middle schools that were oversubscribed. They also find that charter school attendance increases test scores, especially for math scores, though again with large variance across schools. In an earlier lottery-based study of New York City charter schools, Hoxby, Murarka, and Kang (2009) also found large and significant results for middle schools and report even larger positive effects for charter high schools.

Studies that use survey data for national samples of charter schools tend to find positive but not statistically significant overall impacts. Both Gleason et al. (2010) and Furgeson et al. (2012) contacted charter schools asking for permission to survey lottery applicants and obtain consent prior to randomization. The Furgeson et al. group also collected retrospective data to match directly with administrative data. Among the 77 charter middle schools that agreed to participate in Gleason et al. (2010), only 36 ended up with a large enough waiting list to use in their study. On average, lottery winners performed no better and no worse in math and reading scores than lottery losers two years after students applied, though as in Massachusetts, urban charters outperformed nonurban ones. Furgeson et al. (2012) identified 16 charter schools (of 109 schools run by charter management organizations) with adequate records and also find insignificant overall test score effects from winning the lottery. Estimates from survey data, however, are generally more imprecise than those using administrative data.

Seven additional lottery-based studies estimate charter impacts for specific schools or organizations.
Three of these studies examine the Knowledge Is Power

Program (KIPP) charter schools. KIPP is the largest network of charter schools in the country and is often described as the source of the No Excuses movement (as reported in Rotherham 2011). In KIPP schools, principals and teachers have high behavioral and academic expectations for all students. Further, parents, students, and teachers sign a “learning pledge” and follow a strict disciplinary code. School hours are extended, typically running from 7:30 AM to 5:00 PM and including occasional Saturdays and summer weeks, and tutoring is also offered during these times. In the 2014–2015 school year, KIPP's network included 162 schools serving 58,495 students in prekindergarten through grade 12 (Clark Tuttle et al. 2015, xiii). All three KIPP lottery studies listed in Table 1 find significant positive charter attendance effects on achievement (Angrist, Dynarski, Kane, Pathak, and Walters 2012; Clark Tuttle et al. 2013; Clark Tuttle et al. 2015). In addition to the test score results, Clark Tuttle et al. (2013) also find that KIPP attendance increases the amount of homework per night by about 45 minutes and increases school satisfaction but does not affect effort or engagement.

The Promise Academy charter schools in the Harlem Children's Zone (HCZ) contain many similar No Excuses elements. Dobbie and Fryer (2011) estimate that attendance at the Promise Academy raises test scores by about 0.20 standard deviations per year, although effects on English/language arts were not significant. The study also finds that attendance at the Promise Academy reduces absenteeism.

Two other charter schools aligned with the No Excuses model have been evaluated. The Unlocking Potential (UP) Network focuses on in-district school turnaround for chronically underperforming schools. In 2011, UP Academy Charter School of Boston replaced a failing traditional public school in Boston; within a year, the school was required to hold a lottery to address oversubscription (as reported in Nix 2015). Abdulkadiroglu, Angrist, Hull, and Pathak (2016) find lottery-based UP attendance effects of 0.12 standard deviations per year for English/language arts scores and 0.27 standard deviations for math. SEED schools are No Excuses boarding schools in Baltimore and Washington, DC, for students from disadvantaged backgrounds in grades 6 through 12. At the Washington, DC, school, Curto and Fryer (2014) estimate increases in math scores of 0.23 standard deviations and reading scores of 0.21 standard deviations per year of attendance.

Many of the estimated effects in Table 1 are impressive. Attendance at some charter schools leads to large test score effects of more than half a standard deviation after two years of attendance. Most educational interventions, such as class size reductions, teacher or student incentives, more resources, or extended time, generate gains that are less than one-quarter of this amount (Fryer 2016). However, while the large impacts from attending No Excuses schools like KIPP, UP Academy, and the Promise Academy are encouraging, some of the other charters generate no effect or even negative effects. Overall, the per-year average effect of attending a charter school in our sample of 113 schools is 0.080 standard deviations in math and 0.046 standard deviations in English/language arts. Our real interest from these papers, however, is not whether charter schools are effective on average, but rather what makes an effective charter school. Therefore, we dig a little deeper.

School-Specific Effects

The main focus of the studies of charter schools that use lottery-based evidence is usually to compare a group of charter schools to a group of alternatives. However, we want to look at how school-level characteristics of charter schools may influence the results—in particular, whether the estimated effects of charter schools are larger in poor-performing urban neighborhoods—and at the effects on certain subsets of students: black students, Hispanic students, students who were performing poorly in the past, and students who did not apply but would have gotten in had they applied. We also look at some of the estimated effects of charter schools on non-test-score outcomes. We will refer to some individual studies from Table 1 that do this, and in addition, we combine school-based data from several of these studies (indicated in boldface type) to gain insight and statistical power.4

Larger Effects in Poor-Performing Urban Neighborhoods
We estimate charter school impacts relative to the experience of students who lose the lottery at that charter school. A charter school that attracts students who would have otherwise attended a particularly poor-performing traditional public school would appear more effective than an identical charter school that draws students who would have otherwise attended a better performing school (due to declines or less growth at the fallback school).5

As mentioned earlier, Angrist et al. (2013) find stark differences in the positive effects that can be attributed to a charter school according to whether the school is located in an urban or nonurban setting.6 The large positive gains from

4 The online Appendix provides details of the data used in the rest of the analysis. Online Appendix Table 1 lists only the eight studies from Table 1 in boldface type that are included in our quantitative analyses; these studies cover 113 schools in total. Some studies in Table 1 were excluded from our school-based analysis because they were superseded by another paper: for example, the Boston schools are included in the Massachusetts study, and the 2009 study of New York City schools was replaced by a more recent 2013 study. Also, the results of Hoxby and Rockoff (2004) could not be converted to standard deviations and Hastings et al. (2012) could not be converted to per-year second stage effects. Online Appendix Figures 1A and 1B are histograms showing the wide range of estimated standardized effects on math or English/language arts tests from a year of attending these schools; the mean effect is positive but imprecisely estimated, with large standard errors: the average per-year math test score effect is 0.080 and its standard deviation is 0.23; the average English/language arts effect is 0.046 with a standard deviation of 0.21. Online Appendix Figures 2A and 2B plot the math and English/language arts effect sizes against their corresponding standard errors to show that large point estimates are often accompanied by large standard errors. Online Appendix Figure 3 shows that if we focus on charter schools where the standard errors are estimated with some precision—less than or equal to 0.1 standard deviations—the charter school effects on math and English/language arts scores show a positive correlation of 0.64. The high correlation implies that schools good at improving one subject are often good at improving others, and that these estimates have good signal-to-noise ratios. Online Appendix Table 2 shows how school characteristic variables are defined for the studies included in our regression analyses.
5 Hastings, Nielson, and Zimmerman (2012) examine whether winning a school choice lottery impacts students' academic achievement even before they enroll in their chosen schools by raising their intrinsic motivation. They find that charter and magnet school lottery winners in an anonymous urban school district had truancy rates that were 7 percent lower than lottery losers in the period after the lottery was held but before winners enrolled in their new schools.
6 Angrist, Pathak, and Walters (2013) define urban schools as those located in areas where the district superintendent participates in the Massachusetts Urban Superintendents Network. This includes Boston, as well as smaller districts such as Cambridge, Holyoke, Lawrence, and Worcester. In Massachusetts, urban

Figure 1
School-Level Charter School Effects by Scores of Fallback Schools

[Figure: Panel A (Massachusetts) and Panel B (National study) each contain two scatterplots, one for math effects and one for English/Language Arts (ELA) effects, plotting school-level charter effects against the average score of the fallback school, with urban and nonurban schools marked separately and fitted values shown. Fitted slopes: Massachusetts math β = −0.629 (SE = 0.086), Massachusetts ELA β = −0.373 (SE = 0.063); national study math β = −0.227 (SE = 0.073), national study ELA β = −0.100 (SE = 0.078).]

Notes: This graph shows school-level lottery-based charter school effects, where the effects are per-year school-level second stage point estimates, plotted against the average scores of fallback schools attended by noncharter students who applied to the charter school. The size of each point is weighted by the inverse of the standard error (larger points are more precise estimates). The following studies are included in this figure: the national study (Gleason et al. 2010) and Massachusetts (Angrist et al. 2013). See online Appendix Table 1 for details on these studies and for notes on modifications of published point estimates which put estimates on the same scale. See online Appendix Table 2 for a description of the calculation of the fallback school scores.

the Massachusetts studies are concentrated among urban charter schools, while nonurban charters are generally ineffective and some may even make students worse off than if they had lost the lottery. We show this pattern in the top two panels of Figure 1, which plots the Massachusetts estimates by average achievement levels at the fallback schools for lottery losers. The fallback school achievement level is measured as the average test score at the noncharter school that lottery losers attend the following year, weighted by the number of students that attend. Students at the urban schools that lottery losers attend score well below average in test scores, while

charter schools are almost uniformly located in areas with high poverty rates and high minority enrollment. We follow the definitions of variables as defined in their original studies. See Online Appendix Table 2 for a full list of variable definitions across studies.

the students at fallback nonurban schools generally score above average. The solid circles indicate effects from attendance at urban charter schools, which are almost uniformly positive. Larger circles indicate more precise estimates (that is, smaller standard errors). The average urban charter school math effect is 0.25 (s.e. = 0.044). The open circles that indicate nonurban effects are mostly close to zero or even negative. The average math impact at nonurban charters is –0.07 (s.e. = 0.092).7

The top left graph in Figure 1 shows that when regressing the charter school effect in math on average scores at the fallback schools, we get a strong negative relationship (–0.629, s.e. = 0.086). The R2 is more than half (0.513). An indicator for whether a school is in an urban area has no additional explanatory power.8 The top right graph of Figure 1 shows a qualitatively similar pattern, although less extreme, for charter school English/language arts impacts by test scores at the fallback institutions. Clearly, the most impressive charter school effects are found where fallback schools have the least impressive academic performance.

The national charter school study by Gleason et al. (2010) also displays a noticeable negative relationship between charter school effects and conditions at fallback schools. In this case, we use their dummy variable indicating “Large City” to define urban versus nonurban areas. For the performance level of fallback schools, we use the standardized average proficiency rate of the traditional public schools attended by lottery applicants in the year and grade level after losing a charter lottery (which is not on the same scale as the Massachusetts variable). The bottom left graph of Figure 1 shows that the slope from regressing charter math impacts on performance levels at fallback schools is also negative in these data (–0.227, s.e. = 0.073). Again, the slope remains essentially the same when adding the urban dummy (–0.191, s.e. = 0.088). The slope for English/language arts test score impacts regressed on fallback school performance is also negative, but less steep and not significant.

The importance of the fallback school to the size of the effect of enrolling in a charter school can also be seen in the top two graphs of Figure 2, which trace the accumulation of charter school effects over time for applicants to urban and nonurban middle schools in Massachusetts. We calculate percent proficient9 on the state standardized exam for urban charter attendees who were offered a seat in the lottery (solid, dark line) and noncharter attendees who were not offered a seat in the lottery (solid, lighter line) at each grade level, with similar calculations for the nonurban charter applicants (dashed lines), using the methods from Abadie (2002, 2003) as described in Angrist, Cohodes, Dynarski, Pathak, and Walters (2016). In both subjects in middle school, applicants to urban and nonurban charters have

7 Results are very similar, though less precise, when we use control complier test scores rather than the average school outcome of lottery losers to measure scores at fallback schools. Additionally, to address the concern that these findings reflect a mechanical correlation due to the presence of lottery losers' outcomes in the fallback school scores, we recalculate the fallback school scores using the prior year's scores (which the lottery losers do not contribute to). The findings are essentially identical, likely due to the relatively small proportion of lottery losers in any given school.
8 Specifically, the slope and R2 remain about the same when adding a dummy variable for the school being in an urban area (–0.658, s.e. = 0.375).
9 We use percent proficient as opposed to mean scores, so we are making comparisons to a set standard rather than the state mean. However, mean scores show a very similar pattern.

Figure 2
Urban and Nonurban Charter School Effects over Time

[Figure: Panel A (Middle School) and Panel B (High School) each show math and English/Language Arts (ELA) plots of the percentage of students scoring proficient or above, for urban charter, urban noncharter, nonurban charter, and nonurban noncharter attendees. Panel A runs from baseline through 5th, 6th, 7th, and 8th grade; Panel B shows baseline and 10th grade.]

Notes: This graph shows charter school effects for urban (solid lines) and nonurban charters (dashed lines) in Massachusetts over time. The darker line in each pair shows mean scores for charter school attendees who were offered a seat in the lottery (compliers) over time and the lighter line shows mean scores for noncharter attendees who were not offered a seat in the lottery (compliers). Scores for compliers were calculated using the methods from Abadie (2002, 2003). The gap between the lines is the second stage charter school effect at that grade level, using an indicator for charter school attendance as the endogenous variable. Percent proficient or above is the percentage of students who score 240 or higher on the scaled score of their state administered standardized test (MCAS).

very low proficiency rates at baseline. Then the proficiency rates diverge, with lottery winners who attend urban charter schools increasing their proficiency substantially over time, from about 30 to 70 percent. In math, lottery losers who attend urban noncharter schools actually have proficiency rates lower than their baseline rate by 8th grade. In nonurban schools (dashed lines), the opposite is true: noncharter attendees (the light dashed lines) improve over time, and charter attendees (the dark dashed lines) do worse. The figure also shows that, by 8th grade, the proficiency of urban charter school attendees is in the range of children in the suburbs. The pattern for urban high schools, shown in the bottom panels of Figure 2, also indicates a large proficiency gap of about 20 percentage points that opens up between charter and noncharter schools after two years for both math and English.
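The school-level regressions summarized in Figure 1 can be reproduced, in spirit, with a short weighted least squares routine. The sketch below assumes a hypothetical school-level data frame with columns math_effect, math_effect_se, fallback_score, and urban; it follows the weighting described in the figure notes (the inverse of each estimate's standard error) but is not the studies' actual code.

```python
# Sketch of the school-level regression underlying Figure 1: charter school
# math effects regressed on the average score of applicants' fallback schools,
# weighted by the inverse of each school-level estimate's standard error.
import pandas as pd
import statsmodels.api as sm

def fallback_regression(schools: pd.DataFrame, add_urban_dummy: bool = False):
    X = schools[["fallback_score"]].copy()
    if add_urban_dummy:
        # An urban indicator adds little once fallback scores are included.
        X["urban"] = schools["urban"]
    X = sm.add_constant(X)
    weights = 1.0 / schools["math_effect_se"]
    return sm.WLS(schools["math_effect"], X, weights=weights).fit()

# fit = fallback_regression(schools)
# fit.params["fallback_score"] and fit.rsquared give the slope and R-squared.
```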

Larger Effects for Black, Hispanic, and Previously Poor-Performing Students
The urban charter school advantage is fairly consistent across subgroups. Table 2 reports per-year local average treatment effects of charter school attendance for different subgroups of students, combining data from the Massachusetts (Angrist, Pathak, and Walters 2013) and national (Gleason et al. 2010) charter school studies. The dependent variables for the four columns are math and English/language arts test scores, reported separately for urban and nonurban charter schools.

For urban charter schools, the coefficients reveal positive and statistically significant effects across each of the subgroups we examine, with the exception of white students, for whom the charter school effect is positive and marginally significant in math and essentially zero in English/language arts. Effects are generally larger for less-advantaged students, including black and Hispanic students, those with low baseline scores, those who receive subsidized lunch, and English language learners. Special education and non-special-education students in urban charters have essentially the same test score impact estimates (for more details and updated impacts on English language learners and special education students, including effects on classification, see Setren 2015). For nonurban charter schools, we find negative and statistically significant effects for female students, white students, and those without low baseline test scores, who do not receive subsidized lunch, who are not in special education, or who are not English language learners. There are marginally positive effects in math in nonurban schools for black students and those with low baseline scores.10
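As the notes to Table 2 explain, the combined estimates are inverse-variance weighted averages of the two studies' estimates, because the underlying microdata cannot be pooled. A minimal sketch of that combination follows; the input numbers are placeholders, not values from the table.

```python
import numpy as np

def combine_inverse_variance(estimates, standard_errors):
    """Combine independent subgroup estimates (e.g., one per study) into a
    single estimate using inverse-variance weights; return the combined
    point estimate and its standard error."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    combined = np.sum(weights * estimates) / np.sum(weights)
    combined_se = np.sqrt(1.0 / np.sum(weights))
    return float(combined), float(combined_se)

# Placeholder example: two study-specific estimates for the same subgroup.
print(combine_inverse_variance([0.30, 0.22], [0.06, 0.05]))
```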

Similar Estimated Effects for Students Who Do Not Apply
Using lottery outcomes to estimate charter school effects provides a useful estimate of the advantage from charter schools for those students who applied to oversubscribed charter schools. However, the lottery studies cannot clearly tell us whether adopting approaches practiced by the most successful oversubscribed charters would help the types of students who do not apply to charter schools. For example, charter schools often try to engage parents in their child's learning; if students who do not apply to charter schools have less involved parents, these types of parental engagement strategies may not work for these students.

In fact, there are a few studies suggesting that charters also benefit those who end up in them without applying. Abdulkadiroglu et al. (2016) examine charter takeovers in New Orleans and Boston, where chronically poor-performing schools were replaced with charters, most of which follow the No Excuses pedagogy. By comparing students at schools not yet taken over with students at schools that were taken over and turned into charter schools, and excluding attendance at other charters, the authors estimate charter school effects for students who passively enroll. They calculate estimates of charter school impacts at New Orleans takeover

10 See the appendix to Chabrier, Cohodes, and Oreopoulos (2016), the NBER Working Paper version of our paper, for results for subgroups by each individual study, as well as other results by individual study.

Table 2 Per-Year Lottery Estimated Charter School Attendance Effects for Subgroups

Subgroup | Urban: Math (1) | Urban: English/Language Arts (2) | Nonurban: Math (3) | Nonurban: English/Language Arts (4)
Male | 0.228*** (0.046), N = 8,310 | 0.122*** (0.043), N = 8,180 | −0.039 (0.049), N = 4,020 | −0.046 (0.042), N = 4,050
Female | 0.299*** (0.045), N = 8,800 | 0.117*** (0.040), N = 8,690 | −0.126*** (0.045), N = 4,230 | −0.097** (0.039), N = 4,260
Black/Hispanic | 0.337*** (0.046), N = 9,460 | 0.126*** (0.042), N = 9,220 | 0.107* (0.062), N = 1,140 | 0.003 (0.055), N = 1,150
White | 0.098* (0.059), N = 3,830 | −0.005 (0.051), N = 3,790 | −0.128*** (0.036), N = 7,130 | −0.097*** (0.032), N = 7,190
Low Baseline Score | 0.289*** (0.051), N = 4,370 | 0.123** (0.050), N = 4,380 | 0.003 (0.052), N = 2,030 | 0.022 (0.051), N = 2,090
Not Low Baseline Score | 0.250*** (0.034), N = 12,200 | 0.100*** (0.030), N = 11,730 | −0.180*** (0.034), N = 5,780 | −0.130*** (0.029), N = 6,080
Subsidized Lunch | 0.315*** (0.039), N = 11,650 | 0.156*** (0.035), N = 11,500 | 0.126* (0.066), N = 1,320 | 0.075 (0.062), N = 1,340
Not Subsidized Lunch | 0.171*** (0.057), N = 5,460 | 0.042 (0.051), N = 5,370 | −0.130*** (0.037), N = 6,930 | −0.107*** (0.032), N = 6,970
Special Education | 0.246*** (0.073), N = 3,120 | 0.117 (0.074), N = 3,090 | 0.025 (0.095), N = 1,310 | −0.117 (0.093), N = 1,330
Not Special Education | 0.277*** (0.036), N = 13,990 | 0.123*** (0.032), N = 13,790 | −0.108*** (0.035), N = 6,940 | −0.074** (0.030), N = 6,990
English Language Learner | 0.382*** (0.088), N = 1,400 | 0.204** (0.090), N = 1,390 | 0.166 (0.168), N = 240 | −0.123 (0.142), N = 250
Not English Language Learner | 0.253*** (0.035), N = 15,710 | 0.101*** (0.032), N = 15,480 | −0.105*** (0.033), N = 8,000 | −0.081*** (0.029), N = 8,070

Notes: This table shows per-year two-stage least squares estimates of charter school impacts for various subgroups, by urban and nonurban schools. Standard errors (in parentheses) are clustered by student and school by grade and by year. The following studies are included in this table: the national study (Gleason et al. 2010) and Massachusetts (Angrist et al. 2013). Individual study results are estimated with the microdata. Since data security restrictions preclude combining the microdata from these two studies, the combined estimates are the inverse variance weighted average. Sample sizes are rounded to the nearest 10. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

charters of 0.36 standard deviations in math and 0.15 standard deviations in English/language arts per year of takeover charter school attendance. These estimates are similar to or larger than lottery estimates for the sample of Massachusetts urban charter schools in Angrist et al. (2013). At UP Academy Boston, Abdulkadiroglu et al. (2016) find that students who passively enroll in UP due to being grandfathered into the school have even larger English/language arts test score impacts than students who attend due to winning an admissions lottery. Students who have been grandfathered have baseline English/language arts achievement 0.24 standard deviations below that of their lottery counterparts; attendance at UP effectively closes this gap.

Indeed, evidence from the lottery studies suggests that charter schools may actually be more effective at increasing the achievement of students who are less likely to apply. In Massachusetts prior to 2011, charter applicants were slightly less likely to participate in special education programs or to qualify for a subsidized lunch and had slightly higher test scores at baseline, compared to their traditional public school counterparts (Angrist et al. 2013). However, these subgroups tend to have a larger increase in test scores relative to the counterfactual. In their study of KIPP Lynn, Angrist et al. (2012) find that students with special needs or those who have limited English proficiency experience larger positive effects in reading (0.42 and 0.27 standard deviations for students with special needs and with limited English proficiency, respectively, compared to an average of 0.12 standard deviations) and math (0.47 and 0.42 standard deviations, respectively, compared to an average of 0.35 standard deviations) for each year of attendance. They also find that the effects of attendance at KIPP Lynn are larger for students with lower baseline scores. In Boston, Walters (2014) finds that high-achieving students from higher-income families are more likely to apply to charter schools, but charter schools generate larger positive effects for disadvantaged, low-achieving, and nonwhite applicants. These results are promising because they suggest these charter schools may be good at helping the most disadvantaged among the group of disadvantaged students.

Evidence is mixed as to whether charter schools for which lottery estimates are not available—either because the schools are not oversubscribed or because lottery records are not available—are more or less effective than the charter schools included in lottery-based studies. Angrist, Pathak, and Walters (2011) find that, for Massachusetts urban charter middle and high schools, observational estimates (calculated using a combination of matching and regression) and lottery-based estimates are very similar. However, for nonurban charter middle schools, the observational and lottery-based estimates are not as close, with the observational estimates seeming to overstate the effect of charter schools. Using observational estimates, they find that for urban charter schools, positive effects are larger in the lottery sample, relative to the set of schools that are undersubscribed or have poorly documented lotteries. Following Abdulkadiroglu et al. (2011), Dobbie and Fryer (2013) also find that the observational estimates for the lottery sample are somewhat higher than for the full sample of New York City charter schools, but the difference is quite small.
In their study of KIPP middle schools, Clark Tuttle et al. (2013) find that matching-based estimates for the 10 schools in their lottery sample are similar to the matching-based estimates for all 41 study schools.

Effects on Non-Test-Score Outcomes
Most of the available research focuses on how charter school attendance affects scores on state-mandated tests, but some studies look at subsequent educational attainment and other outcomes likely linked to adult well-being (for example, Oreopoulos and Salvanes 2011). Angrist et al. (2016) find that charter attendance increases pass rates on the state high school graduation exam (which also qualifies students for state-sponsored college scholarships), as well as increasing SAT scores, advanced placement test-taking, and advanced placement scores. While charter school attendance does not result in a statistically significant increase in overall college enrollment, it shifts enrollment from two-year to four-year colleges: charter school attendance decreases immediate enrollment in a two-year college by 11 percentage points and increases immediate enrollment in a four-year college by 17 percentage points.

Dobbie and Fryer (2015) collect longer-term survey and administrative data for the earliest cohorts of the Promise Academy middle school. Six years after the admissions lottery, the authors estimate a 0.075 standard deviation increase in math achievement among youth offered admission to Promise Academy, higher college enrollment immediately following high school graduation, higher rates of immediate enrollment in a four-year college, a 10.1 percentage point drop in female pregnancy, and a 4.4 percentage point drop in male incarceration.

Together, these findings suggest that charter schools with large impacts on test scores can also change educational attainment and well-being outcomes. Charter schools without positive test score impacts may well influence other outcomes; however, there is no lottery-based evidence on longer-term outcomes for these types of charters, though Sass, Zimmer, Gill, and Booker (2016), using a matching and instrumental variables strategy, find positive effects on earnings for charter schools in Florida that have few test score gains.

Why Are Some Charter Schools Effective But Not Others?

No Excuses Studies
Lottery studies that use admissions data from identifiable schools, like KIPP Lynn, UP Academy, SEED, and the Promise Academy charter schools, allow for a more in-depth analysis of the mechanisms behind the greater effectiveness of some types of charter schools. All four of these charters boost student performance substantially (especially in math) compared to the low-performing urban schools that lottery losers attend. Because each of these charter schools targets disadvantaged areas, they also have a competitive advantage against surrounding traditional public schools. Because these charters are all trying to turn around the prospects of youth from disadvantaged neighborhoods, it is perhaps not surprising that they have adopted similar No Excuses strategies, which have been cited for decades by qualitative researchers as important for improving student performance (Dobbie and Fryer 2013). As noted earlier, these strategies include uniforms, high expectations from principals and teachers, a tightly enforced discipline code, along with

intensive tutoring, longer instruction time, regular feedback, college preparation services, and an energetic commitment to ensuring the academic success of all students. Another feature of these schools is empowered, flexible, and inspiring principals, whose presence may be necessary to implement No Excuses schools successfully (Carter 2000).

There is some question about the extent to which the No Excuses framework captures what is different about these schools. While these schools share many similarities, they also exhibit distinct differences in curricula and culture. For example, KIPP schools follow a distinctive setup, with middle schools starting in grade 5 instead of 6, students receiving “paychecks” for exhibiting good behavior that can be used for participation in school activities, and classrooms requiring students to SLANT (that is, Sit up straight, Listen, Ask questions, Nod, and Track the person speaking with your eyes). At HCZ's Promise Academy, students receive a free daily breakfast and regular instruction on character and social/emotional issues in gender-based groups, and all classrooms are equipped with smart boards. Suspension rates also differ. UP Academy and SEED report relatively high suspension rates (33.5 percent in 2013 for UP compared to a 2.8 percent state average, and 52 percent for SEED compared to a 23 percent city average), while KIPP Lynn and HCZ's Promise Academy report low suspension rates that are close to state averages (4.7 and 2.5 percent, respectively).11

Moreover, some evidence suggests that these four charter schools may spend more per student than the traditional public schools, because they receive additional funding from charitable foundations. KIPP, for example, reports that 15 percent of its annual operating expenses are covered by philanthropic contributions.12 The extent to which these revenues merely offset lower per-student funding from public sources remains a source of debate. KIPP schools, at least in general, appear to spend significantly more per student compared to traditional schools (Miron, Urschel, and Saxton 2011; Baker, Libby, and Wiley 2012), though this pattern is not observed in Boston charter schools (Angrist et al. 2016).

11 For UP Academy: “2015 Massachusetts School Report Card Overview: UP Academy Charter School of Boston,” Massachusetts Department of Elementary and Secondary Education, accessed January 21, 2016, http://profiles.doe.mass.edu/reportcard/SchoolReportCardOverview.aspx?linkid=105&orgcode=04800405&fycode=2015&orgtypecode=6&. For SEED: “SEED PCS of Washington, DC: 2014-2015 Equity Report,” District of Columbia, accessed January 21, 2016, http://learndc.org/schoolprofiles/view?s=0174#equityreport. For KIPP Lynn: “2015 Massachusetts School Report Card Overview: KIPP Academy Lynn Charter School,” accessed January 21, 2016, http://profiles.doe.mass.edu/reportcard/SchoolReportCardOverview.aspx?linkid=105&orgcode=04290010&fycode=2015&orgtypecode=6&. For Promise Academy: “Charter School Suspension Rates: Way Above District Averages,” United Federation of Teachers, accessed January 21, 2016, http://www.uft.org/files/charter-school-suspension-rates-way-above-most-district-averages. Note that, according to the UFT report, suspension rates for KIPP schools in New York City vary widely, from 0 percent (KIPP NYC Washington Heights Academy Charter School) to 23 percent (KIPP AMP).
12 For details, see KIPP, “Frequently Asked Questions,” http://www.kipp.org/faq; Goldman Sachs, “Supporting the Harlem Children's Zone,” http://www.goldmansachs.com/citizenship/goldman-sachs-gives/building-and-stabilizing-communities/hcz/; The Giving Common, “UP Education Network (Unlocking Potential Inc),” https://www.givingcommon.org/profile/1108725/up-education-network-unlocking-potential-inc/; and The SEED Foundation, “FAQs,” http://www.seedfoundation.com/index.php/about-seed/faqs. All four websites accessed January 21, 2016.

Extensive research would be needed to document and appreciate the detailed differences across these schools (for an example, see Merseth, Cooper, Roberts, Tieken, Valant, and Wynne 2009). However, the similarity in effectiveness of these charter schools suggests that it is their common set of No Excuses characteristics that matters most in boosting performance. One exception might be the higher reading score effects for SEED. Curto and Fryer (2014) suggest that this may be due to the fact that SEED is a boarding school.

What Relationships Exist between Charter School Characteristics and Effectiveness?

We combine data from three studies (Massachusetts, New York City, and the national study) for which school-specific charter effects and school characteristics are available in order to explore the relationship between school characteristics and effectiveness. We use both the school-specific effects and the school characteristics variable definitions from Dobbie and Fryer’s (2013) New York City study. Their school characteristics include five “nontraditional” inputs that are measured on a binary basis: teacher feedback, data-driven instruction, instructional time, high-dosage tutoring, and high expectations, as well as a standardized index of the five characteristics. They also include four traditional inputs: class size, per pupil expenditures, highly qualified teachers (as measured by master’s degrees), and teacher certification, along with an index that combines these four. We create equivalent variables for schools in the Massachusetts study (Angrist et al. 2013) and the national study (Gleason et al. 2010). For these two studies, our method for creating dummy variables equivalent to those in the New York City study is to estimate the median of a school characteristic (for example, per pupil expenditure) and assign a value of one to schools above the median and zero to schools below it. We are able to create fairly similar measures in the Massachusetts study, but had fewer similar input variables in the national study.13 When we combine the three studies, our sample size is large enough to use lottery-based rather than observational estimates as our outcome of interest, whereas Dobbie and Fryer (2013) had to use observational estimates, and Angrist, Pathak, and Walters (2013) use both observational and lottery estimates but with less precision than we obtain here.

In Table 3, we present results from regressing the estimated charter school effects from the studies themselves on their corresponding school characteristics as defined above, which include both traditional and nontraditional inputs. All regressions include study fixed effects and a control for school level (elementary, middle, high) and are weighted by the inverse of the outcome’s standard error. We also cluster standard errors by school to account for the fact that a handful of the charter schools in this sample have campuses serving multiple school levels.
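To make this construction concrete, the sketch below (our illustration with simulated data, not code from any of the underlying studies) shows how an above-median dummy and the weighted regression with study fixed effects and school-clustered standard errors described here might be implemented; all variable names are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy school-level data standing in for the combined sample (values are illustrative only).
    rng = np.random.default_rng(0)
    n = 60
    df = pd.DataFrame({
        "effect": rng.normal(0.1, 0.2, n),            # lottery-based effect estimate per school
        "effect_se": rng.uniform(0.05, 0.2, n),       # its standard error
        "per_pupil_expend": rng.normal(12000, 2000, n),
        "tutoring": rng.integers(0, 2, n),            # 1 = offers high-dosage tutoring
        "study": rng.choice(["MA", "NYC", "National"], n),
        "school_level": rng.choice(["elementary", "middle", "high"], n),
        "school_id": rng.integers(1, 40, n),          # campuses sharing an id form one cluster
    })

    # Convert a continuous characteristic into an above-median dummy, as described in the text.
    df["high_expenditure"] = (df["per_pupil_expend"] > df["per_pupil_expend"].median()).astype(int)

    # Regress school effects on characteristics, weighting by the inverse of the effect's
    # standard error, with study and school-level fixed effects and school-clustered errors.
    fit = smf.wls(
        "effect ~ high_expenditure + tutoring + C(study) + C(school_level)",
        data=df,
        weights=1.0 / df["effect_se"],
    ).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
    print(fit.params)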

13 In online Appendix Table 2, we describe in detail the variables and our adaptations across the underlying studies. See Chabrier, Cohodes, and Oreopoulos (2016) for the individual study results, which tend to be similar though less precisely estimated. Of the three studies whose data we combine, the most dissimilar study is the national study (Gleason et al. 2010), where the available survey variables do not map well to the constructs from the New York City study.

Table 3 Correlation between Lottery-Based Charter School Effects and Key Variables from Dobbie and Fryer (2013)

Columns (1)–(4): Math. Column (1) reports single variable regressions; columns (2)–(4) report multivariable regressions.
Columns (5)–(8): English/Language Arts. Column (5) reports single variable regressions; columns (6)–(8) report multivariable regressions.

                                        Math                       English/Language Arts

Teacher Feedback                 0.140**   0.104**                 0.050     0.023
                                (0.062)   (0.047)                 (0.047)   (0.048)
  N                                  86                                86

Differentiated Instruction       0.093     0.055                   0.106**   0.081*
  (Data Driven)                 (0.072)   (0.055)                 (0.049)   (0.046)
  N                                  82                                82

Instructional Time               0.146***  0.071                   0.078**   0.027
                                (0.051)   (0.049)                 (0.038)   (0.041)
  N                                  86                                86

High Quality Tutoring            0.260***  0.153**                 0.136***  0.073
                                (0.064)   (0.069)                 (0.050)   (0.056)
  N                                  86                                86

High Expectations                0.145**   0.080*                  0.100**   0.072*
                                (0.057)   (0.047)                 (0.042)   (0.042)
  N                                  86                                86

Index of Practice Inputs         0.109***  0.142***  0.110***      0.064***  0.067***  0.064***
                                (0.026)   (0.027)   (0.027)       (0.020)   (0.023)   (0.020)
  N                                  87                                87

Class Size                       0.015     0.063                  –0.079*   –0.053
                                (0.066)   (0.045)                 (0.047)   (0.037)
  N                                  85                                85

Per-Pupil Expenditures           0.089    –0.015                   0.086**   0.030
                                (0.055)   (0.054)                 (0.041)   (0.045)
  N                                  81                                81

Teachers with Masters            0.039     0.126***                0.049     0.088***
                                (0.062)   (0.040)                 (0.043)   (0.034)
  N                                  84                                84

Teachers with Certification     –0.020     0.034                  –0.034    –0.012
                                (0.061)   (0.044)                 (0.043)   (0.037)
  N                                  85                                85

Index of Resource Inputs         0.021     0.028     0.023         0.000     0.007     0.002
                                (0.041)   (0.026)   (0.028)       (0.025)   (0.019)   (0.019)

N, by column (1)–(8)             87   81   78   87                 87   81   78   87

Notes: This table shows estimates from regressions of school-level charter school effect estimates on school characteristics, using data from the national study (Gleason et al. 2010), Massachusetts (Angrist et al. 2013), and New York City (Dobbie and Fryer 2013). Columns (1) and (5) show results from single variable regressions; each coefficient comes from its own regression. Columns (2)–(4) and (6)–(8) show results from multivariate regressions, with the school characteristics included as indicated. Regressions are weighted by the inverse of the school-level standard error. Regressions include dummies for school levels (elementary, middle) as well as study fixed effects, and standard errors are clustered by school to account for schools with campuses at multiple grade levels. See online Appendix Table 1 for details on these studies and for notes on modifications of published point estimates that put estimates on the same scale. See online Appendix Table 2 for variable definitions across studies. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

Columns 1 and 5 include results from single variable regressions, while all other columns include multiple school characteristics. We also present results using an index of school practices, equal to a standardized sum of the school practice characteristics employed, as well as a second standardized index that summarizes the school resource inputs.
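One natural way to write the “standardized sum” that defines the index of practice inputs (our notation, not necessarily that of the underlying studies) is

\[
\mathrm{PracticeIndex}_s \;=\; \frac{P_s - \bar{P}}{\mathrm{SD}(P)}, \qquad P_s = \sum_{k=1}^{5} x_{sk},
\]

where \(x_{sk} \in \{0,1\}\) indicates whether school \(s\) employs practice \(k\) (teacher feedback, data-driven instruction, instructional time, tutoring, high expectations), and \(\bar{P}\) and \(\mathrm{SD}(P)\) are the sample mean and standard deviation of the sum across schools. The index of resource inputs is built analogously from the four resource measures.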

When each characteristic is considered separately, in both math (in column 1) and English/language arts (in column 5), all but one of the school practice inputs are positive and statistically significant (the exceptions are data-driven instruction for math and teacher feedback for English/language arts). The coefficient on the index summarizing the practice inputs, which correspond to No Excuses–style practices, is positive and precisely estimated. In math, none of the school resource variables have predictive power for charter school effects. In English/language arts, there appears to be a positive association between per pupil expenditures and school-level impacts, and the coefficient on class size is significant but in the “wrong” direction. For both subjects, the summary index of resource inputs in columns 1 and 5 has no explanatory power. The other columns include multiple characteristics and generally show that school practices remain important even when controlling for resource inputs. These findings are consistent with the results from Angrist, Pathak, and Walters (2013) and Dobbie and Fryer (2013).

Taking Location into Account

We pointed out earlier that the charters with the highest value-added locate in areas where lottery losers end up in some of the worst performing schools; conversely, charter schools with the lowest value-added are in more suburban areas, where neighboring traditional public schools do relatively well. Charter schools located in highly segregated and disadvantaged areas also tend to be No Excuses schools, while nonurban charter schools tend to emphasize other priorities, such as performing arts, interdisciplinary group projects, field work, or customized instruction. In Massachusetts, for example, no charter schools in nonurban areas identify with a No Excuses philosophy, while two-thirds of charter schools in urban areas identify as No Excuses (Angrist et al. 2013). Thus, we condition on test performance at fallback schools to explore whether the remaining variance in estimated charter school effectiveness still relates to No Excuses practices.14 We drop data for New York City (Dobbie and Fryer 2013), for which we have no information about fallback school performance, leaving us with a sample of 57 schools from the Massachusetts and national studies.

In column 1 of Table 4, we regress charter school effect estimates on a dummy variable for whether the charter is located in an urban area, while also including study fixed effects and school level dummies, again weighting by the inverse of the school effect standard error. Urban charters increase math scores by 0.28 standard deviations more than nonurban charters per year of attendance, on average. In the bivariate relationships shown in columns 2–4, we see that test scores in the fallback school as well as school practice inputs also have explanatory power for charter school impacts. Beginning in column 5, we combine the additional variables with the urban indicator.
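In stylized notation (ours, not drawn from the underlying studies), the most saturated specification that follows in Table 4 can be written as

\[
\hat{\beta}_s \;=\; \alpha_0 + \alpha_1\,\mathrm{Urban}_s + \alpha_2\,\mathrm{FallbackScore}_s + \alpha_3\,\mathrm{PracticeIndex}_s + \alpha_4\,\mathrm{ResourceIndex}_s + \gamma_{j(s)} + \delta_{\ell(s)} + \varepsilon_s,
\]

where \(\hat{\beta}_s\) is the lottery-based effect estimate for charter school \(s\), \(\gamma_{j(s)}\) and \(\delta_{\ell(s)}\) are study and school-level fixed effects, and each observation is weighted by the inverse of the standard error of \(\hat{\beta}_s\).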

14 Several others have also pointed out the importance of the fallback, or counterfactual, option in estimating program effects. See, for example, Heckman, Hohmann, Smith, and Khoo (2000) for evidence from job training, Kirkebøen, Leuven, and Mogstad (2014) for evidence from post-secondary decisions, and Kline and Walters (2015) for evidence on Head Start.

Table 4 Correlation between Lottery-Based Charter School Effects and Urban, Scores in Fallback Schools, and School Inputs

                              (1)        (2)        (3)        (4)        (5)        (6)        (7)

Panel A: Math
Urban                       0.280***                                     0.170*     0.113      0.111
                           (0.076)                                      (0.088)    (0.116)    (0.121)
Scores in the Fallback                 –0.327***                        –0.238***  –0.197**   –0.197**
  Schools                              (0.076)                          (0.090)    (0.080)    (0.080)
Index of Practice Inputs                          0.131***                           0.064      0.065
                                                 (0.032)                            (0.045)    (0.047)
Index of Resource Inputs                                      0.015                             0.008
                                                             (0.047)                           (0.030)
N                               58         57         58         58         57         57         57
R2                           0.272      0.299      0.283      0.076      0.357      0.391      0.392

Panel B: English/Language Arts
Urban                       0.145***                                     0.090      0.048      0.052
                           (0.054)                                      (0.060)    (0.070)    (0.072)
Scores in the Fallback                 –0.169**                         –0.120     –0.083     –0.084
  Schools                              (0.068)                          (0.076)    (0.080)    (0.080)
Index of Practice Inputs                          0.077***                           0.048      0.047
                                                 (0.024)                            (0.033)    (0.034)
Index of Resource Inputs                                     –0.007                            –0.010
                                                             (0.028)                           (0.021)
N                               58         57         58         58         57         57         57
R2                           0.147      0.154      0.187      0.052      0.183      0.217      0.220

Notes: This table shows estimates from regressions of school-level charter school effect estimates on school characteristics. Columns (1)–(4) show results from single variable regressions; each coefficient comes from its own regression. Columns (5)–(7) show results from multivariate regressions, with the variables included as indicated. Regressions are weighted by the inverse of the school-level standard error. Regressions include dummies for school levels (elementary, middle) as well as study fixed effects, and standard errors are clustered by school to account for schools with campuses at multiple grade levels. The following studies are included in this table: the national study (Gleason et al. 2010) and Massachusetts (Angrist et al. 2013). See online Appendix Table 1 for details on these studies and for notes on modifications of published point estimates that put estimates on the same scale. See online Appendix Table 2 for variable definitions across studies.
*** Significant at the 1 percent level. ** Significant at the 5 percent level. * Significant at the 10 percent level.

When we include average test performance at fallback schools as a conditioning variable, along with an urban indicator and index variables for practice and resource inputs, the coefficient on fallback school performance remains strongly significant in math. This finding is consistent with Figure 1, which shows a strong relationship between charter effects and test scores at fallback schools, even within urban areas. Recall that No Excuses characteristics, as proxied by the index of practice inputs, are strongly related to charter school effects.

But when we include controls for urban areas and fallback school performance, the coefficient on the index of practice inputs falls by about half (to 0.065) and becomes insignificant.15 A similar pattern holds for English/language arts scores: the coefficient on the school practice index falls by about half (from 0.077 to 0.047) and loses statistical significance. Of course, in both cases the loss of statistical significance could be due, in part, to an increase in the standard error. The estimated impact of more resource inputs remains negligible, with or without additional controls.

We rerun this analysis in Table 5, this time breaking the school practice index into specific charter school characteristics and adding high suspension rates to the list of variables, again using the school characteristic definitions from Dobbie and Fryer (2013). When we include urban and fallback school performance controls, the individual school characteristics that remain significantly correlated with charter school math effects are teacher feedback, intensive tutoring, and above-average suspension rates (significant at the 10 percent level). These variables may serve as proxies for other underlying characteristics. Notably, the importance of the high expectations variable disappears once both urban status and fallback performance are taken into account. When all of the school characteristics variables are included together in column 7, the point estimates for the tutoring and high suspension rate variables remain about the same, while the others shrink or remain negligible. Charter schools that offer intensive tutoring have math test scores 0.15 standard deviations higher, on average, for each year of charter attendance. This value is large and significant at the 10 percent level. After three years of attendance, students at these schools would have test scores almost half a standard deviation higher than lottery losers at fallback schools. Charter schools with high suspension rates have math test scores that are 0.12 standard deviations higher, on average, though this estimate is not statistically significant. For English/language arts test outcomes, only differentiated instruction is significant when included with urban and fallback school performance controls, and none of the school characteristics variables are significant when they are included together in the same regression.

Overall, once one accounts for surrounding neighborhood and school characteristics, many of the specific charter school practices are no longer associated with student improvement. The main exception is intensive tutoring. Its estimated impact remains large and relatively stable, especially in math, even when conditioning on other charter school characteristics. However, after conditioning on fallback school quality, it is nonurban schools that provide most of the variation in charter school effects used to identify the importance of tutoring. When the model in column 7 of Table 5 is estimated for the 21 urban schools only (conditioning on fallback quality), the coefficients on all school characteristics are statistically insignificant, with large standard errors. The coefficients on school characteristics when using only the 28 nonurban schools are also insignificant, except for intensive tutoring (0.254, with a standard error of 0.112).
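As a back-of-the-envelope check of the “almost half a standard deviation” figure above (our arithmetic, which simply treats the per-year estimate as constant across years),

\[
3 \ \text{years} \times 0.153\ \sigma \ \text{per year} \;\approx\; 0.46\ \sigma .
\]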

15 In other specifications we tried—with an additional squared and cubic fallback school quality term, and without the urban dummy—the coefficient for the index of practice inputs also falls by about half. For the model without the urban dummy, the coefficient is significant.

Table 5 Correlation between Lottery-Based Charter School Effects and Urban, Scores in Fallback Schools, and Detailed School Inputs

                                 (1)        (2)        (3)        (4)        (5)        (6)        (7)

Panel A: Math
Teacher Feedback               0.131**                                                            0.066
                              (0.059)                                                           (0.064)
Differentiated Instruction                0.066                                                   0.046
  (Data Driven)                          (0.070)                                                (0.066)
Instructional Time                                   0.072                                      –0.011
                                                    (0.071)                                     (0.078)
High-Quality Tutoring                                           0.185***                          0.153*
                                                               (0.068)                          (0.091)
High Expectations                                                         –0.021                –0.013
                                                                          (0.079)               (0.076)
High Suspensions                                                                     0.144*       0.120
                                                                                    (0.083)     (0.076)
Urban                          0.184**    0.114      0.104      0.097      0.181*     0.114      0.091
                              (0.088)    (0.084)    (0.089)    (0.080)    (0.105)    (0.082)    (0.112)
Scores in the Fallback        –0.220***  –0.272***  –0.240***  –0.223***  –0.242***  –0.250***  –0.204***
  Schools                     (0.083)    (0.086)    (0.092)    (0.080)    (0.088)    (0.086)    (0.074)
N                                 56         55         56         56         57         50         49
R2                             0.411      0.403      0.401      0.460      0.358      0.469      0.546

Panel B: English/Language Arts
Teacher Feedback               0.017                                                             –0.063
                              (0.057)                                                           (0.071)
Differentiated Instruction                0.124**                                                 0.071
  (Data Driven)                          (0.054)                                                (0.064)
Instructional Time                                   0.040                                      –0.009
                                                    (0.046)                                     (0.064)
High-Quality Tutoring                                           0.101                             0.084
                                                               (0.067)                          (0.105)
High Expectations                                                          0.073                  0.109
                                                                          (0.071)               (0.076)
High Suspensions                                                                     0.095        0.111
                                                                                    (0.062)     (0.077)
Urban                          0.092      0.025      0.052      0.047      0.055      0.047     –0.046
                              (0.063)    (0.053)    (0.053)    (0.056)    (0.069)    (0.056)    (0.062)
Scores in the Fallback        –0.117     –0.148**   –0.124     –0.112     –0.099     –0.185***  –0.154*
  Schools                     (0.079)    (0.068)    (0.076)    (0.078)    (0.083)    (0.070)    (0.087)
N                                 56         55         56         56         57         50         49
R2                             0.182      0.250      0.198      0.226      0.199      0.284      0.371

Notes: This table shows estimates from regressions of school-level charter school effect estimates on school characteristics. Regressions are weighted by the inverse of the school-level standard error. Regressions include dummies for school levels (elementary, middle) as well as study fixed effects, and standard errors are clustered by school to account for schools with campuses at multiple grade levels. The following studies are included in this table: the national study (Gleason et al. 2010) and Massachusetts (Angrist et al. 2013). See online Appendix Table 1 for details on these studies and for notes on modifications of published point estimates that put estimates on the same scale. See online Appendix Table 2 for variable definitions across studies. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

This evidence in support of tutoring is of course only suggestive, based on analysis of correlations rather than on the randomized provision of tutoring services. However, the potential importance of intensive tutoring is in line with recent quasi-experimental and experimental studies that find large increases in student performance from tutoring, delivered either as part of a package of school reforms or on its own. Kraft (2015) uses two quasi-experimental methods to estimate the impact of implementing individualized tutoring classes four days a week at MATCH Charter Public High School in Boston and finds large and statistically significant impacts on English/language arts achievement. In his review of randomized experiments in education, Fryer (2016) distinguishes between low- and high-dosage tutoring, defining the latter as being tutored in groups of six or fewer for more than three days per week, or being tutored at a rate that would equate to 50 hours or more over a 36-week period. Consistent with our findings, Fryer finds that high-dosage tutoring programs have, on average, statistically significant positive treatment effects on math and reading achievement. In contrast, the meta-coefficient on low-dosage tutoring is not statistically significant for either subject. Some examples of recent randomized experiments showing gains from intensive tutoring include Lee, Morrow-Howell, Jonson-Reid, and McCrary (2012), who studied the Experience Corps® (EC) program, which places older volunteers in elementary schools to tutor reading; Fryer (2014), who studied the use of intensive tutors in fourth, sixth, and ninth grades in Houston public schools; Markovitz, Hernandez, Hedberg, and Silberglitt (2014), who evaluated the Minnesota Reading Corps, a literacy tutoring program for kindergarten through third grade students; Cook et al. (2015), who studied an intensive tutoring program serving male ninth and tenth graders in 12 public high schools in Chicago; and May et al. (2014), who evaluated an early-intervention literacy tutoring program called Reading Recovery.
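To make Fryer’s (2016) dosage threshold concrete, the following snippet (our illustration; the function and argument names are hypothetical, not from Fryer’s paper) codes the definition quoted above literally:

    def is_high_dosage(group_size: int, days_per_week: float, hours_per_36_weeks: float) -> bool:
        """High-dosage tutoring as paraphrased above: groups of six or fewer for more than
        three days per week, or a rate equating to 50 or more hours over a 36-week period."""
        return (group_size <= 6 and days_per_week > 3) or hours_per_36_weeks >= 50

    # Example: groups of four students, tutored four days per week.
    print(is_high_dosage(group_size=4, days_per_week=4, hours_per_36_weeks=40))  # True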

Conclusions

Charter schools were originally intended to serve as research laboratories for learning about best practices in education. They have since come to be viewed more as competitive alternatives to traditional public schools. But with many charters now receiving more applications than spots available, the requirement that oversubscribed charter schools admit students through lottery has unintentionally created the research setting that the charter school movement’s originators were seeking. Our purpose in this paper is not to enter the debate on whether charter schools should exist or expand: we have not discussed issues like how increased competition from charters affects traditional public schools over time, or the possible effects of charter schools on the racial/ethnic or socioeconomic mixture of students (for an example of discussion of this point in North Carolina, see Ladd et al. 2015). Instead, our purpose is to gather existing evidence from charter lotteries to learn more about the education production function.

We confirm a finding from previous studies that a sharp divide exists between the effectiveness of charter schools in urban and nonurban settings.

However, there are two important differences between the urban and nonurban charters that have been studied. One is that almost all the charter schools that have been the subject of lottery studies in disadvantaged urban areas use a No Excuses approach, while there are few No Excuses schools in nonurban settings. The other main difference is that students who attend charter schools in disadvantaged urban areas are usually being compared to students who end up in very poor performing schools, while students in charter schools in nonurban areas are being compared to students who attend better performing schools. This pattern arises because the charter schools aiming to attract students from the worst performing traditional public schools often find those students in highly segregated and disadvantaged urban neighborhoods.

Many charter schools in disadvantaged urban areas have proven to be impressively effective, often raising average test score performance by more than half a standard deviation after just two years of attendance. For less-advantaged students, including black and Hispanic students, those with low baseline scores, and English language learners, impacts are similar to or higher than impacts for the more advantaged. Other studies find corresponding improvements for longer-term outcomes, such as reductions in incarceration rates and teen pregnancies and increases in enrollment in four-year colleges (Dobbie and Fryer 2015; Angrist et al. 2016). It is unclear, however, whether other types of charter schools would deliver similarly impressive results in areas with very poor-performing traditional schools, since there are currently not enough other types of charter schools in these areas to tell. It is also unclear whether No Excuses schools would deliver similar results in nonurban areas; again, there are currently not enough of them to tell. For now, the kinds of charters that have been created in nonurban areas, such as those emphasizing performing arts, exploratory learning, or instruction tailored to different learning styles, may offer other benefits but do not appear to be improving standardized test scores.

After accounting for the charter school effect variation explained by urban status and performance levels at fallback schools, we examined which charter school characteristics most strongly correlate with the little remaining variation. In line with previous studies, we find no evidence that differences in class size, per pupil expenditures, or teacher certification explain charter school effectiveness. The No Excuses explanatory factor that remains significant after controlling for fallback school performance (even for nonurban schools only) is whether a charter has an intensive tutoring program (though the effect of high suspension rates is close to significant). Of course, the tutoring variable could be a proxy for other school differences, and the associations between effectiveness and several other characteristics are estimated imprecisely. But a push for intensive tutoring, more frequent and convenient than what is currently provided at traditional public schools and in some cases mandatory, may serve as an important complement to instruction in many different kinds of classrooms.

■ For sharing the data from and answering questions about their studies, we thank Ira Nichols-Barrer, Brian Gill, Phil Gleason, Josh Furgeson, and Christina Clark Tuttle of Mathematica Policy Research; Danielle Eisenberg of the KIPP Foundation; Josh Angrist, Parag Pathak, and Chris Walters; and Will Dobbie and Roland Fryer. We also thank Carrie Conaway of the Massachusetts Department of Elementary and Secondary Education and the National Center for Education Statistics for additional data access, and Josh Angrist, Will Dobbie, and Mike Gilane, as well as seminar participants at Teachers College Columbia University and IFN Stockholm, for helpful comments.

References

Abadie, Alberto. 2002. “Bootstrap Tests for Distributional Treatment Effects in Instrumental Variables Models.” Journal of the American Statistical Association 97(457): 284–92.
Abadie, Alberto. 2003. “Semiparametric Instrumental Variable Estimation of Treatment Response Models.” Journal of Econometrics 113(2): 231–63.
Abdulkadiroglu, Atila, Joshua D. Angrist, Susan M. Dynarski, Thomas J. Kane, and Parag A. Pathak. 2011. “Accountability and Flexibility in Public Schools: Evidence from Boston’s Charters and Pilots.” Quarterly Journal of Economics 126(2): 669–748.
Abdulkadiroglu, Atila, Joshua D. Angrist, Peter D. Hull, and Parag A. Pathak. 2016. “Charters Without Lotteries: Testing Takeovers in New Orleans and Boston.” American Economic Review 106(7): 1878–1920.
Angrist, Joshua D., Sarah R. Cohodes, Susan M. Dynarski, Parag A. Pathak, and Christopher R. Walters. 2016. “Stand and Deliver: Effects of Boston’s Charter High Schools on College Preparation, Entry, and Choice.” Journal of Labor Economics 34(2).
Angrist, Joshua D., Susan M. Dynarski, Thomas J. Kane, Parag A. Pathak, and Christopher R. Walters. 2012. “Who Benefits from KIPP?” Journal of Policy Analysis and Management 31(4): 837–60.
Angrist, Joshua D., Parag A. Pathak, and Christopher R. Walters. 2011. “Explaining Charter School Effectiveness.” NBER Working Paper 17332.
Angrist, Joshua D., Parag A. Pathak, and Christopher R. Walters. 2013. “Explaining Charter School Effectiveness.” American Economic Journal: Applied Economics 5(4): 1–27.
Baker, Bruce D., Ken Libby, and Kathryn Wiley. 2012. “Spending by the Major Charter Management Organizations: Comparing Charter School and Local Public District Financial Resources.” Boulder, CO: National Education Policy Center. http://nepc.colorado.edu/publication/spending-major-charter.
Baude, Patrick L., Marcus Casey, Eric A. Hanushek, and Steven G. Rivkin. 2014. “The Evolution of Charter School Quality.” http://harris.uchicago.edu/sites/default/files/Rivkin.paper_.pdf.
Brown, Emma. 2014. “Joy and Anguish for Parents as D.C. Releases School Lottery Results.” Washington Post, March 31. https://www.washingtonpost.com/local/education/joy-and-anguish-for-parents-as-dc-releases-school-lottery-results/2014/03/31/a04aea1e-b8ff-11e3-9a05-c739f29ccb08_story.html.
Carter, Samuel Casey. 2000. No Excuses: Lessons from 21 High-Performing, High-Poverty Schools. Washington, DC: Heritage Foundation. Available at http://eric.ed.gov/?id=ED440170.
Chabrier, Julia, Sarah Cohodes, and Philip Oreopoulos. 2016. “What Can We Learn from Charter School Lotteries?” NBER Working Paper 22390.
Chapman, Ben, and Stephen Rex Brown. 2014. “Success Academy Charter Schools Admissions Rate is Only 20%, Lower Than NYU.” New York Daily News, April 4.
Charter Schools in Perspective. No date. A joint project of the Spencer Foundation and Public Agenda. Website: http://www.in-perspective.org/.
Chetty, Raj, John N. Friedman, Nathaniel Hilger, Emmanuel Saez, Diane Whitmore Schanzenbach, and Danny Yagan. 2011. “How Does Your Kindergarten Classroom Affect Your Earnings? Evidence from Project Star.” Quarterly Journal of Economics 126(4): 1593–1660.

Citizens League of Minnesota. 1998. “Chartered Schools = Choices for Educators + Quality for All Students.” http://citizensleague.org/wp-content/uploads/2013/05/424.Report.Chartered-Schools-Choices-for-Education-Quality-for-All-Students.pdf.
Clark Tuttle, Christina, Brian Gill, Philip Gleason, Virginia Knechtel, Ira Nichols-Barrer, and Alexandra Resch. 2013. KIPP Middle Schools: Impacts on Achievement and Other Outcomes. Washington, DC: Mathematica Policy Research. http://www.mathematica-mpr.com/our-publications-and-findings/publications/kipp-middle-schools-impacts-on-achievement-and-other-outcomes-full-report.
Clark Tuttle, Christina, Philip Gleason, and Melissa Clark. 2012. “Using Lotteries to Evaluate Schools of Choice: Evidence from a National Study of Charter Schools.” Economics of Education Review 31(2): 237–53.
Clark Tuttle, Christina, Philip Gleason, Virginia Knechtel, Ira Nichols-Barrer, Kevin Booker, Gregory Chojnacki, Thomas Coen, and Lisbeth Goble. 2015. “Understanding the Effect of KIPP as It Scales: Vol. I, Impacts on Achievement and Other Outcomes.” Washington, DC: Mathematica Policy Research. http://www.mathematica-mpr.com/our-publications-and-findings/publications/executive-summary-understanding-the-effect-of-kipp-as-it-scales-volume-i-impacts-on-achievement.
Cohodes, Sarah R., Elizabeth M. Setren, Christopher R. Walters, Joshua D. Angrist, and Parag A. Pathak. 2013. “Charter School Demand and Effectiveness: A Boston Update.” The Boston Foundation and New Schools Venture Fund. Available at: http://seii.mit.edu/research/study/charter-school-demand-and-effectiveness-a-boston-update/.
Cook, Philip J., Kenneth Dodge, George Farkas, Roland G. Fryer, Jr., Jonathan Guryan, Jens Ludwig, Susan Mayer, Harold Pollack, and Laurence Steinberg. 2015. “Not Too Late: Improving Academic Outcomes for Disadvantaged Youth.” Institute for Policy Research Northwestern University Working Paper WP-15-01.
Curto, Vilsa E., and Roland G. Fryer, Jr. 2014. “The Potential of Urban Boarding Schools for the Poor: Evidence from SEED.” Journal of Labor Economics 32(1): 65–93.
Dobbie, Will, and Roland G. Fryer, Jr. 2011. “Are High-Quality Schools Enough to Increase Achievement Among the Poor? Evidence from the Harlem Children’s Zone.” American Economic Journal: Applied Economics 3(3): 158–87.
Dobbie, Will, and Roland G. Fryer, Jr. 2013. “Getting beneath the Veil of Effective Schools: Evidence from New York City.” American Economic Journal: Applied Economics 5(4): 28–60.
Dobbie, Will, and Roland G. Fryer, Jr. 2015. “The Medium-Term Impacts of High-Achieving Charter Schools.” Journal of Political Economy 123(5): 985–1037.
Fiske, Edward B., and Helen F. Ladd. 2000. When Schools Compete: A Cautionary Tale. Washington, DC: Brookings Institution Press.
Fryer, Roland G., Jr. 2014. “Injecting Charter School Best Practices into Traditional Public Schools: Evidence from Field Experiments.” Quarterly Journal of Economics 129(3): 1355–1407.
Fryer, Roland G., Jr. 2016. “The Production of Human Capital in Developed Countries: Evidence from 196 Randomized Field Experiments.” Handbook of Field Experiments. Elsevier. https://www.povertyactionlab.org/handbook-field-experiments.
Furgeson, Joshua, Brian Gill, Joshua Haimson, Alexandra Killewald, Moira McCullough, Ira Nichols-Barrer, Bing-Ru Teh, Natalya Verbitsky-Savitz, Melissa Bowen, Allison Demerrit, Paul Hill, and Robin Lake. 2012. Charter School Management Organizations: Diverse Strategies and Diverse Student Impacts. Princeton, NJ: Mathematica Policy Research. http://www.mathematica-mpr.com/our-publications-and-findings/publications/charterschool-management-organizations-diverse-strategies-and-diverse-student-impacts.
Gleason, Philip, Melissa Clark, Christina Clark Tuttle, Emily Dwoyer, and Marsha Silverberg. 2010. The Evaluation of Charter School Impacts: Final Report. NCEE 2010-4029. Washington, DC: U.S. Department of Education, National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences. http://ies.ed.gov/ncee/pubs/20104029/.
Hastings, Justine S., Christopher A. Nielson, and Seth D. Zimmerman. 2012. “The Effect of School Choice on Intrinsic Motivation and Academic Outcomes.” NBER Working Paper 18324.
Heckman, James, Neil Hohmann, Jeffrey Smith, and Michael Khoo. 2000. “Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment.” Quarterly Journal of Economics 115(2): 651–94.
Hoxby, Caroline M., Sonali Murarka, and Jenny Kang. 2009. “How New York City’s Charter Schools Affect Achievement.” Cambridge, MA: The New York City Charter Schools Evaluation Project. http://users.nber.org/~schools/charterschoolseval/.
Hoxby, Caroline M., and Jonah E. Rockoff. 2004. “The Impact of Charter Schools on Student Achievement.” https://www0.gsb.columbia.edu/faculty/jrockoff/hoxbyrockoffcharters.pdf.
Junge, Ember Reichgott. 2014. “Charter School Path Paved with Choice, Compromise, Common Sense.” Phi Delta Kappan 95(5): 13–17.
Kirkebøen, Lars, Edwin Leuven, and Magne Mogstad. 2014. “Field of Study, Earnings, and Self-Selection.” NBER Working Paper 20816.

Kline, Patrick, and Christopher Walters. 2015. “Evaluating Public Programs with Close Substitutes: The Case of Head Start.” NBER Working Paper 21658.
Kraft, Matthew A. 2015. “How to Make Additional Time Matter: Integrating Individualized Tutorials into an Extended Day.” Education Finance and Policy 10(1): 81–116.
Ladd, Helen F., Charles T. Clotfelter, and John B. Holbein. 2015. “The Growing Segmentation of the Charter School Sector in North Carolina.” NBER Working Paper 21078.
Lee, Yung Soo, Nancy Morrow-Howell, Melissa Jonson-Reid, and Stacey McCrary. 2012. “The Effect of the Experience Corps Program on Student Reading Outcomes.” Education and Urban Society 44(1): 97–118.
Markovitz, Carrie E., Marc W. Hernandez, Eric C. Hedberg, and Benjamin Silberglitt. 2014. Impact Evaluation of the Minnesota Reading Corps K-3 Program. Chicago, IL. http://www.nationalservice.gov/documents/research-and-reports/2014/impact-evaluation-minnesota-reading-corps-k-3-program.
May, Henry, Heather Goldsworthy, Michael Armijo, Abigail Gray, Philip Sirinides, Toscha J. Blalock, Helen Anderson-Clark, Andrew J. Schiera, Horatio Blackman, Jessica Gillespie, and Cecile Sam. 2014. “Evaluation of the i3 Scale-up of Reading Recovery: Year Two Report, 2012–2013.” Philadelphia, PA: Consortium for Policy Research in Education. http://dx.doi.org/10.12698/cpre.2014.RR79.
Merseth, Katherine K., Kristy Cooper, John T. Roberts, Mara Casey Tieken, Jon Valant, and Chris Wynne. 2009. Inside Urban Charter Schools: Promising Practices and Strategies in Five High-Performing Schools. Cambridge, MA: Harvard Education Press.
Miron, Gary, Jessica L. Urschel, and Nicholas Saxton. 2011. “What Makes KIPP Work? A Study of Student Characteristics, Attrition, and School Finance.” New York, NY: National Center for the Study of Privatization in Education.
National Alliance for Public Charter Schools. 2015a. “A Growing Movement: America’s Largest Charter School Communities.” http://www.publiccharters.org/publications/enrollment-share-10/.
National Alliance for Public Charter Schools. 2015b. “Estimated Number of Public Charter Schools & Students, 2014–15.” http://www.publiccharters.org/publications/open-close-2015/.
National Alliance for Public Charter Schools. 2015c. State Laws on Weighted Lotteries and Enrollment Practices. http://www.publiccharters.org/publications/weighted-lotteries-paper/.
National Center for Education Statistics. 2015. “The Condition of Education 2015: Charter School Enrollment.” U.S. Department of Education. http://nces.ed.gov/programs/coe/indicator_cgb.asp (accessed January 11, 2016).
Neal, Derek. 2009. “The Role of Private Schools in Education Markets.” Chap. 26 in Handbook of Research on School Choice, edited by Mark Berends, Matthew G. Springer, Dale Ballou, and Herbert J. Walberg. Lawrence Erlbaum Associates/Taylor & Francis Group.
Nix, Naomi. 2015. “A Boston Breakthrough: UP Academy Goes from Failing to First.” The 74, August 10. https://www.the74million.org/article/a-boston-breakthrough-up-academy-goes-from-failing-to-first.
Oreopoulos, Philip, and Kjell G. Salvanes. 2011. “Priceless: The Nonpecuniary Benefits of Schooling.” Journal of Economic Perspectives 25(1): 159–84.
Pisano, Chris. 2015. “Hundreds Turn Out for Holyoke Charter School Enrollment Lottery.” WGGB, March 4. http://www.masscharterschools.org/media/news/hundreds-turn-out-holyoke-charter-school-enrollment-lottery.
Rahman, Fauzeya. 2015. “Word of Mouth Major Factor for Parents Charting Charter School Course.” Houston Chronicle, November 7.
Rotherham, Andrew J. 2011. “KIPP Schools: A Reform Triumph, or Disappointment.” Time, April 27.
Sass, Tim R., Ron W. Zimmer, Brian P. Gill, and Kevin Booker. 2016. “Charter High Schools’ Effects on Long-Term Attainment and Earnings.” Journal of Policy Analysis and Management 35(3): 683–706.
Setren, Elizabeth. 2015. “Special Education and English Language Learners in Boston Charter Schools: Impact and Classification.” School Effectiveness and Inequality Institute (SEII) Discussion Paper 2015.05. http://seii.mit.edu/wp-content/uploads/2015/12/SEII-Discussion-Paper-2015.05-Setren1.pdf.
Walters, Christopher R. 2014. “The Demand for Effective Charter Schools.” NBER Working Paper 20640.
Wiltenburg, Mary. 2015. “Uncertain Future for Thousands in Charter School Lottery.” WYPR, February 13. http://news.wypr.org/post/uncertain-future-thousands-charter-school-lottery#stream/0.
