Recent Evidence On The New Orleans School Reforms

A new study of New Orleans (NOLA) schools since Katrina, published by the Education Research Alliance (ERA), has caused a predictable stir in education circles (the results are discussed in broader strokes in this EdNext article, while the full paper is forthcoming). The study’s authors, Doug Harris and Matthew Larsen, compare testing outcomes before and after the hurricanes that hit the Gulf Coast in 2005, in districts that were affected by those storms. The basic idea, put simply, is to compare NOLA schools to those in other storm-affected districts, in order to assess the general impact of the drastic educational change undertaken in NOLA, using the other schools/districts as a kind of control group.
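In stylized terms (my own shorthand, not necessarily the authors' exact specification), the comparison works like a difference-in-differences contrast:

$$
\hat{\Delta} \;=\; \left(\bar{Y}^{\,\text{NOLA}}_{\text{post}} - \bar{Y}^{\,\text{NOLA}}_{\text{pre}}\right) \;-\; \left(\bar{Y}^{\,\text{comparison}}_{\text{post}} - \bar{Y}^{\,\text{comparison}}_{\text{pre}}\right)
$$

where $\bar{Y}$ denotes average test outcomes for each group in each period. The "effect" attributed to the reforms is the extent to which NOLA's post-storm change outpaces that of the other storm-affected districts.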

The results, in brief, indicate that: 1) aggregate testing results after the storms rose more quickly in NOLA vis-à-vis the comparison districts, with the difference in 2012 being equivalent to roughly 15 percentile points; 2) there was, however, little discernible difference in the trajectories of NOLA students who returned after the storm and their peers in other storm-affected districts (though this latter group could only be followed for a short period, all of which occurred during these cohorts' middle school years). Harris and Larsen also address potential confounding factors, including population change and trauma, finding little or no evidence that these factors generate bias in their results.

The response to this study included the typical mix of thoughtful, measured commentary and reactionary advocacy (from both “sides”). And, at this point, so much has been said and written about the study, and about New Orleans schools in general, that I am hesitant to join the chorus (I would recommend in particular this op-ed by Doug Harris, as well as his presentation at our recent event on New Orleans).

But I will nevertheless offer just a few quick points (none of which, I should say, is especially original). First, evaluating the New Orleans reforms is even more complicated and difficult than usual (e.g., the choice of comparison schools, tracking students over time pre- and post-storms, etc.). That said, I think this Harris and Larsen analysis, along with some prior evidence (e.g., CREDO 2013), strongly suggests that New Orleans schools are more effective in raising test scores now than they were prior to the hurricane(s).

One could very easily raise a number of questions about whether this improvement is accompanied by increased effectiveness in other areas (and, as usual, about the magnitude of the impact itself). These are important questions, some of which are addressed in the ERA analysis, but the test-based results are compelling. This should not be dismissed or downplayed.

Second, although I personally am not one who believes that policies or schools are successful only if their impact is stronger among disadvantaged students, this is always an important issue given the large discrepancies between subgroups, and it bears noting that the estimated test-based “effect” of the NOLA reforms appears to be somewhat mixed in this area. For example, while all student subgroups are scoring at least slightly higher on tests than they did prior to the storms, improvement is weakest among disadvantaged students. Again, this to me does not at all signal failure, nor does it necessarily detract from the apparent test-based improvement, but it is an issue that deserves greater empirical and policy scrutiny going forward, particularly in a city such as New Orleans, with its higher-poverty student population (and large selective school sector).

Third, there has been much discussion about whether the New Orleans transformation is a “model for the nation.” Putting aside the obvious point that “model” is a somewhat awkward framing for what happened after the destruction of a city, there is certainly a place here for asking what these results mean for education reform in general.

One of the interesting aspects of NOLA’s “transformation” is that it includes a pretty wide variety of policy measures that span the education tribal continuum, and this study, of course, cannot adjudicate which policies drove the results and which did not. In post-hurricane New Orleans, not only were there state takeovers and rapid charter school proliferation, but also a drastic, albeit partially temporary, increase in funding (public and private), the nullification of the collective bargaining agreement, school building reconstruction, increased school specialization, an influx of teachers and leaders, etc. (And let us not forget the intense national focus on and pressure to improve testing outcomes that has become an unfortunate reality in cities that try large-scale education reforms.)

Opinions as to which of these and other policies/factors were the primary drivers vary (usually in line with the opiner’s policy preferences). I think it is fair to assume that some of these policies, most notably the drastic reorganization of the city’s schools from a “traditional district” to its current structure, are more influential than, say, new school buildings. But it is equally plausible that factors such as the right people and sufficient funding can make a huge difference in whether or not these more sweeping changes work, and in whether they would be successful elsewhere.

Finally, the reality is that the New Orleans transformation is so extensive that it is still a bit too early to get more than an initial idea of the impact. The data for the ERA study end in 2012, only about seven years post-storm(s). During the first year or two, many schools were closed. Total enrollment in 2012 was still around 65-70 percent of what it was before the storms, and it was still growing. Even today, the organizational structure of the city’s schools remains in flux, and many of the schools operating are still quite young. In short, the system in many respects is still transitioning to a “normal” phase (to whatever degree you can call it “normal”).

The work of ERA and others will continue, and should be followed closely (as yet, there really is only a tiny handful of rigorous analyses of the NOLA reforms). In the meantime, there is certainly cause for optimism about the impact of the NOLA reforms, but also a need for patience and additional evidence before drawing strong conclusions. Short-term testing results alone are not how one should judge the reorganization of a major city school district.
