Schools' Effectiveness Varies By What They Do, Not What They Are

There may be a mini-trend emerging in certain types of charter school analyses, one that seems a bit trivial but has interesting implications that bear on the debate about charter schools in general. It pertains to how charter effects are presented.

Usually, when researchers estimate the effect of some intervention, the main finding is the overall impact, perhaps accompanied by a breakdown by subgroups and supplemental analyses. In the case of charter schools, this would be the estimated overall difference in performance (usually testing gains) between students attending charters versus their counterparts in comparable regular public schools.

Two relatively recent charter school reports, however – both generally well-done given their scope and available data – have taken a somewhat different approach, at least in the “public roll-out” of their results.

The first is the widely/over-cited CREDO study, which included data on charters in 16 different states. The aggregate effect of these charters, across all states, was statistically discernible and negative in both reading and math, but the actual size of these differences was tiny.
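To see why a gap can be statistically discernible and still tiny, it may help to walk through a quick simulation. The sketch below uses purely hypothetical numbers (a made-up sample size and a true difference of one hundredth of a standard deviation), not CREDO’s actual data: with enough students, even that sliver of a difference registers as highly significant.

```python
# A minimal sketch with hypothetical numbers: simulate test-score gains for
# two very large groups whose true difference is only 0.01 standard
# deviations, then run a standard t-test on the gap.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n = 1_000_000        # hypothetical students per group (not CREDO's actual N)
true_gap = -0.01     # hypothetical charter effect, in standard deviations

charter = rng.normal(loc=true_gap, scale=1.0, size=n)  # charter students' gains
regular = rng.normal(loc=0.0, scale=1.0, size=n)       # comparison students' gains

t_stat, p_value = stats.ttest_ind(charter, regular)

# At this sample size the gap is "statistically discernible" (p far below
# .05), yet its magnitude remains trivially small.
print(f"estimated gap: {charter.mean() - regular.mean():.3f} SD")
print(f"p-value: {p_value:.2e}")
```

The point of the exercise is simply that, with samples this large, statistical significance says almost nothing about whether an effect is big enough to matter educationally.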

The researchers reported this finding – both in the press release and, of course, in the report itself – but the “featured” characterization of their results was the breakdown of charters into those that were significantly more effective (17 percent), significantly less effective (37 percent), or no different (46 percent) relative to the regular public schools to which they were compared. You’ve probably heard these proportions dozens of times.

Similarly, a more recent report – this one looking at charter management organizations (CMOs) – also found (and reported) an overall impact that was essentially nil (positive but not statistically discernible). But the two organizations that did the analysis (Mathematica and the Center on Reinventing Public Education) likewise featured the “breakdown” approach, reporting that, among the middle schools run by the 22 CMOs included in their main analysis, about half did discernibly better, with only a handful doing so by large margins. The rest either had negative effects or were not statistically different (supplemental results for a smaller sample of CMO-run high schools reached similar conclusions).

Both analyses found little or no overall charter effect, but both emphasized (for the media and public) the type of breakdown described above – how many schools or organizations did better, did worse, or showed no difference, at least in terms of test-based outcomes.

There is a very good rationale for this approach. Both CREDO and the CMO study were looking at a diverse group of schools (especially the former), and presenting the breakdown illustrates that the estimated test-based effects of these schools (and CMOs, the unit of analysis in the Mathematica/CRPE report) varied.

In contrast, had either report instead featured the overall effect (or, more accurately, non-effect), many people would have misinterpreted this to mean that none of the charters did better, or they would have taken a statistically significant impact to mean one large enough to be educationally meaningful (it was not). Either way, they might have ignored the variation, which is arguably the most important aspect of the findings.

At the same time, the “breakdown approach” in some respects discourages any conclusions about charter schools in general. The effects of virtually any type of intervention will vary – some good, some bad, some no different. It’s unlikely that there will ever be a large-scale charter school study that reaches a different conclusion, so this approach might not tell you much about that intervention’s “value.”*

So, in a sense, this particular “framework” almost ensures that any set of charter results – no matter how lackluster – can never be interpreted unfavorably. If all charter supporters need to “prove their case” is a small set of schools that marginally outperform comparable regular public schools, then they will never be disappointed. And, of course, the converse is also true: No charter study is likely to find uniformly positive impacts, which means that opponents can always point out that some don’t seem to do very well.

To be clear, my intention here is not to accuse anyone of hypocrisy, or to suggest that the overall charter results, which tend to be small or nil, show that charters aren’t working. Quite the contrary: I would argue that the “breakdown approach” is the better way to present charter findings for public consumption (though one should, of course, always review all results).

The reason is that charter schools (or the CMOs that run them) are a somewhat unusual form of educational program, in that they share a few key features, but tend to vary widely in many types of policies and practices. As I’ve argued many times, the true value of charter schools is that they enable the kind of variation in approaches that is usually not found within districts, and these policies/practices can be assessed (in terms of how they are associated with different outcomes) in a manner that can inform educational policymaking.

Presenting results as CREDO and Mathematica/CRPE did highlights this variation, whereas featuring overall estimated effects does not.**

But we should also be clear about what these CREDO/CMO breakdowns illustrate. They do not, by themselves, suggest that we should open more charter schools any more than they suggest we should close charters and replace them with regular public schools.

What they do seem to indicate is that schools’ effects on test scores may vary less by what they are (e.g., charter versus regular public school) than by what they do (e.g., specific policies and practices). Although the research attempting to explain this variation is just getting started, and personnel, “culture,” and other unmeasurable factors always play an important role, charters that get strong results seem to share certain characteristics, most notably very long school days, intensive tutoring programs, strong discipline policies and private funding (also see here, and our policy brief on charter evidence). If the research suggests anything about charter proliferation (and, again, there’s still much to be done in this area), it’s that opening more charters that don’t share these characteristics (and the resources they require) may not be a very good bet.

So, if it continues, I’m all for the presentation of charter findings in a manner that highlights their varied outcomes, so long as the interpretation is not some empty argument about “taking the good with the bad,” but rather one focused on identifying common practices associated with success, and sharing that knowledge with all schools, regardless of their governance structure.

- Matt Di Carlo

*****

* For example, in the CRPE press release about the CMO report (I received it via e-mail, but could only find it reprinted on a different site, the one to which I’ve linked), the headline was “CMOs can significantly increase students’ chances of high school graduation, college enrollment.” This characterization, while technically true (“can increase”), is less than informative – you might say the same thing about any program, no matter how negative its effects (Mathematica’s press release was straightforward).

** The same might be said for a focus on presenting results by subgroup – for example, some studies, including CREDO, that find negative or no charter effects overall also find that charters do better with lower-income students (though, in CREDO’s case, the difference [.01 standard deviations in math and reading] was too small to be meaningful, while the CMO analysis found no difference). Nevertheless, disparate estimated impacts by subgroup (and for non-testing outcomes, of course) can be important for policy purposes. It’s also worth noting that the 22 organizations were selected for inclusion in the CMO report based on their running multiple schools in multiple locations over a period of years, so the generalizability of their results to charter schools (or CMOs in general) is questionable (see here).