Estimated Versus Actual Days Of Learning In Charter School Studies

One of the purely presentational aspects that separate the new “generation” of CREDO charter school analyses from the old is that the more recent reports convert estimated effect sizes from standard deviations into a “days of learning” metric. You can find similar approaches in other reports and papers as well.

I am very supportive of efforts to make interpretation easier for those who aren’t accustomed to thinking in terms of standard deviations, so I like the basic motivation behind this. I do have concerns about this particular conversion -- specifically, that it overstates things a bit -- but I don’t want to get into that issue. If we just take CREDO’s “days of learning” conversion at face value, my primary, far simpler reaction to hearing that a given charter school sector's impact is equivalent to a given number of additional "days of learning" is to wonder: Does this charter sector actually offer additional “days of learning," in the form of longer school days and/or years?

This matters to me because I (and many others) have long advocated moving past the charter versus regular public school “horserace” and trying to figure out why some charters seem to do very well and others do not. Additional time is one of the more compelling observable possibilities, and, while the two aren't perfectly comparable, it fits nicely with the "days of learning" expression of effect sizes. Take New York City charter schools, for example.

In 2009, as many of us remember, a highly publicized analysis was released that focused on the performance of oversubscribed NYC charters (vis-à-vis the city’s regular public schools), and was therefore able to exploit the schools' random admission lotteries in an experimental research design. The results were packaged using a (rather misleading) conceptualization of the “Scarsdale-Harlem achievement gap," which I discussed here, but let’s see how things look using the “days of learning” metric.

The NYC lottery study found statistically discernible, positive annual impacts in math (0.09 standard deviations) and reading (0.06 standard deviations). These are modest but, in my view, educationally meaningful effect sizes, certainly large enough to warrant attention.

By the CREDO “days of learning” conversion, these coefficients are equivalent to around 40 additional days in reading and about 70 in math.
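
For readers who want to see the mechanics, here is a minimal back-of-the-envelope sketch of that conversion (my own approximation, not CREDO's actual procedure). The rate is backed out of the figure quoted later in this post -- 0.05 s.d. reported as 36 "days of learning," i.e., roughly 7.2 days per 0.01 s.d. -- so it reproduces the rounded numbers above only approximately.

    # Sketch only: rough "days of learning" conversion, assuming the rate
    # implied by the 0.05 s.d. = 36 days figure cited later in this post
    # (roughly 720 "days of learning" per full standard deviation).
    DAYS_PER_SD = 36 / 0.05

    def days_of_learning(effect_size_sd):
        """Convert an effect size (in standard deviations) to approximate days."""
        return effect_size_sd * DAYS_PER_SD

    print(round(days_of_learning(0.06)))  # reading: ~43 (reported above as "around 40")
    print(round(days_of_learning(0.09)))  # math:    ~65 (reported above as "about 70")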

However, one of the very useful (and rather rare) features of this particular NYC analysis was that it examined whether charter impacts were associated with a bunch of specific, observable school policies and practices. One of those -- one of the few that were correlated with performance -- was the length of the school day and year. The report notes that the NYC charters offered, on average, a 192-day school year, compared with 180 days in regular public schools, as well as, on average, an eight-hour day, about 90 minutes longer than that of regular public schools.

If you do the simple math, NYC's oversubscribed charters offer, on average, about 31 percent more time in school, which is the equivalent of roughly 56 days (using the 6.5-hour day of regular public schools). If we compare this with the overall impacts converted to “days of learning" above, we see that, on average, the estimated additional days of learning charters provided in reading (40) was lower than the actual additional days they offered (56). In math, it was higher (70 versus 56), but not by much. In other words, the finding that students pick up the equivalent of 40 or 70 extra "days of learning" reads a bit differently when you consider that those students were actually in school for the equivalent of roughly 50-60 extra days (on average).
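
For anyone who wants to check that arithmetic, here is a quick sketch using the average figures reported in the study (approximate averages, not exact schedules; the 6.5-hour day is the regular-public-school figure used above):

    # The "simple math" behind the comparison above, using reported averages:
    # charters: 192-day year, ~8-hour day; regular publics: 180-day year, ~6.5-hour day.
    charter_hours = 192 * 8.0     # about 1,536 instructional hours per year
    district_hours = 180 * 6.5    # about 1,170 instructional hours per year

    extra_hours = charter_hours - district_hours         # about 366 hours
    pct_more_time = 100 * extra_hours / district_hours   # about 31 percent
    extra_day_equiv = extra_hours / 6.5                   # about 56 regular-length days

    print(round(pct_more_time), round(extra_day_equiv))   # -> 31 56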

Moreover, the report found that oversubscribed NYC charters' estimated impact was not statistically significant in science and social studies. If we follow social scientific convention and interpret these non-significant results as nil, you might go so far as to say that NYC charters, despite all the additional time, were still unable to produce better outcomes in those subjects (I wouldn't make this argument, especially since the coefficients were large and positive, but it's not entirely indefensible).

Again, there's an important conceptual difference between "days of learning" as presented in reports like CREDO's and the number of hours/days in the school year. And this is just one city during one time period (I chose NYC because I couldn’t find any other single-location studies that tallied up school time [though see this excellent Mathematica report]).

Nevertheless, the "days of learning" metric that CREDO promotes receives a great deal of media attention, and, when overall findings are positive, the estimates can sound very impressive expressed in this manner. For instance, an effect of 0.05 s.d. would normally be interpreted as a modest effect size (and some would characterize it as small), but it sounds rather grand when expressed as "36 days of learning," which is almost two months of school. It's therefore important, when possible, to try to put these estimates in some kind of context.

And, more generally, the reality is that potentially important factors such as time and resources are infrequently addressed in charter school studies. This is particularly salient when it comes to high-profile charter chains such as KIPP, many of which spend more money and provide as much as 40-55 percent more time than the regular public schools in the districts where they’re located.

As I've said before, we should not -- indeed, cannot -- attempt to "explain away" the success of the small handful of operators that have been shown to be consistently effective (money and time must be spent wisely), but we should also bear in mind that there are reasons why these schools appear to work, and some of them are observable.

And I personally would love to see more researchers take on the (admittedly very difficult) task of collecting these types of school-level variables, in order to get a better sense of why charter schools' measured performance varies so widely, and how that might inform all schools.

- Matt Di Carlo

Confusing test scores with learning is the first error here. I guess you can ponder building a better test prep factory, as this opinion piece does, but... so? The writer appears to go along with the political narrative that we must reject. Sorry, but no more using test scores to mislead the public, increase donations to one think tank or another, or push a political agenda. We must reject the long-held myth that test scores matter or equal learning. If I spend extra time hitting a mule, maybe it will travel an extra mile.


Interesting. I'd like to see some analysis of whether there are similarly different educational outcomes for districts that mandate different amounts of instructional time. My hypothesis (based on some isolated examples) is that there are not, and that simply adding more days to the school calendar would not, by itself, raise scores. My guess is that you need two ingredients: a better system for instruction (which has many variables), and more instructional days.


This is a highly useful analysis, Mr. Di Carlo: thanks. How certain schools, chains, districts, and other educational jurisdictions achieve positive measurable effects, including improved test scores, is a candidate for the Holy Grail of educational research, and many people blow much wind about it without having any other effect. The import of your analysis extends well beyond the KIPP effect; it is consistent with how certain Asian countries, for example, win PISA contests (for what they are worth) without being particularly efficient or satisfactory to their citizenry and without being especially admirable in any other way.