Digging For Data In The Garden State

In January, the New Jersey Department of Education released a report titled "Living Up to Expectations: Charter Schools in New Jersey Outperforming District Schools." It consisted of a list of charter schools and their students’ aggregate proficiency rates by grade, along with comparisons to the rates of the regular public school districts in which they are located. The state then tallied the number of charters with higher rates (79 percent in language arts, 69 percent in math) and concluded, in a press release, that this represented evidence of superior performance. The conclusion was reported without scrutiny. Later that same day, NJ Governor Chris Christie formally announced his plan to expand the state’s charter school sector.

In a short post that evening, I pointed out the obvious fact that the state’s analysis was wholly inadequate to demonstrate charter performance – good, bad or indifferent – relative to comparable regular public schools. Rutgers Professor Bruce Baker did the same, and also presented a school-level analysis showing that there was no difference.

Christopher Cerf, the state’s acting education commissioner, decided to stand by the suspect results, saying, in essence, that they were imperfect but good enough to support the conclusions drawn from them.

It was an astonishing position.

Isolating the effect of charter schools versus regular public schools is an exceedingly complicated endeavor. One must account for a huge array of observable and unobservable factors, including student characteristics, attrition, selection bias, and resources. It is, of course, impossible to control for everything, and that is why the research literature on charter effects is highly complex. The better analyses rely on sophisticated statistical models, longitudinal student-level datasets, and, when possible, experimental research designs using random assignment.

The state’s analysis consisted entirely of subtraction. The conclusions drawn from it offend the most basic principles of empirical research. Christopher Cerf is a smart guy, and I’m guessing that he knows this.

But for the efforts of one columnist – Bob Braun of the New Jersey Star-Ledger – this issue would have gone largely unnoticed in the mainstream press. In a series of stories (also here, here and here), Braun pressed the state on its unfounded conclusions, and repeatedly asked that it release more detailed data, finally resorting to filing a public records request. The state delayed for weeks, and then refused to release data linking testing performance to school poverty and other student characteristics, arguing that it is under no legal obligation to produce analyses of student achievement by income (officials also claimed that some of the data do not exist). For his part, Cerf called Braun’s request for an interview “transparently silly,” and claimed that the columnist’s “anger and bias” compromised his objectivity.

Yesterday, Cerf and the state responded a bit more productively. He spoke at a state board of education meeting, acknowledging that the data “...are not what you might call nuanced,” and that the issue requires “deeper analysis.” He also announced that the state, in an effort to “increase transparency,” would release more data online and commission an independent study of NJ charter performance “as soon as humanly possible.”

Lack of nuance notwithstanding, Cerf said he stands by the original report, and repeated his conclusion that the state’s charters are outperforming its regular public schools. He specifically cited the performance of Newark charters, perhaps based on an analysis by the organization Advocates for Children of New Jersey (one that was only marginally more sophisticated than the state’s). Finally, Cerf made several references to the “relentless press attention” given to this episode, apparently referring to the Star-Ledger’s coverage.

This whole affair may be relatively unimportant in the grand scope of things, but it is still instructive. I am reminded how, back in January, Governor Christie was sent my original post on Twitter, and he responded as follows: “Just read it. Same old, warmed over union attacks sponsored by an institute named after union leader. Oh so objective! Thx”

Putting aside how strange it is to be accused of being non-objective by Chris Christie, of all people, he is of course partially correct – I do work for an institute named after a former union president. In my post, however, I specifically stated that I did not know how New Jersey charters performed this year. They may actually have done better than comparable district schools. Or they may have done worse. Or there may be no difference at all. My only point was that the analysis did not prove anything one way or the other. The name of the organization I work for doesn’t change these basic facts.

And neither will this new study that the state has commissioned. If the analysis ends up concluding that charters did indeed outperform regular public schools in NJ, I suspect (but am not certain) that Cerf will imply that he has been vindicated. But, at least to me, this is not about whether charters got higher test scores. It’s about how a state agency released a fourth-grade analysis on the same day its governor announced a policy “supported” by the results of that analysis. It’s about their standing by their study, even now, when everyone who has even a passing familiarity with research methodology knows that it proves nothing.

And, finally, it’s about the ever-growing politicization of education research. Governor Christie’s claim (on Twitter) that my pro-union viewpoint somehow invalidates my characterization of the facts was tactless and dismissive. But, in a sense, I am forced to admit that I’m beginning to feel the same way. Even knowing nothing about it, I have little faith in the forthcoming “independent” analysis. This may not be fair, but I have trouble believing that, given the political consequences at this point, the state would ever release a report that showed charter schools’ test scores are no different or worse than those of comparable regular public schools. I am suspicious that they will draw grand conclusions from small effect sizes, or that they will once again rely on simplistic methods, instead of the models that are necessary for making these comparisons. And I will not be satisfied until the data are made public, so that I and other researchers can perform our own analyses.

In short, even though I always try to respond with substance rather than ad hominem arguments, I may be just as guilty of pre-judging. And that makes me sad.

But there’s at least one crucial difference. Governor Christie and Commissioner Cerf are state officials who are supposed to put their personal biases aside in choosing the highest-quality evidence to guide their decisions on what’s best for millions of New Jersey schoolchildren.

I am just a researcher at an institute named after a union leader.