Greetings From Due Diligence, New Jersey

Earlier this week, New Jersey Governor Chris Christie announced that the state will assume control of the Camden City School District. Camden will be the fourth NJ district to undergo takeover, though this is the first time the state will strip authority from an elected local school board, which will now serve in an advisory role (and have three additional members appointed by the Governor). Over the next few weeks, NJ officials will choose a new superintendent and begin to revamp evaluations, curricula and other core policies.

Accompanying the announcement, the Governor’s office released a two-page "fact sheet," much of which is devoted to justifying this move to the public.

Before discussing it, let’s be clear about something - it may indeed be the case that Camden schools are so critically low-performing and/or dysfunctional as to warrant drastic intervention. Moreover, it's at least possible that state takeover is the appropriate type of intervention to help these schools improve (though the research on this latter score is, to be charitable, undeveloped).

That said, the "fact sheet" presents relatively little valid evidence regarding the academic performance of Camden schools. Given the sheer magnitude of any takeover decision, it is crucial for the state to demonstrate publicly that it has left no stone unturned by presenting a case that is as comprehensive and compelling as possible. However, the discrepancy between that high bar and NJ's evidence, at least that pertaining to academic outcomes, is more than a little disconcerting.

Here is the concrete evidence comprising New Jersey’s public case for the Camden takeover, in a nutshell:

  • Student proficiency rates are among the lowest in the state;
  • Graduation rates are far below the state average;
  • Per-pupil spending is substantially higher than the state average.

None of this tells you a whole lot about the actual effectiveness (or cost effectiveness) of these schools. The graduation and proficiency outcomes are absolute performance measures (and imprecise ones at that). Both are largely a function of student background - that is, they are measures of student performance, not school performance.

For example, according to the American Community Survey, median household income in Camden is roughly $25,000, compared with around $67,000 statewide (for additional perspective, the median in Newark is $32,000). It is among the poorest large cities in the U.S.

"Poverty is not an excuse" may be an effective talking point, but in the context of school performance assessments, especially those used for high-stakes decisions, it is precisely the opposite of the appropriate mindset. The whole idea of measuring school performance is to isolate schools' contribution to students' measured progress, while controlling, to the degree possible, for non-school factors such as students' social and economic backgrounds and circumstances.*

Schools cannot influence which students they are assigned, and a school that serves low-scoring students is not necessarily an ineffective school (though one could, and should, argue that those students require additional help to catch up). If students enter the system at low performance levels, schools can only control how much progress those students make while in attendance. Yet the NJ “fact sheet” does not contain even a rudimentary attempt to examine the district's growth-oriented outcomes.**
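To make the distinction concrete, here is a minimal sketch (in Python, using entirely synthetic data - no actual Camden or New Jersey figures) of how raw average scores can rank schools very differently than a growth-style comparison that adjusts for students' prior achievement and income:

```python
# Illustrative only: synthetic data and hypothetical schools "A", "B", "C".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 3000

# Simulate student background, and let students sort into schools by income.
income = rng.normal(40, 15, n)                    # household income, in $1,000s
prior = 40 + 0.3 * income + rng.normal(0, 8, n)   # incoming test score
school = np.where(income < 30, "A", np.where(income < 45, "B", "C"))

# True (simulated) effectiveness: school A, the poorest, adds the most value.
effect = pd.Series(school).map({"A": 2.0, "B": 0.0, "C": 0.0})
score = 20 + 0.6 * prior + 0.4 * income + effect + rng.normal(0, 5, n)
df = pd.DataFrame({"score": score, "prior": prior, "income": income, "school": school})

# Absolute comparison: school A looks worst, because its students are poorest.
print(df.groupby("school")["score"].mean())

# Growth-style comparison: after adjusting for prior achievement and income,
# the coefficients show B and C roughly two points *below* A - recovering the
# simulated advantage of school A that the raw means completely obscured.
model = smf.ols("score ~ prior + income + C(school)", data=df).fit()
print(model.params.filter(like="school"))
```

This is not the state's method or any district's actual model - just the simplest possible demonstration that absolute and growth-adjusted comparisons can point in opposite directions.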

And then there is the “money doesn't matter” refrain that we hear so often. It’s perfectly valid to argue that simply throwing more money at the schools won’t necessarily produce better results – how one spends that money matters enormously. However, comparing Camden's spending and proficiency rates with those of the state overall, and then using that comparison to draw the spurious conclusion that the money is being spent unwisely, suggests a less-than-impressive familiarity with contemporary education finance research.***

To reiterate, the point here is not really about whether Camden schools should be taken over, nor is this discussion intended to suggest that the choice to do so was not deliberated extensively, using a variety of different types of information. Rather, this is about the more basic fact that NJ officials have justified their decision to the public based in large part on the argument that Camden schools are severely ineffective, but their evidence doesn't really come close to supporting that conclusion.

More generally, these are the types of episodes that create sharp discomfort among those who are nervous about the ever-expanding reliance on test scores and other quantifiable outcomes in making high-stakes decisions.

The potential for data to play a productive role in educational policy depends on their being interpreted properly and used in a context-specific manner commensurate with their strengths and limitations. New Jersey has provided yet another example of how that careful use fails to materialize - and it fails so often that it is exceedingly difficult to be sanguine about the potential.

- Matt Di Carlo

*****

* This is not to say that absolute performance measures should play zero role in high-stakes decisions. For instance, it's probably not the best use of finite resources to intervene in schools in which students perform at high absolute levels but make relatively slow progress. See here for a broad-brush proposal of how to combine absolute and growth measures in school performance assessment.
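As a purely hypothetical illustration of what combining the two kinds of measures might look like (the weighting scheme and scores below are invented for this sketch, not taken from the proposal linked above):

```python
# Hypothetical sketch: blend a school's standardized absolute and growth
# scores into a single rating. The 0.7 growth weight is an arbitrary choice.
def combined_rating(absolute_z: float, growth_z: float, growth_weight: float = 0.7) -> float:
    """Weighted average of standardized absolute and growth measures."""
    return growth_weight * growth_z + (1 - growth_weight) * absolute_z

# A school with low absolute scores but strong growth still rates near average:
print(round(combined_rating(absolute_z=-1.2, growth_z=0.9), 2))  # 0.27
```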

** The most sophisticated academic performance "analysis" that one can find anywhere in the public record is buried deep in the court order (I'm not sure if that's the correct term for this document). It consists of a grand total of two individual comparisons of Camden's absolute performance outcomes with those of its "district factor group" (DFG A), a set of around 40 districts that vary widely in their characteristics (inexplicably, the DFGs do not seem to have been updated since the 2000 Census). First, one percent of Camden students are "college ready" according to their SAT scores, compared with about 11 percent in their DFG (see here for problems with interpreting SAT results). Second, without citing actual figures, the order states only that Camden's proficiency rates are "well below" the DFG average. Having undertaken the way-too-laborious process of assembling district-level proficiency rates by DFG, I confirmed, unsurprisingly, that this is true; however, Camden's rates are not dissimilar from those of several other districts within its DFG, and I suspect that if one adjusted for basic characteristics such as income, a rather different picture would emerge. There is no DFG comparison of graduation rates and, more importantly, no mention at all of growth-oriented outcomes.

*** This is not the first time New Jersey has exhibited somewhat loose evidentiary standards – see here, here and here for a few previous examples. In fairness, however, they are most certainly not alone.

*****

I'd hope that the Governor's office would make these decisions in large part by visiting the schools, talking with informed experts, and making subjective judgments as to whether the system is dysfunctional to the point of making an intervention worthwhile. The poor absolute performance indicates a potential problem, but the most compelling evidence might not be available via data.

Matt and others have done a good job demonstrating that it is very difficult to prove via data that a school or school system is "bad". However, I think that misses the point that these judgments might require significant subjectivity.

The "fact sheet" could be a good tool to give some information that motivates the possibility that there are big problems in the system. Things like video recordings of incompetent teaching, mayhem in school hallways, and interviews with clearly incompetent administrators might be more compelling evidence (assuming those problems even exist!), but, of course, it is not practical (or necessarily desirable!) to produce such evidence.

I used to be amazed when I read defenses by academics of schools that I've visited that were clearly awful. They weren't clearly awful because of bad scores. They were clearly awful because the classrooms and hallways were in a state of disarray, many of the students were not paying attention, many of the students did not understand the material, many of the teachers had no control over their classrooms, the administrators were incompetent as judged by asking them basic questions and listening to their answers, etc. It's sort of sad, even though I understand the point that the data "proves" nothing.

It would be a fascinating post to read Matt's subjective thoughts if and when he visits some of these allegedly terrible school systems. The data is great, but it is often insufficient to make optimal decisions. The answer is not to assume, implicitly or explicitly, that the status quo is the best path without data-based proof.

*****

I noticed a similar issue in Massachusetts.

The Justice Department decreed in 2011 that the state was failing ELL students. Why? They used absolute score data.

Yet the growth data shows that Massachusetts ELLs grow at faster rates than even the average white Massachusetts student.

Anyway, the DOJ action led to a settlement, soon to be fully implemented, under which 26,000 public school teachers will be required to undergo 100 hours of training. The teachers unions have tried to amend this, but to no avail.