In a previous post about seniority-based layoffs, I argued that, although seniority may not be the optimal primary factor upon which to base layoff decisions, we do not yet have an acceptable alternative in most places—one that would permit the “quality-based” layoffs that we often hear mentioned. In short, I am completely receptive to other layoff criteria, but insofar as new teacher evaluation systems are still in the design phase in most places, states and districts might want to think twice before chucking a longstanding criterion that has (at least some) evidence of validity before they have a workable replacement.
To its credit, TNTP’s policy brief relies heavily on an all-too-often-ignored source of wisdom on teacher policy issues: teachers themselves. They surveyed 9,000 teachers in “two large urban districts,” soliciting opinions about seniority-based layoffs and possible alternatives. But these were voluntary surveys (the total response rate was roughly 40 percent), and TNTP makes no mention of the non-random nature of these samples, or of how this may have distorted the results.
Instead, the report makes statements such as: “Teachers in these two districts overwhelmingly rejected quality-blind layoffs.” This is an irresponsible presentation of survey results, the same one TNTP made in “The Widget Effect,” a previous report. (In that case, they presented results from a voluntary, online survey of teachers in 14 districts with no discussion of how those data were unlikely to be an accurate reflection of the views of all teachers in those districts.) Now, it’s possible that a majority of teachers in these two districts actually do oppose the use of seniority as a criterion in layoffs, but these results don’t prove anything either way. It is highly likely that teachers who chose to respond hold different opinions than those who did not (one might speculate that those with negative views of seniority were most likely to respond), and a few simple statistical procedures could have examined these issues.
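To make the point concrete, here is a minimal sketch of one such procedure: post-stratification, i.e., reweighting respondents’ answers by the district’s known composition. Every number below (the experience bands, shares, and opinion rates) is invented purely for illustration; the sketch shows only how a skewed respondent pool can shift a headline percentage.

```python
# Hypothetical illustration of post-stratification as a non-response check.
# All figures are invented; none come from the TNTP surveys.

# Suppose the district knows its true share of teachers by experience band,
# and the voluntary survey over-represents newer teachers.
population_share = {"0-5 yrs": 0.30, "6-15 yrs": 0.40, "16+ yrs": 0.30}
sample_share     = {"0-5 yrs": 0.50, "6-15 yrs": 0.35, "16+ yrs": 0.15}

# Share of respondents in each band who opposed seniority-based layoffs
# (again, invented for illustration).
oppose_rate = {"0-5 yrs": 0.80, "6-15 yrs": 0.55, "16+ yrs": 0.30}

# Naive estimate: the raw sample average, ignoring who actually responded.
naive = sum(sample_share[g] * oppose_rate[g] for g in sample_share)

# Reweighted estimate: weight each band's opinion by its true population share.
reweighted = sum(population_share[g] * oppose_rate[g] for g in population_share)

print(f"naive estimate:      {naive:.1%}")
print(f"reweighted estimate: {reweighted:.1%}")
```

With these made-up numbers, the naive figure overstates opposition by nearly nine percentage points, which is exactly the kind of gap a responsible survey report would check for and disclose.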
(For the record, the report also repeats the incorrect argument that seniority-based layoffs disproportionately affect higher-poverty schools, which I discuss here and here, but it is diligent in its treatment of the literature on teacher experience.)
It’s still interesting, despite the likely selection bias in the sample, to see what the surveyed teachers have to say—we don’t need to know what the “typical teacher” thinks in order to get some good ideas about how a new layoff system might be designed. Here’s a breakdown of the percentage of teachers in each district who said that they would support various layoff criteria (taken directly from the report):
Classroom management and teacher attendance rank at the top in both districts. “Instructional performance based on evaluation rating,” seniority, and school leadership roles also rank highly.
Based on these results, TNTP proposes a “roadmap” for a new layoff formula consisting of these factors, all but one (seniority) averaged across teachers’ past three years (see the report for more details about the scoring and weighting, as I will be focusing on the criteria they offer). Here’s the breakdown of factors and weights in this illustrative model:
Let’s start with the most lightly weighted factors that TNTP proposes in addition to seniority (the authors assert that the weights are just suggestions and might change based on feedback and research). The first is attendance (20 percent). This measure has the virtue of being easily measured (and probably readily available in most places), but it carries the risk of perverse incentives and unfairness. The rationale for attendance, according to TNTP, is that teachers who are absent more often “hurt” student performance, since their students lose more days of instruction (and substitute teachers are not nearly as effective). Perhaps so, but if our system is supposed to measure teacher quality, it hardly seems fair to risk penalizing teachers for being ill, especially since, in places where attendance doesn’t vary much, the difference between getting fired and keeping one’s job might come down to a single sick day. Yes, some teachers, like all workers, abuse their sick days, but there would be no way to tell the difference. In addition, attaching stakes to the use of sick days creates an incentive for teachers to come to work when they may be contagious, benefiting no one. In short, it’s an idea, but I remain unconvinced it’s a good one.
Another 20 percent in the model consists of taking on extra responsibilities. This seems sensible—teachers who take on additional duties might be viewed as more “valuable” to the school, and given some credit for that service. Although there might be unintended consequences (teachers volunteering for duties they aren’t qualified for), and a few measurement issues to work out (how to compare different duties that vary widely in their time and skill requirements), I think this is a smart idea (one that could potentially be implemented immediately).
But the bulk of the proposed TNTP formula comes down to teacher evaluations, which comprise 60 percent of their formula (in schools/districts that maintain separate measures of classroom management skills, 20 of this 60 percent would consist of this sub-rating). Whether or not classroom management is “pulled out” of the evaluation and given extra weight, evaluation scores are a logical choice for the primary factor in any new layoff formula.
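For illustration, a composite along the lines the report describes might be sketched as follows. The 60/20/20 weights are TNTP’s suggested figures; the 0-to-1 scales, field names, and sample teachers are my own invented placeholders, and the sketch omits seniority and the three-year averaging the report also calls for.

```python
# A minimal sketch of the kind of weighted composite TNTP's roadmap describes.
# Only the 60/20/20 split comes from the report; everything else here
# (scales, field names, sample data, the ranking rule) is hypothetical.

WEIGHTS = {"evaluation": 0.60, "attendance": 0.20, "extra_duties": 0.20}

def layoff_score(teacher):
    """Composite on a 0-1 scale; lower scores are laid off first (assumed rule)."""
    return sum(WEIGHTS[k] * teacher[k] for k in WEIGHTS)

teachers = [
    {"name": "A", "evaluation": 0.90, "attendance": 0.70, "extra_duties": 0.50},
    {"name": "B", "evaluation": 0.60, "attendance": 0.95, "extra_duties": 0.90},
]

# Rank from lowest composite (first to be laid off) to highest.
ranked = sorted(teachers, key=layoff_score)
for t in ranked:
    print(t["name"], round(layoff_score(t), 3))
```

Note how the 60 percent weight on evaluations dominates: teacher B has better attendance and more extra duties, yet still scores below teacher A, which is why the reliability of the evaluation measure matters so much.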
As mentioned above, however, most current evaluation systems are inadequate to the task, at least at this time, and initiatives to develop better systems are still underway. It will be some time before these systems are operational, and even longer if three years of data are to be used (as TNTP wisely suggests).
Now, here’s the kicker.
The TNTP report acknowledges that new evaluation systems aren’t ready for use, but argues that districts should go ahead and use their current evaluation systems for layoffs anyway. I find this curious indeed, given the aforementioned “Widget Effect” report, in which TNTP lambasted the 14 districts it examined for having virtually useless evaluation systems, since they gave “satisfactory or above” ratings to 98 percent of teachers. (For the record, despite that report’s failure to mention this in its executive summary, the 98 percent finding applied only to tenured teachers; ratings for probationary teachers were much worse, though TNTP reported only partial results for them.)
Putting aside how strange it is for TNTP, after releasing the “Widget Effect,” to advocate for using evaluation scores in high-stakes decisions, their argument seems to be that we can be sure that the few teachers who get “unsatisfactory” ratings are really bad, and we might as well use this information. It sounds reasonable, but there’s a serious problem.
The fact that so many tenured teachers get “satisfactory” ratings suggests that many principals are not currently performing meaningful evaluations (that is, if the 14 “Widget” districts are representative, an untested assumption that pervades the coverage the report receives). If so, there is systematic bias in most current evaluation systems: there is no way to know whether the teachers who do get “unsatisfactory” ratings are simply those whose supervisors “actually evaluate them.” Layoff decisions based on these evaluations would then be tantamount to punishing schools with administrators who take evaluation seriously, resulting in a disproportionate number of layoffs and turnovers in those schools (and, by the way, among non-tenured teachers, who get worse ratings, perhaps because they’re “actually” being evaluated). More generally, there is little reason to believe (and no evidence to support) the argument that evaluation scores with this level of systematic bias are preferable to current formulas, whether in terms of how many “good teachers” avoid being laid off or in terms of simple fairness.
Still, overall, you have to give TNTP credit for actually putting something out there; the report is at least a starting point, with a few good ideas on both possible measures and configuration. And there is absolutely a strong case for incorporating reliable “quality-based” measures into layoff formulas, with heavy weights. In places that currently have such measures, this is an option right now. But, at least as far as I can tell, that’s not the case in most states and districts, and the “anything is better than what we have now” argument is not persuasive. It invites the kind of overly rushed, slapdash policymaking that is the antithesis of “data-driven.”
I am therefore right back where I started, asking the same question: Aren’t most of the states and districts currently moving to eliminate seniority-based layoffs doing so without a viable alternative?