Describing, Explaining And Affecting Teacher Retention In D.C.

The New Teacher Project (TNTP) has released a new report on teacher retention in D.C. Public Schools (DCPS). It is a spinoff of its “The Irreplaceables” report, which was released a few months ago, and which is discussed in this post. The four (unnamed) districts from that report are also used in this one, and their results are compared with those from DCPS.

I want to look quickly at this new supplemental analysis, not to rehash the issues I raised about “The Irreplaceables,” but rather because of DCPS’s potential importance as a field test site for a host of policy reform ideas – indeed, most of the core market-based reform policies have been in place in D.C. for several years, including teacher evaluations in which test-based measures are the dominant component, automatic dismissals based on those ratings, large performance bonuses, mutual consent for excessed teachers and a huge charter sector. There are many people itching to render a sweeping verdict, positive or negative, on these reforms, most often based on pre-existing beliefs rather than solid evidence.

Although I will take issue with a couple of the conclusions offered in this report, I’m not going to review it systematically. I think research on retention is important, and it’s difficult to produce reports with original analysis – and very easy to pick them apart. Instead, I’m going to discuss a couple of findings in the report that I think are worth examining, mostly because they speak to larger issues.

But, first, there are two things I won’t be discussing, and it’s worth quickly saying why. First, I am going to pass on talking about any of the teacher survey findings. About one-quarter of DCPS teachers completed TNTP’s survey, which queried them on various issues, including working conditions. Others can disagree, but I am not comfortable drawing conclusions about DC teachers as a whole from this non-random sample, especially given that there’s no effort to compare it with the full population of DC teachers. There’s just no way to know if and how respondents differ from non-respondents (and TNTP’s ties to DCPS make this situation even more complicated).*

Second, I can’t really talk too much about the effects of recent D.C. reforms on teacher retention. That’s because, despite a couple of TNTP’s conclusions, this analysis doesn’t tell us much of anything about the effects of recent DCPS policy changes.

Isolating the impact of a specific policy or set of policies is always very difficult, but, in this case, there’s an even more basic problem – TNTP is only looking at one year of data (2010-11). In most cases, using raw tabulations to speculate about whether a given set of recent reforms is affecting an outcome like retention requires, at the very least, looking at that outcome over time – e.g., comparing it with a baseline prior to the policy’s implementation.

There is, arguably, one partial exception in this case, and it pertains to the finding that DCPS retains twice as high a proportion of “high-performing” teachers as “low-performing” teachers, with both categories defined in terms of ratings on the district’s evaluation system (IMPACT).

It's a safe bet that this has changed in recent years, and that the primary driver of this change is, as TNTP does note (largely with euphemisms), the fact that DCPS dismisses teachers based on their IMPACT ratings. In other words, the rather unsurprising conclusion here is that firing teachers decreases retention rates.

More interesting are the rates for D.C.’s “high-performing” teachers, as well as those of their counterparts in the four other (unnamed) districts TNTP studied (the performance categories are defined differently in each district). Again, this inter-district comparison can’t really tell us much about the impact of D.C.’s reforms, including very large bonuses ($25,000) for teachers who get the top rating for two consecutive years (I am also assuming that these breakdowns include the full sample of around 3,600 DC teachers, not just those who completed the survey).**

It does, however, suggest that retention is actually higher than one might anticipate among the top-rated teachers in most of these five districts, including 88 percent in both DC and “District B,” 92 percent in “District C,” and 94 percent in “District D.” There’s always room for improvement, but, given that these are all large, diverse districts, one might characterize these results as encouraging (at least to the degree that the performance measures are on target).

(You really have to keep in mind, though, that the current recession means that any analysis of absolute retention/mobility using data from the past few years must be regarded with caution. Most basically, retention tends to be higher when the labor market is in bad shape.)

TNTP’s point, in contrast, is that the retention rates are similar between the high- and low-rated teachers in the four non-DC districts. Here, I would note that virtually every prior study of which I’m aware finds that teachers who exit do tend to be less effective (in terms of value-added) than those who remain, though the relationship often varies by experience and other teacher/school characteristics (see here, here, here and here). Actually, in four of TNTP’s five districts, retention is lower among the lower-rated teachers (and, as always, the “gaps” might be much larger with a different definition of the performance categories – e.g., TNTP’s “irreplaceables” are better described as “probably above average”; see our previous post).

Again, none of this means that the situation cannot or should not be improved, but it doesn't square with the idea that there's no association between estimated effectiveness and attrition/mobility.

Then there’s the more important issue of affecting these outcomes. TNTP likes to focus on what it calls “low-cost retention strategies,” such as “high expectations” and principals encouraging teachers to stay and recognizing their work. I don’t doubt that those factors can play a role, and there is some limited research supporting this (limited, perhaps, because these are difficult conditions to measure directly).

More generally, the literature (see here for a review) finds that the reasons teachers leave vary by characteristics and work setting. On the whole, “bread and butter” issues like salary play a fairly well-established role (though performance pay has a mixed record), as do other policies, such as induction. There is also some evidence that teacher attrition and mobility are attributable to factors less directly within the scope of policy, such as student characteristics and the proximity of the school to teachers’ homes.

One big question from TNTP’s perspective is whether the factors influencing actual or intended attrition/mobility vary by estimated effectiveness. This seems very plausible, but it is, as yet, not very well understood. There’s also, by the way, an important distinction between teachers leaving their district for another teaching job and leaving the profession altogether, in terms of both why these two forms of mobility occur and how they might be addressed (this report does not differentiate between them, almost certainly because the data do not permit doing so).

A second finding in this TNTP report worth mentioning is the distribution of teachers’ ratings by the poverty of their students. But there's a snag here: The poverty measure they use for most of these figures is self-reported. That is, rather than using district data, TNTP asks teachers how many of their students come from high-poverty backgrounds. This not only leaves room for error in the rates teachers report, but it also reintroduces the non-random sample issue (i.e., the breakdowns are limited to teachers who completed the survey).

TNTP does, however, present an additional graph (Figure 9, on page 12), which provides teachers’ value-added ratings (rather than their IMPACT ratings) by “school poverty level.” I believe – but am not certain – that this is “actual” school poverty level (taken from district datasets), rather than teachers’ self-reported rates.***

In any case, let’s just take the graph at face value, as it’s an important issue and there’s a very interesting shape to the distribution. When you sort schools into poverty quintiles (groups of 20 percent), there are somewhat comparable spreads of high-, middle- and low-rated teachers in the four highest-poverty quintiles (the 80 percent of schools with the highest poverty rates). In stark contrast, however, there are virtually no low-rated teachers in the schools within the lowest-poverty quintile, and about three times as many highly-rated teachers as in the other groups. TNTP notes (correctly, in my opinion) that some of this might be bias in the estimates, but also that it’s unlikely the differences are entirely due to model specification.****

The unequal distribution of teachers across schools with different poverty rates also squares with the research on retention – for instance, teachers tend to leave higher-poverty schools at higher rates. Similarly, a couple of recent, direct analyses of the distribution of test-based teacher effectiveness across schools find the same pattern (though it is often rather weak). Thus, if you take the graph (and the measures) at face value, the fact that higher-rated teachers appear concentrated in the lowest-poverty schools is troubling, but not entirely surprising. And it illustrates the importance of working conditions – including student and school characteristics – in shaping teacher retention.

So, overall, this TNTP report throws out a couple of interesting findings describing the situation in D.C., but they are of limited value in explaining the observed patterns in terms of the impact of recent reforms, or in suggesting how to proceed. The latter issues will have to be addressed with sharper tools, over a period of several more years.

- Matt Di Carlo

*****

* Unless I’m missing something, TNTP does not present any basic characteristics of its survey sample, to say nothing of trying to compare them with those of the larger DCPS teacher population (the same was true of the districts in the “Irreplaceables” report). This is always required (even with random samples), but it’s especially important given that TNTP is an advocacy organization, one which is very well-known in DCPS (for example, it was founded by former Chancellor Michelle Rhee), and places many teachers in the district. Attitudes toward the organization can be very contentious, and it’s certain that at least some people decided for or against completing the survey based in part on their views of TNTP and/or its policy stances.

** Note that if we did adopt the (incorrect) assumption that eyeballing these raw, single-year inter-district comparisons can tell us whether DC reforms are “working,” we might conclude that they aren’t doing much among highly-rated teachers, given that the rate is comparable or higher in the other four districts, none of which employs the full set of reforms adopted by DC. On the other hand, just to be clear, I obviously don’t doubt that teachers who receive the $25,000 bonuses are, all else being equal, more likely to stick around as a result. However, a fuller assessment of this policy would have to address very complicated questions, such as the quality of the ratings used to award the bonuses and whether the bonuses are more cost-effective than alternative ways to allocate those funds.

*** The wording of this graph’s title – “school poverty level” – seems to suggest that these may be “actual” poverty rates, insofar as the self-reported rates are not school poverty levels, but rather student poverty levels. Also, there are two possible reasons why TNTP might not use the “actual” rates for this graph (i.e., two reasons I might be wrong in assuming they do): First, they have the option but think the self-reported rates are a better measure (I doubt they think this, given the limitations); or, second, they are not able to link their teacher-level dataset to school-level data (e.g., poverty) from the district (this would occur if DCPS, perhaps for privacy reasons, doesn’t identify teachers’ schools along with their IMPACT ratings). For the record, it’s entirely possible that I’m missing something in the report that explains all this.

**** TNTP correctly acknowledges that further examination is needed to confirm these findings. I think it is worth doing. One preliminary next step along these lines (besides using the full sample for all breakdowns) would be to examine why this pattern only seems to show up in the very lowest-poverty schools – i.e., why there isn’t a steadier pattern across the other four quintiles. For example, this may be because the poverty rates of schools in the other four quintiles don’t vary that much, but there’s a large drop-off once you move into the lowest-poverty quintile. Or it may be that other factors – ones associated with poverty or with school or neighborhood dysfunction – are behind these discrepancies.