Teacher Evaluations And Turnover In Houston

We are now entering a period in which we might start to see a lot of studies released about the impact of new teacher evaluations. This incredibly rapid policy shift, perhaps the centerpiece of the Obama Administration’s education efforts, was sold based on illustrations of the importance of teacher quality.

The basic argument was that teacher effectiveness is perhaps the most important factor under schools’ control, and that the best way to improve that effectiveness was to identify and remove ineffective teachers via new teacher evaluations. Without question, there was a logic to this approach, but dismissing or compelling the exits of low-performing teachers does not occur in a vacuum. Even if a given policy causes more low performers to exit, the effects of this shift can be attenuated by turnover among higher performers, not to mention other important factors, such as the quality of applicants (Adnot et al. 2016).

A new NBER working paper by Julie Berry Cullen, Cory Koedel, and Eric Parsons addresses this dynamic directly by looking at the impact on turnover of a new evaluation system in Houston, Texas. It is an important piece of early evidence on one new evaluation system, but the results also speak more broadly to how these systems work.

The Houston policy provides a very interesting context. Teachers in Houston do not have tenure, and most work under one-year contracts. And Houston’s evaluation system, unlike many of its counterparts elsewhere, places a lot more control in the hands of principals. So, the impact of this policy is in many respects that of providing principals with more information about their teachers’ performance, and the ability to act on it (also see Rockoff et al. 2012).

Cullen et al. focus on the relationship between teacher turnover and performance before and after the implementation of the new system in Houston (called the Effective Teachers Initiative, or ETI). Put differently, the focus is on the change in the composition of teachers who exit, pre- and post-ETI.

Prior to ETI, there was a negative relationship between teacher effectiveness and exits – i.e., less effective teachers were more likely to exit than their more effective colleagues. Effectiveness here is defined in terms of validated measures of teachers’ ability to raise students’ test scores, in part because the original value-added scores, unlike the other components of the system, are available both before and after the new evaluations were implemented.

The big finding of Cullen et al. is that the relationship was stronger after the onset of the new evaluation system, with the estimated effects concentrated among low-performing teachers in schools serving low-performing students, who were more likely to exit the district than they were before ETI.

On the one hand, this suggests that the new evaluations worked as intended. Under a system in which principals were armed with better information about their teachers’ performance (full evaluation results instead of single year value-added scores), teachers who were less effective in raising test scores were more likely to exit the district (or be dismissed) post-ETI than they were prior to ETI, particularly in schools serving lower performing students. On the other hand, all exits increased under the new evaluations -- including among teachers who were rated as average and high performers. The extent to which this spike is attributable to the new evaluation system per se is unclear, but it served to “dilute” the impact on student achievement of the increase in exits among low performers. There is also some indication that higher-rated teachers were more likely to switch out of schools with low-performing students after ETI (versus before the policy), which would also attenuate the impact of the policy.

The upshot here is that Houston’s new teacher evaluation system did seem to boost differential attrition productively, but the magnitude of this increase, along with countervailing forces, was insufficient to have a meaningful effect on student achievement.

It bears emphasizing that this is not a "comprehensive" evaluation of Houston’s ETI program. It focuses on just one of the policy’s potential effects – differential attrition. It is quite possible, for example, that the evaluation has led to improvement among retained teachers. In addition, this analysis includes only teachers in tested grades and subjects.

That said, these results illustrate the fact, put crudely, that teacher labor markets have a lot of moving parts. Even if a given policy is effective in compelling more lower-performing teachers to leave, this shift can be nullified by a concurrent increase in exits (or transfers) among higher-performing teachers (just as it could be enhanced by greater retention of high performers). On a related note, the effect of any exits relies on the quality of the labor supply. Even a policy that is enormously effective in compelling the exit of low-performing teachers will have a muted impact if they are replaced with low-performing candidates (the impressive labor supply in D.C. was indeed a big factor in a similar analysis by Adnot et al. [2016]).

These arguments are hardly new or original, but they did not always play a particularly prominent role during the heated debate about evaluation reform.

In any case, this study by Cullen, Koedel, and Parsons, like most good policy analysis, illustrates the promise of new evaluations, but also the challenges. The (voluntary or involuntary) exit of low-rated teachers is clearly important, and the early evidence, including this study, suggests that new evaluation systems can compel such exits. But a lot of other things have to fall into place as well. There are interventions that might help (e.g., transfer incentives, retention and recruitment programs, etc.), but, at this point, it is prudent to acknowledge that, despite the impatience (and promises) of some policymakers and advocates, the use of teacher evaluations to shift the distribution of teacher quality is still in its early phases, and we have a lot to learn.

