Learning From Teach For America
There is a small but growing body of evidence about the (usually test-based) effectiveness of teachers from Teach For America (TFA), an extremely selective program that trains and places new teachers in mostly higher-needs schools and districts. Rather than review this literature paper-by-paper, which has already been done by others (see here and here), I’ll just give you the super-short summary of the higher-quality analyses, and quickly discuss what I think it means.*
The evidence on TFA teachers focuses mostly on comparing their effect on test score growth vis-à-vis other groups of teachers who entered the profession via traditional certification (or through other alternative routes). This is no easy task, and the findings do vary quite a bit by study, as well as by the group to which TFA corps members are compared (e.g., new or more experienced teachers). One can quibble endlessly over the methodological details (and I’m all for that), and this area is still underdeveloped, but a fair summary of these papers is that TFA teachers are no more or less effective than comparable peers in terms of reading tests, and sometimes but not always more effective in math (the differences, whether positive or negative, tend to be small and/or only surface after 2-3 years). Overall, the evidence thus far suggests that TFA teachers perform comparably, at least in terms of test-based outcomes.
Somewhat in contrast with these findings, TFA has been the subject of both intensive criticism and fawning praise. I don’t want to engage this debate directly, except to say that there has to be some middle ground on which a program that brings talented young people into the field of education is not such a divisive issue. I do, however, want to make a wider point specifically about the evidence on TFA teachers – what it might suggest about the current push to “attract the best people” to the profession.
This goal – recruiting and retaining talented people in teaching – is shared by most everyone, but it is among the most central emphases of the diverse group that might be called market-based reformers. Their idea is to change compensation structures, performance evaluations and other systems in order to create the kind of environment that will be appealing to high-achieving, less risk-averse people, as well as to ensure that those who aren't cut out for the job are compelled to leave. This will, so the argument goes, create a “dynamic profession” more in line with the high-risk, high-reward model common among the private-sector firms competing for the same pool of young workers.
No matter your feelings on TFA, it’s more than fair to say that their corps members fit this profile perfectly. On paper, they aren’t just "top third," but top third of the top third. TFA cohorts enter the labor market having been among the highest achievers in the best colleges and universities in the nation. Getting accepted to the program is very, very difficult. Those who make it are not only service-oriented, but also smart, hard-working and ambitious. They are exactly the kind of worker that employers crave, and that market-based reformers have made it a central purpose to attract to the profession.
Yet, at least by the standard of test-based productivity, TFA teachers really don’t do better, on average, than their peers, and when there are demonstrated differences, they are often relatively small and concentrated in math (the latter, by the way, might suggest the role of unobserved differences in content knowledge). Now, again, there is some variation in the findings, and the number and scope of these analyses are limited – we’re nowhere near some kind of research consensus on these comparisons of test-based productivity, to say nothing of other sorts of student outcomes.
(It’s also very important to note that, for all we know, TFA teachers would get better results with more extensive preparation. After all, even the most well-designed five-week training regimen would have trouble preparing teachers for placement in some of the highest-needs schools and districts in the nation.)
Still, even these admirable young people, who could probably have their choice of jobs outside education, end up being just hard-working teachers, struggling to manage classrooms, plan lessons and get as much learning as possible from their students, often under less-than-ideal conditions. This squares with the related literature showing that the majority of measurable pre-service characteristics, such as the selectivity of undergraduate institution, GPA, etc., are, at best, inconsistently predictive of future classroom performance (at least as measured by growth model estimates). The variation within these groups completely overwhelms the variation between them.
This is one of the reasons why, whenever I hear someone talk about the need to “attract the best people” to teaching, I wonder how they conceptualize the “best people.” In most cases, they’re talking about the kind of folks who come through TFA.
And I’m all for getting these people into teaching – we should have as many of them in classrooms as we can. Sure, in TFA's case, they commit to only a couple of years and most do leave, but some do stay beyond that commitment, and it's worth noting that attrition and mobility are also extremely high among traditionally-certified teachers who, like TFA'ers, work in high-needs schools and districts. Moreover, it's not irrelevant that many of the TFA teachers who leave the classroom pursue leadership positions in education (positions which many teachers believe require classroom experience).
But, to me, one of the big, underdiscussed lessons of TFA is less about the program itself than what the test-based empirical research on its corps members suggests about the larger issue of teacher recruitment. Namely, it indicates that "talent" as typically gauged in the private sector may not make much of a difference in the classroom, at least not by itself. This doesn't necessarily mean that market-based policies won't lure great teachers, but it does suggest that, if we’re going to enact massive changes in personnel policy to attract a certain “type” of person to teaching, we might reexamine our assumptions on who we’re trying to attract and what they want.
- Matt Di Carlo
* Probably the most rigorous study in this area, albeit one that is not easily generalized, is this Mathematica evaluation (later published in a peer-reviewed journal), which exploits random assignment of students to classrooms. A few high-quality examples of quasi-experimental treatments include: this published analysis of New York City teachers; this paper, which was also subsequently published and also used NYC data; and this recent working paper comparing alternatively- and traditionally-certified teachers in Florida. Finally, a 2009 working paper was the first to compare TFA teachers placed in high schools.