The Smoke And The Fire From Evaluations Of Teach For America

A recent study by the always reliable research organization Mathematica takes a look at the characteristics and test-based effectiveness of Teach For America (TFA) teachers who were recruited as part of a $50 million federal “Investing in Innovation” grant, which is supporting a substantial scale-up of TFA’s presence in U.S. public schools.

The results of this study pertain to a small group of recruits (and comparison non-TFA teachers) from the first two years of the program – i.e., a sample of 156 PK-5 teachers (66 TFA and 90 non-TFA) in 36 schools spread throughout 10 states. What distinguishes the analysis methodologically is that it exploits the random assignment of students to teachers in these schools, which ensures that any measured differences between TFA and comparison teachers are not due to unobserved differences in the students they are assigned to teach.
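To see concretely why random assignment does this work, here is a minimal simulation sketch. This is my own toy setup, not Mathematica's actual model: only the count of 36 schools comes from the study, and every other number below is an illustrative assumption. Because students are split between classrooms by lottery within each school, school-level and student-level baseline differences cancel out of the within-school TFA-versus-comparison contrast in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_schools = 36            # mirrors the study's 36 schools; everything else is made up
students_per_class = 20   # hypothetical class size

diffs = []
for _ in range(n_schools):
    school_mean = rng.normal(0, 5)        # school-level achievement differences
    tfa_effect = rng.normal(1.0, 2.0)     # assumed true TFA teacher effect
    cmp_effect = rng.normal(0.0, 2.0)     # assumed comparison teacher effect
    # Random assignment: both classrooms draw from the same student pool, so
    # baseline ability is balanced across the two rooms in expectation.
    tfa_scores = school_mean + tfa_effect + rng.normal(0, 10, students_per_class)
    cmp_scores = school_mean + cmp_effect + rng.normal(0, 10, students_per_class)
    diffs.append(tfa_scores.mean() - cmp_scores.mean())  # within-school contrast

print(f"Mean within-school TFA-minus-comparison gap: {np.mean(diffs):.2f}")
```

Note that the school mean drops out of each within-school difference entirely; averaging the contrasts across schools recovers the teacher-group gap without any bias from which students attend which school.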

The Mathematica researchers found, in short, that the estimated differences in the impact of TFA and comparison teachers on math and reading scores across all grades were modest in magnitude and not statistically discernible at any conventional level. In the earliest grades (PK-2), however, there were meaningful positive estimated differences favoring TFA teachers, though they reached statistical significance only in reading; in grades 3-5, the reading coefficient was actually negative (and not significant). Let’s take a quick look at these and other findings from the report and what they might mean.
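As a rough illustration of why modest differences can fail to reach significance in a sample of this size, consider a toy two-sample comparison. The effect size, spread, and independence assumptions here are mine, and the actual report uses a more elaborate model with clustering; the point is simply that with 66 and 90 teachers, the standard error can be large relative to a small gap.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tfa = rng.normal(0.05, 0.25, 66)   # 66 TFA teachers, small assumed true edge
non = rng.normal(0.00, 0.25, 90)   # 90 comparison teachers

t, p = stats.ttest_ind(tfa, non, equal_var=False)  # Welch's t-test
print(f"estimated gap = {tfa.mean() - non.mean():.3f} SD, t = {t:.2f}, p = {p:.3f}")
```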

Learning From Teach For America

There is a small but growing body of evidence about the (usually test-based) effectiveness of teachers from Teach For America (TFA), an extremely selective program that trains and places new teachers in mostly higher-needs schools and districts. Rather than review this literature paper-by-paper, which has already been done by others (see here and here), I’ll just give you the super-short summary of the higher-quality analyses, and quickly discuss what I think it means.*

The evidence on TFA teachers focuses mostly on comparing their effects on test score growth vis-à-vis those of other groups of teachers who entered the profession via traditional certification (or through other alternative routes). This is no easy task, and the findings vary quite a bit by study, as well as by the group to which TFA corps members are compared (e.g., new or more experienced teachers). One can quibble endlessly over the methodological details (and I’m all for that), and this area is still underdeveloped, but a fair summary of these papers is that TFA teachers are no more or less effective than comparable peers in terms of reading tests, and sometimes but not always more effective in math; the differences, whether positive or negative, tend to be small and/or to surface only after two to three years. In other words, the evidence thus far suggests that TFA teachers perform comparably, at least in terms of test-based outcomes.
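For readers unfamiliar with how these comparisons are typically made, here is a bare-bones sketch of a value-added-style regression: current scores are regressed on prior scores plus an indicator for having a TFA teacher, and the indicator's coefficient is the estimated TFA effect. This is a generic illustration, not any particular paper's specification, and all data and coefficients below are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
prior = rng.normal(0, 1, n)                  # prior-year test score (standardized)
tfa = rng.integers(0, 2, n)                  # 1 if the student's teacher is TFA
score = 0.7 * prior + 0.04 * tfa + rng.normal(0, 0.7, n)  # assumed small TFA effect

# Ordinary least squares: intercept, prior score, TFA indicator
X = np.column_stack([np.ones(n), prior, tfa])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"estimated TFA coefficient: {beta[2]:.3f} (true value set to 0.04)")
```

Real studies add many more controls (student demographics, school fixed effects, teacher experience), which is precisely where the methodological quibbling comes in.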

Somewhat in contrast with these findings, TFA has been the subject of both intense criticism and fawning praise. I don’t want to engage in this debate directly, except to say that there must be some middle ground on which a program that brings talented young people into the field of education is not such a divisive issue. I do, however, want to make a wider point specifically about the evidence on TFA teachers – what it might suggest about the current push to “attract the best people” to the profession.