A new Mathematica report examines the test-based impact of The Equity Project (TEP), a New York City charter school serving grades 5-8. TEP opened for the 2009-10 school year and received national attention largely due to one unusual policy: it pays teachers $125,000 per year, regardless of experience or education, plus annual bonuses (up to $25,000) for returning teachers. TEP largely offsets these unusually high salary costs by minimizing the number of administrators and maintaining larger class sizes.
As is typical of Mathematica, the TEP analysis is thorough and well-done. The school's students' performance is compared to that of similar peers with a comparable probability of enrolling in TEP, as identified with propensity scores. In general, the study's results were quite positive. Although there were statistically discernible negative impacts of attendance for TEP's first cohort of students during their first two years, the cumulative estimated test-based impact was positive, statistically significant, and educationally meaningful after three and four years of attendance. As is common in this literature, the estimated effects were stronger in math than in reading (the math effect sizes were very large in magnitude). The Mathematica researchers also present analyses of student attrition, which did not appear to bias the estimates substantially, and they show that their primary results are robust to alternative specifications (e.g., different matching techniques, score transformations, etc.).
Now we get to the tricky questions about these results: What caused them, and what can be learned as a result? That's the big issue with charter analyses in general (and with research on many other interventions): one can almost never separate the "why" from the "what" with any degree of confidence. And TEP, with its "flagship policy" of high teacher salaries, which might appeal to all "sides" in the education policy debate, provides an interesting example in this respect.