If Newspapers Are Going To Publish Teachers' Value-Added Scores, They Need To Publish Error Margins Too

It seems as though New York City newspapers are going to receive the value-added scores of the city’s public school teachers, and publish them in an online database, as was the case in Los Angeles.*

In my opinion, not only will the publication serve no useful educational purpose, it is also a grossly unfair infringement on teachers' privacy. I have also argued previously that putting the estimates online may bias future results by exacerbating the non-random assignment of students to teachers (parents requesting [or not requesting] specific teachers based on published ratings), though it's worth noting that the city is now using a different model.

That said, I don't think there's any way to avoid publication, given that about a dozen newspapers will receive the data, and it's unlikely that every one of them will decline to publish. So, in addition to expressing my firm opposition, I would offer what I consider to be an absolutely necessary suggestion: if newspapers are going to publish the estimates, they need to publish the error margins too.

Value-added and other growth model scores are statistical estimates, and must be interpreted as such. Imagine that a political poll found that a politician's approval rating was 40 percent, but, due to an unusually small sample of respondents, the error margin on this estimate was plus or minus 20 percentage points. Based on these results, the approval rating might actually be abysmal (20 percent), or it might be pretty good (60 percent). Should a newspaper publish the 40 percent figure without mentioning that level of imprecision? Of course not. In fact, it should refuse to publish the result at all.
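For readers who want to see where a number like that comes from, here is a minimal sketch of the standard margin-of-error formula for a proportion, assuming a simple random sample and the usual normal approximation. The sample sizes are hypothetical, chosen only to show how a small sample inflates the margin:

```python
import math

def poll_margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95 percent margin of error for a proportion, under the usual
    normal approximation (simple random sampling assumed)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A 40 percent approval rating estimated from a reasonably sized sample...
print(round(poll_margin_of_error(0.40, 1000), 3))  # 0.03  -> about 3 points

# ...versus the same estimate from an unusually small sample.
print(round(poll_margin_of_error(0.40, 25), 3))    # 0.192 -> about 19 points
```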

Value-added estimates are no different. Classes are small, and the estimates for some teachers are based on only one or two years' worth of data. The performance of just a few outlier students can dramatically affect the estimate for a single teacher. In other words, in many cases, samples are too small to produce estimates that are even remotely reliable (there is also measurement error in the tests themselves). Moreover, even for teachers with more years of data, the imprecision in their estimates is often large.
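To illustrate the outlier problem, here is a toy example. The gain scores are invented, and a simple class average is a crude stand-in for an actual value-added model (which is far more elaborate), but the small-sample arithmetic works the same way:

```python
# Hypothetical test-score gains for a class of 23 students -- purely
# illustrative, not drawn from any real value-added data.
gains = [3, 4, 5, 4, 3, 5, 4, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 3, 4, 4, 5, 4, 3]

print(sum(gains) / len(gains))  # 4.0 -- the class's average gain

# The same class with just two (hypothetical) outlier students added:
gains_with_outliers = gains + [-20, -18]
print(sum(gains_with_outliers) / len(gains_with_outliers))  # 2.16
```

Two students out of twenty-five nearly cut the class's average gain in half, and a teacher's estimate moves with it.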

We can actually illustrate this using real data from New York City, where the average margin of error in value-added scores in 2007-08 (one of the years that will be released) was plus or minus roughly 30 percentile points. That means, for example, that a teacher scoring at the 40th percentile might actually be anywhere between the 10th and the 70th percentile. Granted, this teacher is more likely to be at the 40th percentile than at the 70th or the 10th, but it's all a matter of degree.
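Here is that interpretation spelled out as a small sketch, using the figures above and treating the reported margin as a simple symmetric interval (a simplification of how the intervals are actually constructed):

```python
def percentile_interval(score: float, margin: float) -> tuple[float, float]:
    """Interval implied by a percentile estimate and its error margin,
    clipped to the 0-100 percentile scale."""
    return max(0.0, score - margin), min(100.0, score + margin)

low, high = percentile_interval(40.0, 30.0)  # the figures cited above
print(low, high)  # 10.0 70.0

# Can we even say which side of the median this teacher falls on?
if high < 50:
    print("confidently below the median")
elif low > 50:
    print("confidently above the median")
else:
    print("can't tell")  # the interval straddles the 50th percentile
```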

The estimate for this teacher does not even allow us to have any real confidence that he or she is above or below the median, and this will probably be the case for the majority of teachers in the city. Some teachers will have smaller error margins than the average and some larger, which is precisely why the margins are so important: without this information, the estimates simply cannot be interpreted properly, and can be extremely misleading. Not only should the city's newspapers report the error margins, they should feature them prominently.

If newspapers do otherwise, the estimates are certain to be misinterpreted, and the papers will be violating not only (in my opinion) principles of fairness, but the most basic standards of accuracy as well.

- Matt Di Carlo

*****

* What the newspapers will actually receive are the city's "teacher data reports" (here's a sample of one), which do report error margins, meaning the papers will have the information. It's possible that the papers will allow readers to download the reports themselves, but, given how many teachers there are in the city, it's more likely that they will reproduce the data in a more concise, user-friendly format.