No Presentation Without Representation

I tend to comment on newly-released teacher surveys, primarily because I think the surveys are important and interesting, but also because teachers' opinions are sometimes misrepresented in our debate about education reform. So, last year, I wrote about a report by the advocacy organization Teach Plus, in which they presented results from a survey focused on identifying differences in attitudes by teacher experience (an important topic). One of my major comments was that the survey was "non-scientific" – it was voluntary, and distributed via social media, e-mail, etc. This means that the results cannot be used to draw strong conclusions about the population of teachers as a whole, since those who responded might be different from those who did not.

I also noted that, even if the sample was not representative, this did not preclude finding useful information in the results. That is, my primary criticism was that the authors did not even mention the issue, or make an effort to compare the characteristics of their survey respondents with those of teachers in general (which can give a sense of the differences between the sample and the population).

Well, they have just issued a new report, which also presents the results of a teacher survey, this time focused on teachers’ attitudes toward the evaluation system used in Memphis, Tennessee (called the “Teacher Effectiveness Measure,” or TEM). In this case, not only do they raise the issue of representativeness, but they also present a little bit of data comparing their respondents to the population (i.e., all Memphis teachers who were evaluated under TEM).

Collecting survey data is difficult work regardless of the approach, as is producing reports that present the results. Sometimes, important details fall through the cracks. Teach Plus deserves credit for stepping up their game.

In addition, the information they provide is important. Their sample, in fact, does not appear to be representative of the teachers evaluated under the TEM system, at least according to the distribution of evaluation results (which was the only variable presented). Specifically, the teachers who responded received higher ratings than the population as a whole – 78 percent of respondents received one of the top two ratings, compared with 60 percent of all teachers. The differences are not huge, but they are large enough to warrant caution.
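To get a rough sense of how unlikely this gap is under random sampling, one can run a simple one-sample proportion test. This is a minimal sketch, not anything from the report itself: the 78 and 60 percent figures are from the report, but the sample size of 1,000 is an assumption based on the report's statement that "over 1,000 teachers" responded.

```python
import math

# Hypothetical illustration (not from the report):
# could a random sample of this size plausibly show 78% top-two
# ratings when the population share is 60%?
n = 1000          # assumed sample size ("over 1,000 teachers")
p_sample = 0.78   # respondents rated TEM 4 or 5 (from the report)
p_pop = 0.60      # all evaluated Memphis teachers rated 4 or 5

# One-sample z-test for a proportion against a known population value
se = math.sqrt(p_pop * (1 - p_pop) / n)
z = (p_sample - p_pop) / se
print(round(z, 1))  # z ≈ 11.6 -- far beyond conventional thresholds
```

A z-statistic this large means the respondents almost certainly are not a random draw from the evaluated population, which is consistent with the report's own caveat about representativeness.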

The report notes: “While this is not a representative sample, it provides significant insight into the group that is arguably of most interest, those with TEM 4 and 5 ratings who are having the greatest impact on student learning.”

This is definitely true: the opinions of this sample (over 1,000 teachers) can be useful and provide insights (though the overrepresentation of teachers with TEM 4 and 5 ratings does not make the survey any more or less valid for examining the attitudes of this group versus the others).

On the other hand, of course, the disproportionate presence of higher rated teachers does carry implications for such interpretation, particularly in the case of certain questions. For example, one of the findings featured in the report is that the majority of surveyed teachers (58 percent) were either “somewhat confident” (38 percent), “quite confident” (17 percent) or “extremely confident” (3 percent) that “the teaching practices exemplified in the TEM rubric will lead to increased student achievement.”

This may sound cynical to some, but it is very plausible that higher rated teachers are more likely to endorse the validity of the TEM rubric, and thus their overrepresentation may have influenced this particular result. On the other hand, it is sometimes the case that respondents with strong negative feelings are especially motivated to complete surveys querying attitudes on that topic, which can have the opposite effect on the results for questions like this one about the validity of TEM.

It's very tough to assess these possibilities without a scientific survey, which is costly and difficult, but they are certainly important to keep in mind when viewing the results. And, by presenting data and guidance regarding the characteristics of respondents versus the population, Teach Plus gives their readers some of the information necessary for proper interpretation.

- Matt Di Carlo


You hit on a huge problem that I think would make a great topic for a full post. You state, "It’s very tough to assess these possibilities without a scientific survey, which is costly and difficult." You are correct--although I would argue it's not prohibitively costly or difficult--and that's fine if the study's results are used for information purposes only. The real problem is when survey results that are not from high-quality studies are used to support policy recommendations that otherwise have no empirical support. The NCTQ ratings are a prime example. NCTQ claims it cannot afford to do a good study; it can only afford to do what it did. That's fine--we have all been in that situation. But knowing that, the funder and researcher have a moral and ethical duty to caution about how the results should be used. This is a HUGE problem in today's climate, with every shoddy "study" being used to support some pre-ordained policy decision. Is it any wonder we end up with poor policy decisions?