On Education Polls And Confirmation Bias

Our guest author today is Morgan Polikoff, Assistant Professor in the Rossier School of Education at the University of Southern California. 

A few weeks back, education policy wonks were hit with a set of opinion polls about education policy. The two most divergent of these polls were the Phi Delta Kappan/Gallup poll and the Associated Press/NORC poll.

This week a California poll conducted by Policy Analysis for California Education (PACE) and the USC Rossier School of Education (where I am an assistant professor) was released. The PACE/USC Rossier poll addresses many of the same issues as those from the PDK and AP, and I believe the three polls together can provide some valuable lessons about the education reform debate, the interpretation of poll results, and the state of popular opinion about key policy issues.

Overall, the results indicate that parents and the public hold rather nuanced views on testing and evaluation.

In contrast, advocates’ responses to the first two polls were predictable. The PDK poll was widely seen as more favorable to what I’ll call the “reform skeptics,” and rightfully so. The two most important data points the skeptics used to support their headlines were as follows.

First, PDK asked: “Over the last decade, there has been a significant increase in testing in the public schools to measure academic achievement. Just your impression or what you may have heard or read, has increased testing helped, hurt, or made no difference in the performance of the local public schools?” 22 percent responded “helped,” 36 percent responded “hurt,” and 41 percent responded “made no difference.”

Second, PDK asked: “Some states require that teacher evaluations include how well a teacher’s students perform on standardized tests. Do you favor or oppose this requirement?” 41 percent favored and 58 percent opposed this requirement.

At the same time as reform skeptics pumped up the results from the PDK poll, they trashed the AP poll. These critiques of the AP poll generally accepted the AP results that agreed with the skeptics’ agenda (e.g., parents think education is better now than what they received) and attacked the results that didn’t (e.g., that a majority of parents support paying teachers more based in part on test results).

The purpose of this post is not to harp on reform skeptics. Indeed, these kinds of interpretations were also made by “reformers” seeking to bolster their own positions. Rather, the point is to use them as illustrations of a broader set of problems in education reform and advocacy. To illustrate these problems, let’s take a look at the aforementioned PACE/USC Rossier poll, released earlier this week. First, though, I would note that I am not delving deeply into issues of question wording, though I suspect there is much to be learned there. (In the interest of disclosure, I am much more dubious of the PDK wording than I am of the wording on either AP/NORC or PACE/USC Rossier.)

The first problem is in the interpretation of results from multiple sources. It is often the case that different research reports will produce different findings. This is true in polling, as well. When we are making judgments about the “true” effect of an intervention or the “true” feelings of the populace, it only makes sense to consider all of the information we have in front of us, rather than cherry picking the information that matches our views and ignoring that which doesn’t. It is unfortunately all too common that those of us in education policy latch on to those results that most comport with our agendas or preconceptions.

Instead, we should really use a Nate Silver approach to public opinion, putting all of the data into the hopper and seeing what the averages and the trends tell us. In the case of standardized testing, for instance, the PDK poll suggested that few parents (22 percent) thought testing had made local schools better, noting a substantial drop from the last time the question was asked. In contrast, the AP/NORC poll found that almost no parents think their children are tested too much (11 percent) or that testing their children regularly is unimportant (6 percent).

The PACE/USC Rossier poll adds evidence here, with 66 percent of parents saying California should test students in each grade to measure progress, and 88 percent saying high school testing should either stay the same or expand to additional subjects. On average, then, it appears parents support testing for monitoring student progress. On the other hand, a look at the trendlines suggests there is more dissatisfaction with testing than there was a few years ago.
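The “Nate Silver approach” mentioned above is, at bottom, simple arithmetic. As a rough sketch only (the poll numbers and sample sizes below are entirely hypothetical, and real aggregation would also have to account for question wording, house effects, and timing), pooling several polls of the same question might look like this:

```python
# Sketch of the poll-averaging idea described above: instead of
# cherry-picking one poll, pool every available estimate of the same
# question, weighting each poll by its sample size. The figures below
# are hypothetical illustrations, not numbers from the actual polls.

def weighted_average(polls):
    """polls: list of (percent_favorable, sample_size) pairs."""
    total_n = sum(n for _, n in polls)
    return sum(pct * n for pct, n in polls) / total_n

# Three hypothetical polls asking the same question about testing:
polls = [
    (22.0, 1000),  # poll A: 22% favorable among 1,000 respondents
    (45.0, 1200),  # poll B
    (66.0, 800),   # poll C
]

print(f"Pooled estimate: {weighted_average(polls):.1f}%")  # -> 42.9%
```

The point of the exercise is that no single poll is the estimate; the pooled figure (and its trend over time) is a far better guide to opinion than whichever poll best matches one’s priors.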

The second, and related, problem is one of confirmation bias associated with epistemic closure. The cherry picking of poll results in the ways I have just described can have some serious negative consequences for the groups doing the cherry picking. Here, the prime example is the 2012 presidential campaign of Mitt Romney, which bought into the “unskewed” polls promoted by those on the right all the way up until its election night defeat. In that case, ignoring the data that ran contrary to preconceptions precluded an understanding of actual public opinion. In turn, this inhibited the campaign’s ability to be agile and move the needle in its last few weeks.

How would this play out in education? Again, the poll results can provide a good illustration. In the case of teacher evaluation, skeptics seized on the PDK finding that 58 percent of parents opposed a state requirement that teacher evaluations include how well a teacher’s students perform on standardized tests. Some interpreted this finding to mean that parents do not value standardized tests and support teachers unconditionally. This is a mistake, as it ignores the very real evidence that parents want bad teachers out of the classroom and believe test results can play a role (even a small one) in identifying those bad teachers.

As an example of this evidence, we can turn to the AP/NORC and PACE/USC Rossier polls. The AP/NORC poll found that 80 percent of parents somewhat or strongly favored making it easier for districts to fire teachers for poor performance. Furthermore, AP/NORC found that 79 percent thought changes in test scores over time should contribute a moderate amount or a great deal to teachers’ evaluations. Similarly, the PACE/USC Rossier data support the conclusion that parents think student test scores matter for evaluating teachers: 81 percent of parents said evaluations used for punishment or reward should include at least some standardized test results, and 43 percent said removing bad teachers from the classroom would have the most positive impact on school performance (more than any other option).

The third and final problem (or perhaps it is not a problem at all) is that the results from this set of polls do not fit neatly into any particular ideological boxes. My personal summary of the results from these polls goes something like this. Parents believe teachers are exceedingly important – depending on the particular poll, teachers may be seen as more important than anything else when it comes to schools’ impacts on students. When they’re choosing an ideal teacher for their child, however, parents do not think about test scores first. And when a teacher fails to perform, parents’ first response is to provide more support. But there is no denying that parents do view test scores as important for both their kids and other people’s kids, and that “bad” or “poor performing” teachers should not be in the classroom.

Taken together, these results suggest that parents will not be receptive to a slash-and-burn approach to teacher evaluation or school accountability. However, the results quite clearly indicate that there is a very real bottom line when it comes to teacher performance – everyone knows there are some bad teachers out there who probably need to go – and folks who ignore that bottom line do so at their own peril. In particular, sticking one’s fingers in one’s ears and crying “corporate reform!” may leave groups out of the policy discussions where their voices might be most useful.

- Morgan Polikoff


I think you've skipped past a critical issue in trying to interpret these polls and the role of confirmation bias.

These are polls of the public (e.g., parents), who are being asked questions that they might or might not be knowledgeable enough to answer well, or even consistently.

You seem to be taking at face value the legitimacy of these answers, without questioning the appropriateness or stability of the answers.

It is entirely possible that the answers are at least a little bit inconsistent because the wording offers different clues to non-experts about the facts about which their opinions are being solicited.

Is confirmation bias a real issue in the evaluation of evidence? Of course! Should we be careful when listening to people cite parts of polls? No question. But these particular polls might or might not be valid data in the first place.

There are legitimate questions about values and priorities whose answers should not be limited to experts. These are different from the technical questions, which really do require expertise in order to answer validly. But even those value/priority issues are informed by respondents' beliefs about the technical issues. Thus, the information available to respondents (including in the question wording) must always be considered.