Surveying The Teacher Opinion Landscape

I’m a big fan of surveys of teachers’ opinions of education policy, not only because of educators' valuable policy-relevant knowledge, but also because their views are sometimes misrepresented or disregarded in our public discourse.

For instance, the diverse set of ideas that might be loosely characterized as “market-based reform” faces a bit of tension when it comes to teacher support. Without question, some teachers support the more controversial market-based policy ideas, such as pay and evaluations based substantially on test scores, but most do not. The relatively low levels of teacher endorsement don’t necessarily mean these ideas are “bad,” and much of the disagreement is less about the desirability of general policies (e.g., new teacher evaluations) than about the specifics (e.g., the measures that comprise those evaluations). In any case, it's a somewhat awkward juxtaposition: A focus on “respecting and elevating the teaching profession” by means of policies that most teachers do not like.

Sometimes (albeit too infrequently) this tension is discussed meaningfully; other times it is obscured, e.g., by attempts to portray teachers' disagreement as "union opposition." But, as mentioned above, teachers are not a monolith and their opinions can and do change (see here). This is, in my view, a situation always worth monitoring, so I thought I’d take a look at a recent report from the organization Teach Plus, which presents data from a survey that they collected themselves.

The primary conclusion of this report is that there is a split between younger and more experienced teachers in terms of their policy views, with the two groups defined in this analysis as teachers with 10 or fewer years on the job versus those with 11+ years (Teach Plus calls the former "The New Majority," based on a recent paper on trends in teacher characteristics). In general, the report concludes, the younger group is more supportive of policies such as performance-based pay, new evaluations that incorporate “student growth” and 401(k)-style or other defined-contribution pension plans. In contrast, there is more “inter-generational” agreement on other policies, such as collaboration time, class size and extended time. The authors offer some discussion of what these results mean for teacher retention and other outcomes.

For the record, I do not mean to take pot shots at Teach Plus’ work. They are not a professional polling organization (nor am I a professional pollster), and I applaud their efforts to listen to teachers via surveys. Furthermore, while I might take issue with some of their interpretations, the narrative is not blatantly skewed toward a specific perspective. But I think this report illustrates a few key issues, the most important of which are not at all specific to Teach Plus.

First, with regard to this particular report: this is not a scientific survey, a crucial fact that is not even mentioned once within the body of the report. Teach Plus collected surveys from roughly 1,000 respondents. This is no easy task no matter how it's done, but the survey was conducted online, distributed via “social media sites and education organizations." The respondents may therefore be different from the typical U.S. teacher in terms of their views, as may the relationship between opinions and experience. This is especially salient given that Teach Plus is an advocacy group, and thus their supporters and followers are likely overrepresented in the survey.

Non-random surveys can be useful, but they always require very careful interpretation, and, if they’re to be used to draw conclusions about teachers in general, one must carry out a series of diagnostic checks to determine whether the sample’s measurable characteristics match the population (see this well-done recent Education Sector report for an example of a random survey).
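
To make that a bit more concrete, here is a minimal sketch of one such diagnostic check, written in Python with entirely invented numbers (neither the sample counts nor the benchmark shares below come from Teach Plus or the Schools and Staffing Survey). The idea is simply to compare the sample's distribution on a measurable characteristic, such as experience, against a population benchmark:

    # A rough sample-vs-benchmark check; all numbers below are invented.
    from scipy.stats import chisquare

    # Hypothetical survey counts by years of experience
    sample_counts = {"1-3": 150, "4-10": 400, "11-20": 300, "21+": 150}

    # Hypothetical population shares for the same bands (in practice these
    # would come from a benchmark such as the Schools and Staffing Survey)
    population_shares = {"1-3": 0.19, "4-10": 0.33, "11-20": 0.28, "21+": 0.20}

    n = sum(sample_counts.values())
    observed = [sample_counts[band] for band in sample_counts]
    expected = [population_shares[band] * n for band in sample_counts]

    # Side-by-side comparison of sample and benchmark proportions
    for band in sample_counts:
        print(f"{band:>6}: sample {sample_counts[band] / n:.2f}, "
              f"benchmark {population_shares[band]:.2f}")

    # Goodness-of-fit test: does the sample's experience distribution depart
    # from the benchmark more than chance alone would suggest?
    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.1f}, p = {p_value:.3f}")

In practice, you would want to run this sort of comparison across several characteristics (experience, grade level, school type, region, and so on), and even a clean result only tells you that the sample resembles the population on things you can measure; it says nothing about unmeasured differences, such as the attitudes the survey is trying to capture.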

Teach Plus' discussion of their sample is limited to an appendix, in which they almost seem to imply that it is more or less valid because the percentage of teachers with 10 or fewer years of experience is similar to the U.S. average (at least what it was in 2007-08, the most recent available year of the Schools and Staffing Survey, which is one of the few national surveys of U.S. teachers).

Even taken at face value, this is painfully insufficient. Making things worse, the limited information provided in the Teach Plus report actually suggests the need for serious caution about the sample (see the first footnote, below).*

On a less important note, I may be missing something, but I was unable to find a complete set of tabulations and question wordings. This may sound nitpicky, but it really does make it more difficult to interpret the rather limited, highly aggregated set of results presented in the report.**

If you go to the trouble of collecting survey data, you might as well present all of it, even if it's done in a supplemental document.

But my two most important points are not criticisms per se, and they are not at all unique to this particular survey. The first is a suggestion about question wording. The Teach Plus report finds that their less experienced respondents (1-10 years) are more receptive than their veteran counterparts to the idea that “student growth should be part of teacher evaluations." This wording, though common, is not as helpful as it could be. "Student growth” means different things to different teachers.

Some teachers may associate "growth" with standardized test-based measures (e.g., value-added), whereas others may see it differently (e.g., they may interpret it as growth based on other types of assessments). This is important not only because the choice of measures is a very contentious issue, one that states and districts are currently facing, but also because a significant proportion of teachers react favorably to policies that use “growth” or “progress,” yet this support drops to extremely low levels if you ask directly about standardized test scores. And these perceptions may vary by experience or other characteristics. So, I think it may be time for surveys to stop asking about “growth” or "progress," and instead be more specific (preferably querying views on different types of "progress" measures). This would be more helpful in the actual debate about evaluations, as well as other policies, such as performance pay.***

Second, and most generally, there’s another important (though perhaps obvious) distinction I'd like to point out, one that is sometimes obscured a bit by rhetoric about the “new generation of teachers." This is the difference between an age/experience “effect” and a cohort “effect” (in this context, "association" is a better word than "effect"). For example, it’s not surprising that, at any given time, younger, less experienced teachers are more receptive to ideas such as receiving additional salary instead of larger pension contributions (here's a related paper). By itself, that’s best characterized as an age/experience "effect."

A cohort "effect," on the other hand, would be if the new generation of teachers holds different views than preceding cohorts. In other words, are today’s less experienced teachers more supportive on issues such as pensions or seniority than less experienced teachers from previous years?

This matters because, put simply, there may be more aggregate support for some issues in 2012 in no small part because teachers have fewer years on the job, on average, today than in previous years. Put differently, there’s a meaningful difference between support levels that rise or fall due to demographic changes versus “real” shifts in attitudes, especially when those characteristics (e.g., experience) are not fixed. It's true that average experience has been declining in recent years (and will most likely continue to do so for a while), and that increasing retention matters most during the first few years in the classroom. Nevertheless, we should be careful about drawing conclusions about changing attitudes among the "new generation" of teachers based solely on breakdowns by age or experience, rather than changes over time within these groups. At the very least, we should acknowledge the difference.
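
To illustrate with deliberately made-up numbers, here is a small Python sketch in which within-group support is held fixed while the experience mix of the workforce shifts toward newer teachers. Aggregate support rises even though no group changed its mind; that is a composition (age/experience) story, not a cohort story:

    # Toy example: composition change versus "real" attitude change.
    # All figures are invented for illustration.

    # Support for some policy, held fixed within experience groups
    support = {"newer (1-10 yrs)": 0.60, "veteran (11+ yrs)": 0.40}

    # The experience mix of the workforce shifts over time
    mix_then = {"newer (1-10 yrs)": 0.40, "veteran (11+ yrs)": 0.60}
    mix_now = {"newer (1-10 yrs)": 0.55, "veteran (11+ yrs)": 0.45}

    def aggregate_support(mix, support):
        """Overall support as the experience-weighted average of group support."""
        return sum(mix[group] * support[group] for group in mix)

    print(f"then: {aggregate_support(mix_then, support):.2f}")  # 0.48
    print(f"now:  {aggregate_support(mix_now, support):.2f}")   # 0.51
    # Aggregate support rises by three points with no change in any group's
    # views. A cohort story would require the 0.60/0.40 figures themselves
    # to differ across entering cohorts.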

So, overall, I think it’s great when Teach Plus and other organizations go to the trouble of collecting teacher survey data and presenting it for public consumption. Even voluntary surveys can be useful if properly interpreted, and, again, I am more than receptive to the possibility that teachers’ attitudes toward issues like evaluation and compensation are evolving. But, if we’re going to listen to teachers when shaping policy – and we most certainly should do so – let’s make sure we’re doing it correctly.

- Matt Di Carlo

*****

* For instance, 20 percent of Teach Plus’ sample has 1-5 years of experience (5 percent 1-2 years, 15 percent 3-5 years). Nationally, however, in 2007-08, 19 percent of public school teachers had 1-3 years of experience, which means that the proportion of U.S. teachers with 1-3 years in 2007-08 was roughly equivalent to the Teach Plus proportion with 1-5 years. These underlying distributions matter. Similarly, in the appendix of the report, we learn that 10 percent of the Teach Plus sample consists of charter school teachers. Nationally, in 2007-08, the proportion was roughly two percent. Given rapid charter school proliferation, this is almost certainly higher now, but it’s doubtful that it’s anywhere near 10 percent. And, once again, we would really need a bunch of other variables to evaluate the sample.

** For example, for many of the questions, respondents were asked to choose from one of five categories ranging from “very important” to “not at all important” (the actual label for the latter category is not specified, so that’s my guess). But none of the results in the report break down responses by category. They either present the responses as averages on the 1-5 scale (not a great practice for this type of ordinal variable), or as “percent who agree/disagree” dichotomies. Neither permits the reader to distinguish between different levels of agreement/disagreement. Similarly, there are no breakdowns of attitudes by experience that don’t rely on the “10 or fewer years/11+ years” dichotomy. Although estimates for smaller subsamples will be more imprecise, variation in views within these groups is very important. For instance, given that the narrative is primarily focused on identifying implications for teacher retention, estimates for teachers with 1-3 (or 1-5) years on the job would seem to be the most pertinent.

*** In contrast, and to Teach Plus’ credit, they do ask a question about the proper weighting of the “growth” component – specifically, whether it should be 20 percent or higher. I’d personally like to see more surveys ask this important question.

I'm glad you wrote about this Matt, because this paper rubbed me the wrong way, but I couldn't quite articulate why. The "cohort" vs. "age/experience" idea makes the point well: there's a difference between a generation gap and a generational gap.

The only thing I'd add is that the folks who designed the survey only cared (as far as I can tell) about breaking down their respondents' characteristics by experience. Naturally, any difference (or similarity) between the two groups will be highlighted, because the teachers haven't been broken down into other groupings. But what if the differences along other dimensions were greater?

For example, what if we broke down teachers by work assignment? By school population demographics? What if less-experienced teachers are more likely to be assigned outside their field of expertise/choice, and to schools with larger populations of minority and/or low-SES students? Couldn't that affect the responses?

It's clear no one cared to explore that here. The fact that the only difference Teach Plus cared about was "vets" vs. "newbies" speaks volumes.


JJ,

Correct; as noted in the second footnote, the breakdowns are by experience only.

It's true, as you point out, that the "experience effect" might be confounded by other factors, such as school type. But I'm not sure there were ulterior motives here. For example, Teach Plus may have simply wanted to focus on the experience aspect because of the significant change in the distribution over recent years. That said, it would have been very helpful for them to provide a full set of results, including breakdowns by other characteristics.

Thanks,
MD


Matt,

A very thoughtful blog post, as always. You put such care into these things!

Overall, fair point. I do have a question.

Your critique, "The respondents may therefore be different from the typical U.S. teacher" ... does this also apply to teachers unions, and those who choose to vote in those elections (analogous to those who choose to respond to a survey)?

I.e., would you suggest the president of a state union typically issue a disclaimer along the lines of "Only 15% of our members voted in the last election, so my views as president may be different from the typical teacher?"


MG,

Yes, it's quite possible, if not likely, that teachers who vote in union elections are different from the typical teacher in their particular bargaining unit (I've never really looked to see if anyone has done a systematic analysis). Similarly, the average voter in U.S. presidential and midterm elections is different from the average eligible voter.

I don't have an opinion on whether or not union presidents (or any elected officials) should issue the same type of disclaimer as a non-random survey. It strikes me as a very different context.

MD