Teacher Surveys And Standardized Tests: Different Data, Similar Warning Labels

Comments

Of course, some of the objections are true for both sets of data. But there are particular issues with the use of standardized tests that do not apply here. 1) This might be our only source of data on teacher attitudes. We already have grades and graduation rates to look at student/school success. 2) We know that standardized tests do NOT actually measure the whole construct they claim to measure. I don't mean that they sample from it; I mean that they do not even try to measure some parts of the curriculum or standards to which they claim to be aligned. 3) The kinds of standardized tests we are willing to use are incapable of measuring the higher order thinking/21st century skills/non-cognitive skills/habits of mind that so many (most? all?) people think are the most important lessons of schooling. These things are hardly even in the standards/curriculum -- though CCSS brings some of them in -- so the assessment developers aren't even supposed to assess them.

The fact is that we do not treat these kinds of data the same. When looking at the teacher survey, we focus on particular questions and dive into what they mean -- some paying more attention to the actual wording than others. But we don't do that for standardized tests. In fact, for test security reasons we often do not know what the questions are. With multiple forms, students don't all get the same questions, and so on.

And so, I do not think acceptance of or caution about the teacher survey data should match acceptance of or caution about testing results as closely as you say (imply?). Yes, there are some statistical concerns they should have in common, but those are merely the simplest concerns. (Of course, there are concerns about the surveys that do not apply to the tests: the trickiness of wording, something you point to quite a bit, and the focus on just one item, when good research surveys build a composite score out of multiple items -- which exacerbates the wording issue. Etc.)

Thank you for an important point about testing. Looking at the actual survey report you are citing (thanks for the link), I don't agree with your statement that the current question wording "precludes straight comparisons of responses to the teacher job satisfaction question with those of the surveys conducted in 2009 or earlier." The chart on p. 45 notes that the current wording was used in the original 1984 survey and in 1986, 1987, 2001, 2011 and 2012. So direct comparisons for those years are possible, and the data show a clear decline in recent years: 13 percentage points since 2001, and 5 points in the last year, to a 25-year low. I think you also make the point that two people can look at the same data and see different things.

edwatcher, You are absolutely correct. A careless error on my part. Thank you for pointing it out. MD

NOTE TO READERS: The original version of this post stated that the change in wording of the teacher satisfaction question "precludes straight comparisons of responses to the teacher job satisfaction question with those of the surveys conducted in 2009 or earlier." This was incorrect. The current wording was also used in 1984, 1986, 1987, 2001 and 2011. The post has been corrected. I apologize for the error, and thank you to edwatcher, the astute commenter who noticed it. MD
