The Year In Research On Market-Based Education Reform: 2012 Edition

Comments

Thanks for a very comprehensive and unbiased update on the latest research. It is pretty rare to see reporting untarnished by agenda or prejudgment. It is important to see that matters such as teacher performance pay are seen in the wider context of attracting and retaining teachers rather than just in terms of the effect on pupil results. But it is yet another form of accountability, and teaching operates best in a relaxed atmosphere, free from scrutiny. Trusting teachers to do their jobs without imposing a tight rein on how they do them seems an idealistic goal, even if Finland shows that it can be done, and to very good effect.

I am dismayed by this piece. You are always polite. But these studies' modest findings have been twisted to support policies for attacking teachers, unions, and public schools. And you bent way too far over backwards to say nice things about them and ignored the ways they are being spun in order to beat down teachers and promote soul-killing bubble-in "accountability." Chetty et al., for instance, promotes policies that are an existential threat to inner-city teachers, and yet it excluded classes where 25% of students are on IEPs. Not having taught in the inner city, the economists (and perhaps you) do not seem to understand how important that is. Nor did you call them out for ignoring qualitative evidence that sorting DOES occur unofficially. Back in the day (when I was in academia), that oversight would have gotten the paper rejected by the entire scholarly community. In my field (history), economists who ignored standard steps for testing whether their models were linked to reality would have been ridiculed (did you hear the one about the econometric model that showed that slaves were treated well...?). Besides, unofficial sorting should be obvious to anyone with actual experience in schools, so Chetty et al. should have had the burden of proof, even if they didn't know that. But why didn't you talk to some teachers about the way that students are sorted? Here's my take on the CREDO study: http://www.schoolsmatter.info/2012/12/the-hoover-institutes-amazing.html You might not think it's funny, but I wish you had noted the extreme difference between the study's spin and the actual findings that are scattered through it.

Also, why didn't you mention this? http://www.washingtonpost.com/blogs/answer-sheet/wp/2012/12/23/the-fundamental-flaws-of-value-added-teacher-evaluation/ I'd be sincerely interested in your (and the other researchers') take on the Florida chart and the difference between teachers' projected and actual value-added. To theorists, the gap might be small. To practitioners, and for policy analysis, it is huge. If every year you see a colleague's career being ruined simply due to imprecise models, even if the advocates of those models say the imprecision is small, how long before you say "take this job and shove it"? Worse, that's an average, but value-added is most invalid for high-poverty schools and, probably, neighborhood secondary schools where peer effects are most negative. If that is the average inaccuracy, then in the tough secondary schools, Florida is going to see an exodus of teaching talent to the schools where it is easier to raise test scores. And that reminds me: why did you not emphasize C. Kirabo Jackson's findings? The negative implications for value-added in that paper dwarf the policy implications of the pro-value-added papers. After all, they are already firing high school teachers before asking whether value-added can be made valid for them. And that also raises the question of the clear political bias of ostensibly scholarly research. Don't you see a pattern in the way the pro-VAM side spun their findings and downplayed the evidence, scattered through their own papers, that argued against their predetermined preferences?

Oops! I'm learning the danger of writing long comments in the Washington Post and then cutting and pasting them here. When writing a long comment with a link, the words bounce up and down, making it impossible for me to read what I wrote. Good thing I later read what I actually wrote before others read it and assumed I'm nuts. The chart was an EXAMPLE, not an average. I meant to ask whether gaps that seem small to a researcher would be seen differently if it were their own career in jeopardy. But the point is the same. The size of errors makes a huge difference if you are talking reality, not theory. If a researcher gets to within 5%, for example, that's great. But would you accept a 5% PER YEAR risk that your career would be damaged or destroyed by such an error? What about 8%? Or 15% PER YEAR? What are the policy implications of imposing value-added evaluations across the nation because inaccuracies can be as small as 5%, when the model can also be about as accurate as a coin flip? THAT is the burden of proof that those researchers should have assumed.
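[Editor's note: to make the compounding arithmetic in the comment above concrete, here is a minimal Python sketch. The error rates and career lengths are the commenter's illustrative figures, not estimates from any of the studies discussed, and the sketch assumes, simplistically, that misclassification errors are independent from year to year.]

    # A sketch of how a per-year misclassification risk compounds over a
    # career. Rates and career lengths are illustrative assumptions only,
    # and errors are assumed independent across years.

    def cumulative_risk(per_year_rate, years):
        """Probability of at least one misclassification in `years` years."""
        return 1.0 - (1.0 - per_year_rate) ** years

    for rate in (0.05, 0.08, 0.15):
        for years in (10, 20, 30):
            print("{:.0%} per year over {} years: {:.0%}".format(
                rate, years, cumulative_risk(rate, years)))

Under these assumptions, even the smallest rate the commenter names, 5% per year, implies roughly a 64% chance of at least one misclassification over a twenty-year career.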

NOTE TO READERS: The fourth paragraph of this post originally stated that Mathematica's report presented findings from a three-year evaluation of the program. It was actually a four-year period. The post has been corrected. I apologize for the error. MD
