The 5-10 Percent Solution

Comments

The Huff Po report on VAMs (http://www.huffingtonpost.com/2010/12/23/teacher-layoffs-seniority_n_800771.html) reported: "Dan Goldhaber, lead author of the study and the center's director, projected that student achievement after seniority-based layoffs would drop by an estimated 2.5 to 3.5 months of learning per student, when compared to laying off the least effective teachers." But the report itself says: "Teachers RIFed in our simulation are approximately 20% of a standard deviation in student performance less effective in student performance than teachers RIFed in reality." Goldhaber misquoted the Boyd et al. study of 2010, but then he said his results were similar to their conclusion that "the typical teacher who is laid off under a value-added system is 26 percent of a standard deviation in student achievement less effective than the typical teacher laid off under the seniority-based policy." Boyd then says that the gap would shrink as the new teachers gained experience. Am I missing something, or is this a huge bluff by Goldhaber et al.? After all, the latest Gates MET study's conclusions contradicted their findings. John
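The apparent gap between "20% of a standard deviation" and "2.5 to 3.5 months of learning" comes down to how SD-denominated effect sizes are converted into months. Here is a back-of-the-envelope sketch of that conversion; the annual learning gains assumed below are purely illustrative, not figures from the Goldhaber report:

```python
# Back-of-the-envelope conversion from effect sizes in student-level
# standard deviations to "months of learning". The annual-gain figures
# below are illustrative assumptions, not numbers from the report;
# typical estimates vary considerably by grade and subject.

EFFECT_SD = 0.20          # reported effectiveness gap, in student SDs
SCHOOL_YEAR_MONTHS = 9    # assumed length of a school year

for annual_gain_sd in (0.5, 0.6, 0.7):  # assumed annual gain, in SDs
    months = EFFECT_SD / annual_gain_sd * SCHOOL_YEAR_MONTHS
    print(f"annual gain {annual_gain_sd} SD -> {months:.1f} months")
```

With assumed annual gains between 0.5 and 0.7 SDs, a 0.20 SD gap maps to roughly 2.6 to 3.6 months, which is at least consistent with the range quoted in the Huffington Post piece.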

"Moreover, you can bet that many teachers, faced with the annual possibility of being fired based on test scores alone, would be even more likely to switch to higher-performing, lower-poverty schools (and/or schools that didn’t have the layoff policy)." But see the recent paper by Figlio/Sass and others showing that teacher value-added scores in Florida and NC aren't that different in high-poverty schools. If the teachers to whom you refer don't know the difference between a simplistic look at test score levels and a value-added system that takes poverty into account, perhaps it would be helpful if people who do know the difference didn't muddy the waters.

I think I understand now what the Goldhaber report says and what it means. I just can’t tell what they actually did. Yes, they ran a simulation, and the teachers RIFed would be estimated to be less effective by 2.5 to 3.5 months. I did not understand that that was what they were doing, for two reasons.

First, that would require them to run the simulation for each district and each subject. Had they done so, I figured, they would have said that was what they did. They have a convoluted footnote that might address this, but I couldn’t figure out what it means, and I assumed that such an effort would be reported in the text. So maybe they did that but did not mention it in prose. But I wonder if they just ran a macro simulation for the entire state, in which the bottom 145 teachers were RIFed without regard for whether the replacement worked in that district or not. (Boyd didn’t have that to worry about, and, speaking of that, their typo threw me off also.) That would be intellectually dishonest, but if they had used a more complex alternative method of running the simulation, would they not have said so?

Second, they described six different VAM scenarios, but they never said precisely which one they used. Had they chosen one method for the simulation, I thought, they would say which one. But they promised a VAM that took into account comparability of schools and districts. So I read the article wondering if they had started with some generic VAMs for the simulation and used six more refined models for their final tables. Rereading again, they said at one point that they incorporated parts of scenarios #2 and #3, and at another point #2 and #5. I didn’t understand #5. Regarding #2, they said its weakness was that it could not account for student, classroom, and district characteristics. But then they said it could be adjusted to reach those characteristics, and they didn’t say whether they did that or not in the simulation. So when I read and reread the report, I concentrated on the tables, which were the only place where they said what they were controlling for. I have to say that, even though I’m not a statistician, I never had these problems reading academic papers on, say, econometrics. There are reasons why scholarship has certain conventions, and if the Gates people followed them, their reports would be more intellectually honest.

But now I realize they meant what was reported, and that their simulation would mean that RIFs based on effectiveness would increase student performance by up to 3.5 months. They just didn’t say how they did the simulation. I’m assuming now that they meant that using VAMs to determine which 2200 teachers get layoff notices would mean that the students of 145 teachers would benefit by that much, but they still say little or nothing about how they reached the headlined conclusion, in contrast to the detail they provided for minor points. For instance, could they be running a simulation in which those gains would be produced by replacing a senior teacher with low test score gains with a teacher in another type of school in another district who had high gains? That seems too absurd. But it also seems absurd that they would run a simulation without reporting what that simulation was. Sorry to bother you about this.
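For concreteness, here is a minimal sketch of the kind of RIF simulation the report seems to describe. Everything here is hypothetical (the simulated data, the sample sizes, the scenario labels); as the comment above notes, the report's actual procedure is not spelled out:

```python
import random
from statistics import mean

# A hypothetical sketch of a layoff (RIF) simulation; this is NOT the
# authors' actual procedure, which is not detailed in the report.
# Each teacher gets a district, years of seniority, and a simulated
# value-added (VA) estimate in student SD units.

random.seed(0)
teachers = [
    {"district": random.choice(["A", "B", "C"]),
     "seniority": random.randint(1, 30),
     "va": random.gauss(0.0, 0.15)}
    for _ in range(1000)
]

N_RIF = 100  # number of layoff notices to simulate

# Scenario 1: seniority-based layoffs (last in, first out).
by_seniority = sorted(teachers, key=lambda t: t["seniority"])[:N_RIF]

# Scenario 2: effectiveness-based layoffs (lowest estimated VA first).
by_va = sorted(teachers, key=lambda t: t["va"])[:N_RIF]

gap = mean(t["va"] for t in by_seniority) - mean(t["va"] for t in by_va)
print(f"Effectiveness gap between the two RIFed groups: {gap:.2f} SDs")
```

The commenter's statewide-versus-district question corresponds to whether the sorting above is done over the whole teacher pool at once or separately within each district's group of teachers.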

Stuart - thank you for the comment, as always. I am very much aware of that paper. In fact, it is cited in this post, along with the very point that you make: that VA scores are lower in high-poverty schools, but not drastically so. I suppose I might have repeated it, or moved it to the section you quote, but I don’t think I muddied the waters. Anyway, you're correct that teachers who think they're *guaranteed* to get better VA scores in lower-poverty schools are misinformed, but your claim that "a value-added system takes poverty into account" is true only to a degree (as you know). There are unobserved advantages to working in a lower-poverty school that the models don’t capture (e.g., peer effects, school environment), even if they don’t translate into huge aggregate differences. Consider also that, as Sass et al. find, the returns to experience seem to be stronger in lower-poverty schools. Finally, I would bet that many movers would be motivated by working conditions, rather than job security (I might have made this more clear). For example, teachers might move to more affluent schools to avoid the exacerbated turnover problem that many high-poverty schools would likely face under this policy. Thanks again. Please keep reading and commenting.

Thanks for these clarifying remarks. I hadn't seen that you linked to that paper. I do think that if there's a problem of teachers migrating away from poorer schools under any regime that takes value-added into account, that migration will be mainly the result of teachers failing to understand what value-added really means. Even if researchers can't fully take everything into account, they can select the basis of comparison: to other teachers in the same school or to teachers across a district. It seems intuitive to me that if teachers in a poor school are being compared only to other teachers within the same school, it is much harder to make the excuse that poor value-added scores are due to anything unobservable about the school -- the other teachers to whom you're being compared suffer from the same school-wide obstacles. (Other objections to value-added remain, of course, such as a small n for any given teacher.)
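Stuart's within-school comparison amounts to subtracting the school mean from each teacher's value-added estimate, which removes any factor that shifts all of a school's teachers equally. A minimal sketch, with made-up scores:

```python
from collections import defaultdict
from statistics import mean

# Sketch of a within-school comparison: subtract the school mean from
# each teacher's value-added (VA) estimate. Any school-wide factor
# (poverty, climate, leadership) that shifts all teachers in a school
# equally drops out of the comparison. All scores below are made up.

raw_va = {
    ("high_poverty", "t1"): -0.15,
    ("high_poverty", "t2"): -0.05,
    ("high_poverty", "t3"):  0.02,
    ("low_poverty",  "t4"):  0.05,
    ("low_poverty",  "t5"):  0.12,
    ("low_poverty",  "t6"):  0.20,
}

school_scores = defaultdict(list)
for (school, _), va in raw_va.items():
    school_scores[school].append(va)
school_mean = {s: mean(v) for s, v in school_scores.items()}

for (school, teacher), va in raw_va.items():
    print(f"{teacher} ({school}): raw {va:+.2f}, "
          f"within-school {va - school_mean[school]:+.2f}")
```

Note that the high-poverty teachers' raw scores are lower across the board, but their within-school scores are centered on zero, just like the low-poverty teachers'.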

Thanks for the detailed and thoughtful analysis of an idea that usually receives only a superficial glance. I also agree with you (and I think polls support the idea) that work conditions are the #1 motivating factor in staying at a school or leaving. Stuart, I must take issue with your comment about teacher comparisons within schools. There is a considerable obstacle in the fact that teacher-student assignments are not random. Some VAM research has found "false positives": correlations between fifth-grade teachers and their students' fourth-grade test scores. I don't think I've ever heard of a school engaging in any truly random assignment unless it was for a study! And as an elementary school parent, I wouldn't want random assignment; I prefer thoughtful assignment. Then, as a secondary school teacher, I can tell you that there are HUGE variables among sections of the same course, even with students drawn from the same pool. The pushes and pulls on a high school schedule ensure that certain clusters will form and move through their day together. If your class meets at the same time as certain honors or remedial classes, you'll have the contrasting group disproportionately represented in your room. If you teach one high-needs special education student with an instructional aide, there's an extra adult in the room and usually a positive effect. Same student, different time of day, no aide, and the class is harder to teach. I could go on and on.
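The "false positives" David mentions are the basis of a standard falsification test: if assignment were random, a fifth-grade teacher should not predict her students' fourth-grade scores. A toy simulation of that logic, with entirely made-up data:

```python
import random
from statistics import mean

# Toy version of a falsification test: under random assignment, a
# fifth-grade teacher's class should not differ systematically in
# FOURTH-grade (prior-year) scores. Tracked (non-random) assignment
# produces large gaps in prior scores across classes, which a naive
# VA model can misread as teacher effects. All data are simulated.

random.seed(1)

def prior_score_spread(tracked: bool) -> float:
    """Spread in class-mean prior scores across 4 classes of 25."""
    prior_scores = [random.gauss(0, 1) for _ in range(100)]
    if tracked:
        prior_scores.sort()        # classes grouped by prior achievement
    else:
        random.shuffle(prior_scores)
    classes = [prior_scores[i * 25:(i + 1) * 25] for i in range(4)]
    class_means = [mean(c) for c in classes]
    return max(class_means) - min(class_means)

print("random assignment: ", round(prior_score_spread(False), 2))
print("tracked assignment:", round(prior_score_spread(True), 2))
```

The tracked case shows a much larger spread in prior-year scores across classes, which is exactly the signature of non-random assignment that the falsification test detects.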

David, You’re correct: There’s a pretty solid body of evidence showing that non-economic working conditions – rather than salary or job security – are the primary factor driving mobility decisions. The characteristics of students seem particularly important. For example, see: http://edpro.stanford.edu/hanushek/admin/pages/files/uploads/Hanushek+Kain+Rivkin%202004%20JHumRes%20392.pdf But salary does matter too: http://faculty.smu.edu/millimet/classes/eco7321/papers/clotfelter%20et%20al%2003.pdf Here’s a good review of the retention literature: http://www.aera.net/uploadedFiles/Publications/Journals/Review_of_Educational_Research/7602/04_RER_Guarino.pdf Thanks for the comment.

Just to be clear, since there appears to be some confusion, nothing in these calculations or in the accompanying article says anything about test-based decision making or firing. Value-added measures do provide information, but nobody advocates making decisions solely on the basis of such scores. What the article says is that the bottom teachers are harming kids and that we need to find a way to do something about that. The best would be to transform these teachers -- through coaching, professional development, or what have you -- into better teachers. Unfortunately, we have been unable to find a way to do that systematically and consistently. The continual citation of Finland does not help either. What the Finns have learned is how to make sure that an ineffective teacher does not remain in the classroom for very long. This is something we have to learn in the U.S. I also do not understand why the vast majority of hardworking and able teachers are willing to be lumped together with the small number of truly ineffective teachers. It surely is not any confusion about who the ineffective teachers are. Parents, other teachers, and principals do appear to know who the ineffective teachers are. Developing a good evaluation system for teachers would be a start. Again, we have talked about that for many years, but it has not happened in many districts.

David -- non-random assignment is one of the other potential problems to which I referred, but I don't see how it has anything to do with the problem I was addressing: the allegation that teachers will leave high-poverty schools in droves because they will be afraid of low value-added scores. If teachers in high-poverty schools are compared to other teachers within the same school, then the fact that the school is high-poverty -- in and of itself -- ought to have no effect on the value-added scores.

Mr. Hanushek, Your comment is much appreciated. While I understand what you’re saying about the confusion, I do think I characterized your argument in the manner you describe. I pointed out that it wasn’t an actual policy proposal, but rather an illustration. I also noted your position that improvement is the preferable course. If this was not clear enough, I apologize. Nevertheless, your own words are easily misunderstood. Near the beginning of the chapter, you write, “This discussion provides a quantitative statement of one approach to achieving the governors’ (and the nation’s) goals – teacher deselection. Specifically, how much progress in student achievement could be accomplished by instituting a program of removing, or deselecting, the least-effective teachers?” And the approach consists of deselection based entirely on value-added estimates. This type of statement can easily be interpreted in a manner quite different from your comment. Surely you know how subtlety is lost in our public discourse, and how, taken literally, your calculation represents the intoxicating promise of a “quick fix.” And, indeed, I have heard many people misuse your research to advocate, implicitly or explicitly, for a policy of systematic firing based solely or predominantly on value-added estimates. Perhaps you aren’t aware of how often this happens. So many people with whom I have spoken were surprised, reading my post, to learn that you favor, albeit with skepticism, improvement over dismissals. You could help correct that misperception. I realize you’re a researcher and not an advocate, but your voice carries a lot of weight. When you speak to reporters and policymakers, I hope you lead off with the improvement message. I hope you tell them that evaluations and other measures to increase effectiveness should be our priority. For whatever it’s worth, you’d get tremendous support from many people, including many thousands of the great teachers you celebrate. Thanks again, Matt
