• Teacher Leadership As A School Improvement Strategy

    Our guest author today is David B. Cohen, a National Board Certified high school English teacher in Palo Alto, CA, and the associate director of Accomplished California Teachers (ACT). His blog is at InterACT.

    As we settle into 2013, I find myself increasingly optimistic about the future of the teaching profession. There are battles ahead, debates to be had and elections to be contested, but, as Sam Cooke sang, “A change is gonna come.”

    The change that I’m most excited about is the potential for a shift towards teacher leadership in schools and school systems. I’m not naive enough to believe it will be a linear or rapid shift, but I’m confident in the long-term growth of teacher leadership because it provides a common ground for stakeholders to achieve their goals, because it’s replicable and scalable, and because it’s working already.

    Much of my understanding of school improvement comes from my teaching career – now approaching two decades in the classroom, mostly in public high schools. However, until six years ago, I hadn’t seen teachers putting forth a compelling argument about how we might begin to transform our profession. A key transition for me was reading a Teacher Solutions report from the Center for Teaching Quality (CTQ). That 2007 report, Performance-Pay for Teachers: Designing a System that Students Deserve, showed how the concept of performance pay could be modified and improved upon, with clearer definitions of multiple forms of performance and pay differentiated according to differentiated professional practice, rather than arbitrary test score targets. I ended up joining the CTQ Teacher Leaders Network that same year, and have had the opportunity ever since to learn from exceptional teachers from around the country.

  • Revisiting The "Best Evidence" Theory Of Charter School Performance

    Among the more persistent arguments one hears in the debate over charter schools is that the “best evidence” shows charters are more effective. I have discussed this issue before (as have others), but it seems to come up from time to time, even in mainstream media coverage.

    The basic point is that we should essentially dismiss – or at least regard with extreme skepticism – the two dozen or so high-quality “non-experimental” studies, which, on the whole, show modest or no differences in test-based effectiveness between charters and comparable regular public schools. In contrast, “randomized controlled trials” (RCTs), which exploit the random assignment of admission lotteries to control for differences between students, tend to yield positive results. Since, so the story goes, the “gold standard” research shows that charters are superior, we should go with that conclusion.
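
    To make the lottery logic concrete, here is a minimal simulated sketch (all numbers are hypothetical assumptions, not drawn from any actual study). Because lottery winners and losers are comparable on average, a simple difference in mean outcomes recovers the charter effect even when an important student characteristic is unobserved:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical applicant pool: unobserved "motivation" affects test
    # scores, but the lottery assigns seats independently of it.
    n = 10_000
    motivation = rng.normal(0, 1, n)      # unobserved confounder
    won_lottery = rng.random(n) < 0.5     # random admission offer

    TRUE_EFFECT = 0.05                    # assumed charter effect, in score SDs
    scores = motivation + TRUE_EFFECT * won_lottery + rng.normal(0, 1, n)

    # Winners and losers are alike on average, so the simple difference in
    # means is an unbiased estimate of the effect; a naive comparison of
    # self-selected attendees to non-applicants would not be.
    estimate = scores[won_lottery].mean() - scores[~won_lottery].mean()
    print(f"estimated effect: {estimate:+.3f} (true: {TRUE_EFFECT:+.3f})")
    ```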

    RCTs, though not without their own limitations, are without question powerful, and there is plenty of subpar charter research out there. That said, however, the “best evidence” argument is not particularly compelling (and it also distracts from the welcome shift away from obsessing about whether charters do or don't work and toward examining why). A full discussion of the methodological issues in the charter school literature would be long and burdensome, but it might be helpful to lay out three very basic points to bear in mind when you hear this argument.

  • Why Did Florida Schools' Grades Improve Dramatically Between 1999 and 2005?

    ** Reprinted here in the Washington Post

    Former Florida Governor Jeb Bush was in Virginia last week, helping push for a new law that would install an “A-F” grading system for all public schools in the commonwealth, similar to a system that has existed in Florida for well over a decade.

    In making his case, Governor Bush put forth an argument about the Florida system that he and his supporters use frequently. He said that, right after the grades went into place in his state, there was a drop in the proportion of D and F schools, along with a huge concurrent increase in the proportion of A schools. For example, as Governor Bush notes, in 1999, only 12 percent of schools got A's. In 2005, when he left office, the figure was 53 percent. The clear implication: It was the grading of schools (and the incentives attached to the grades) that caused the improvements.

    There is some pretty good evidence (also here) that the accountability pressure of Florida’s grading system generated modest increases in testing performance among students in schools receiving F's (i.e., the grade to which consequences were attached), and perhaps among higher-rated schools as well. However, putting aside the serious confusion about what Florida’s grades actually measure, as well as the incorrect premise that we can evaluate a grading policy's effect by looking at the simple distribution of those grades over time, there’s a much deeper problem here: The grades changed in part because the criteria changed.
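
    To see how much criteria changes alone can matter, consider a toy illustration (the performance index and cutoffs below are made up, not Florida's actual formulas). Holding school performance completely fixed, loosening the A cutoff can move the share of A schools from roughly 12 percent to roughly 53 percent:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical school performance index, identical in both years.
    scores = rng.normal(500, 50, 2500)

    def share_a(a_cutoff):
        """Percent of schools receiving an A under a given cutoff."""
        return 100 * (scores >= a_cutoff).mean()

    # Same schools, same performance -- only the grading criteria change.
    for year, cutoff in (("year 1", 560), ("year 2", 496)):
        print(f"{year}: A cutoff {cutoff} -> {share_a(cutoff):.0f}% A schools")
    ```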

  • A New Twist On The Skills "Blame Game"

    It is conventional wisdom that the United States is suffering from a severe skills shortage, for which low-performing public schools and inadequate teachers must shoulder part of the blame (see here and here, for example).  Employers complain that they cannot fill open slots because there are no Americans skilled enough to fill them, while pundits and policymakers – President Barack Obama and Bill Gates, among them – respond by pushing for unproven school reform proposals, in a desperate effort to rebuild American economic competitiveness.

    But, what if these assumptions are all wrong?

    What if the deficiencies of our educational system have little to do with our current competitiveness woes? A fascinating new book by Peter Cappelli, Why Good People Can't Get Jobs: The Skills Gap and What Companies Can Do About It, builds a strong case that common business practices – failure to invest adequately in on-the-job training, offering noncompetitive wages and benefits, and relying on poorly designed computer algorithms to screen applicants – are to blame, not failed schools or poorly prepared applicants.

  • A Few Quick Fixes For School Accountability Systems

    Our guest authors today are Morgan Polikoff and Andrew McEachin. Morgan is Assistant Professor in the Rossier School of Education at the University of Southern California. Andrew is an Institute of Education Sciences postdoctoral fellow at the University of Virginia.

    In a previous post, we described some of the problems with the Senate's Harkin-Enzi plan for reauthorizing the No Child Left Behind Act, based on our own analyses, which yielded three main findings. First, selecting the bottom 5% of schools for intervention based on changes in California’s composite achievement index resulted in remarkably unstable rankings. Second, identifying the bottom 5% based on schools' lowest performing subgroup overwhelmingly targeted schools serving larger numbers of special education students. Third and finally, we found evidence that middle and high schools were more likely to be identified than elementary schools, and that smaller schools were more likely to be identified than larger ones.

    None of these findings was especially surprising (see here and here, for instance), and could easily have been anticipated. Thus, we argued that policymakers need to pay more attention to the vast (and rapidly expanding) literature on accountability system design.
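
    As a back-of-the-envelope illustration of the first finding (a simulation with made-up parameters, not the authors' actual analysis), suppose each school's index equals a stable true performance level plus yearly noise. Year-over-year changes are then dominated by noise, and the "bottom 5%" flagged by changes barely overlaps from one period to the next:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical schools: stable "true" performance plus yearly noise.
    n_schools = 2000
    true_perf = rng.normal(0, 1, n_schools)

    def bottom_5pct_by_change(noise_sd=0.5):
        """Schools in the bottom 5% of year-over-year index changes."""
        year1 = true_perf + rng.normal(0, noise_sd, n_schools)
        year2 = true_perf + rng.normal(0, noise_sd, n_schools)
        change = year2 - year1          # true performance cancels out
        cutoff = np.quantile(change, 0.05)
        return set(np.where(change <= cutoff)[0])

    # Flag the bottom 5% in two independent periods and check the overlap;
    # if changes were a reliable signal, the same schools would recur.
    flagged_a = bottom_5pct_by_change()
    flagged_b = bottom_5pct_by_change()
    overlap = len(flagged_a & flagged_b) / len(flagged_a)
    print(f"share flagged in both periods: {overlap:.0%}")  # ~5% = pure chance
    ```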

  • Why Nobody Wins In The Education "Research Wars"

    ** Reprinted here in the Washington Post

    In a recent post, Kevin Drum of Mother Jones discusses his growing skepticism about the research behind market-based education reform, and about the claims that supporters of these policies make. He cites a recent Los Angeles Times article, which discusses how, in 2000, the San Jose Unified School District in California instituted a so-called “high expectations” policy requiring all students to pass the courses necessary to attend state universities. The reported percentage of students passing these courses increased quickly, causing the district and many others to declare the policy a success. In 2005, Los Angeles Unified, the nation's second largest district, adopted similar requirements.

    For its part, the Times performed its own analysis, and found that the San Jose pass rate was actually no higher in 2011 than in 2000 (indeed, slightly lower for some subgroups), and that the district had overstated its early results by classifying students in a misleading manner. Mr. Drum, reviewing these results, concludes: “It turns out it was all a crock.”

    In one sense, that's true – the district seems to have reported misleading data. On the other hand, neither San Jose Unified's original evidence (with or without the misclassification) nor the Times analysis is anywhere near sufficient for drawing conclusions – “crock”-based or otherwise – about the effects of this policy. This illustrates the deeper problem here, which is less about one “side” or the other misleading with research than about something much more difficult to address: common misconceptions that impede distinguishing good evidence from bad.

  • Living In The Tails Of The Rhetorical And Teacher Quality Distributions

    A few weeks ago, Students First NY (SFNY) released a report, in which they presented a very simple analysis of the distribution of “unsatisfactory” teacher evaluation ratings (“U-ratings”) across New York City schools in the 2011-12 school year.

    The report finds that U-ratings are distributed unequally. In particular, they are more common in schools with higher poverty, more minorities, and lower proficiency rates. Thus, the authors conclude, the students who are most in need of help are getting the worst teachers.

    There is good reason to believe that schools serving larger proportions of disadvantaged students have a tougher time attracting, developing and retaining good teachers, and there is evidence of this, even based on value-added estimates, which adjust for these characteristics (also see here). However, the assumptions upon which this Students First analysis is based are better seen as empirical questions, and, perhaps more importantly, the recommendations they offer are a rather crude, narrow manifestation of market-based reform principles.

  • Value-Added As A Screening Device: Part II

    Our guest author today is Douglas N. Harris, associate professor of economics and University Endowed Chair in Public Education at Tulane University in New Orleans. His latest book, Value-Added Measures in Education, provides an accessible review of the technical and practical issues surrounding these models.

    This past November, I wrote a post for this blog about shifting course in the teacher evaluation movement and using value-added as a “screening device.”  This means that the measures would be used: (1) to help identify teachers who might be struggling and for whom additional classroom observations (and perhaps other information) should be gathered; and (2) to identify classroom observers who might not be doing an effective job.

    Screening takes advantage of the low cost of value-added and the fact that the estimates are more accurate for making general assessments of performance patterns across teachers. At the same time, it avoids the weaknesses of value-added when used for high-stakes decisions – especially that the measures are often inaccurate for individual teachers, and that teachers find them confusing and not very credible.
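
    A minimal sketch of what such a screening rule might look like (the data and the two-standard-error threshold are illustrative assumptions, not Harris's specification): teachers whose value-added estimate falls well below average, relative to its standard error, are flagged for additional classroom observation rather than for any automatic consequence.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical value-added estimates and their standard errors.
    n_teachers = 500
    va_estimate = rng.normal(0.0, 0.15, n_teachers)
    va_std_err = rng.uniform(0.05, 0.12, n_teachers)

    # Illustrative screening rule: flag teachers whose estimate is more
    # than two standard errors below the average (zero) for follow-up.
    z_score = va_estimate / va_std_err
    flagged = z_score < -2.0

    # The flag triggers more information-gathering, not a verdict.
    print(f"{flagged.sum()} of {n_teachers} teachers flagged for "
          "additional classroom observation")
    ```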

    I want to thank the many people who responded to the first post. There were three main camps.

  • Making Sense Of Florida's School And Teacher Performance Ratings

    Last week, Florida State Senate President Don Gaetz (R – Niceville) expressed his skepticism about the recently-released results of the state’s new teacher evaluation system. The senator was particularly concerned about his comparison of the ratings with schools’ “A-F” grades. He noted, “If you have a C school, 90 percent of the teachers in a C school can’t be highly effective. That doesn’t make sense.”

    There’s an important discussion to be had about the results of both the school and teacher evaluation systems, and the distributions of the ratings can definitely be part of that discussion (even if this issue is sometimes approached in a superficial manner). However, arguing that we can validate Florida’s teacher evaluations using its school grades, or vice-versa, suggests little understanding of either. Actually, given the design of both systems, finding a modest or even weak association between them would make pretty good sense.

    To understand why, consider two facts.

  • The Cartography Of High Expectations

    In October of last year, the education advocacy group ConnCAN published a report called “The Roadmap to Closing the Gap” in Connecticut. This report says that the state must close its large achievement gaps by 2020 – that is, within eight years – and they use data to argue that this goal is “both possible and achievable.”

    There is value in compiling data and disaggregating them by district and school. And ConnCAN, to its credit, doesn't use this analysis as a blatant vehicle to showcase its entire policy agenda, as advocacy organizations often do. But I am compelled to comment on this report, mostly as a springboard to a larger point about expectations.

    However, first things first – a couple of very quick points about the analysis. There are 60-70 pages of district-by-district data in this report, all of it portrayed as a “roadmap” to closing Connecticut’s achievement gap. But this analysis doesn't actually measure achievement gaps, and it won't close them.