• Are Charter Schools Better Able To Fire Low-Performing Teachers?

    Charter schools, though they comprise a remarkably diverse sector, are quite often subject to broad generalizations. Opponents, for example, promote the characterization of charters as test prep factories, though this is a sweeping claim without empirical support. Another common stereotype is that charter schools exclude students with special needs. It is often (but not always) true that charters serve disproportionately fewer students with disabilities, but the reasons for this are complicated and vary a great deal, and there is certainly no evidence for asserting a widespread campaign of exclusion.

    Of course, these types of characterizations, which are also leveled frequently at regular public schools, don't always take the form of criticism. For instance, it is an article of faith among many charter supporters that these schools, thanks to the fact that relatively few are unionized, are better able to aggressively identify and fire low-performing teachers (and, perhaps, retain high performers). Unlike many of the generalizations from both "sides," this one is a bit more amenable to empirical testing.

    A recent paper by Joshua Cowen and Marcus Winters, published in the journal Education Finance and Policy, is among the first to take a look, and some of the results might be surprising.

  • Moving From Ideology To Evidence In The Debate About Public Sector Unions

    Drawing on a half century of empirical evidence, as well as new data and analysis, a team of scholars has challenged the substance of many of the attacks on public employees and their unions – urging political leaders and the research community to take this “transformational” moment in the divisive and ideologically driven debate over the role of government and the value of public services to deepen their commitment to evidence-based policy ideas.

    These arguments were outlined in "The New Great Debate about Unionism and Collective Bargaining in U.S. State and Local Governments," published in Cornell University’s ILR Review. The authors – David Lewin (UCLA), Jeffrey Keefe (Rutgers), and Thomas Kochan (MIT) – point out that, with half a century of experience, there is now a wealth of data by which to evaluate public sector unionism and its effects.

    In that context, the authors spell out the history, arguments and empirical findings on three key issues: 1) Are public employees overpaid?; 2) Do labor-management dispute resolution procedures, which are part of many state and local government collective bargaining laws, enhance or hinder effective governance?; 3) Have unions and managers in the public sector demonstrated the ability to respond constructively to fiscal crises?

  • A Few Points About The Instability Of Value-Added Estimates

    One of the most frequent criticisms of value-added and other growth models is that they are "unstable" (or, more accurately, modestly stable). For instance, a teacher who is rated highly in one year might very well score toward the middle of the distribution – or even lower – in the next year (see here, here and here, or this accessible review).

    Some of this year-to-year variation is “real." A teacher might get better over the course of a year, or might have a personal problem that impedes their job performance. In addition, there could be changes in educational circumstances that are not captured by the models – e.g., a change in school leadership, new instructional policies, etc. However, a great deal of the recorded variation is actually due to sampling error, or idiosyncrasies in student testing performance. In other words, there is a lot of “purely statistical” imprecision in any given year, and so the scores don’t always “match up” so well between years. As a result, value-added critics, including many teachers, argue that it’s not only unfair to use such error-prone measures for any decisions, but that it’s also bad policy, since we might reward or punish teachers based on estimates that could be completely different the next year.
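    To get a feel for how much instability sampling error alone can produce, here is a minimal simulation (all of the numbers are hypothetical assumptions, not estimates from any real dataset). Teachers' "true" effectiveness is held perfectly constant; the only thing that changes from one year to the next is the random draw of students in each class.

      # Minimal sketch: stable "true" teacher effects, observed each year with
      # sampling error from a finite class. All numbers are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)

      n_teachers = 1000
      true_effect = rng.normal(0.0, 0.10, n_teachers)  # fixed across years (student SD units)
      class_size = 25                                   # students per teacher per year
      student_sd = 0.75                                 # student-level noise (SD units)

      def yearly_estimate():
          # One year's estimate = true effect + average of that year's student-level noise
          noise = rng.normal(0.0, student_sd, (n_teachers, class_size)).mean(axis=1)
          return true_effect + noise

      year1, year2 = yearly_estimate(), yearly_estimate()

      # Even with perfectly stable true effects, the estimates correlate only modestly
      print(f"Year-to-year correlation of estimates: {np.corrcoef(year1, year2)[0, 1]:.2f}")

      # Share of year-1 "top quintile" teachers who remain in the top quintile in year 2
      cut1, cut2 = np.quantile(year1, 0.8), np.quantile(year2, 0.8)
      print(f"Top-quintile persistence across years: {np.mean(year2[year1 >= cut1] >= cut2):.2f}")

    Under these assumptions, nothing about the teachers changes at all, yet the year-to-year correlation is far from perfect; real data add genuine changes in performance and circumstances on top of this purely statistical noise.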

    The concerns underlying these arguments are well-founded (and, often, casually dismissed by supporters and policymakers). At the same time, however, there are a few points about the stability of value-added (or lack thereof) that are frequently ignored or downplayed in our public discourse. All of them are pretty basic and have been noted many times elsewhere, but it might be useful to discuss them very briefly. Three in particular stand out.

  • When Growth Isn't Really Growth

    Let’s try a super-simple thought experiment with data. Suppose we have an inner-city middle school serving grades 6-8. Students in all three grades take the state exam annually (in this case, we’ll say that it’s at the very beginning of the year). Now, for the sake of this illustration, let’s avail ourselves of the magic of hypotheticals and assume away many of the sources of error that make year-to-year changes in public testing data unreliable.

    First, we’ll say that this school reports test scores instead of proficiency rates, and that the scores are comparable between grades. Second, every year, our school welcomes a new cohort of sixth graders that is the exact same size and has the exact same average score as preceding cohorts – 30 out of 100, well below the state average of 65. Third and finally, there is no mobility at this school. Every student who enters sixth grade stays there for three years, and goes to high school upon completion of eighth grade. No new students are admitted mid-year.

    Okay, here’s where it gets interesting: Suppose this school is phenomenally effective in boosting its students’ scores. In fact, each year, every single student gains 20 points. It is the highest growth rate in the state. Believe it or not, using the metrics we commonly use to judge schoolwide “growth” or "gains," this school would still look completely ineffective. Take a look at the figure below.
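    As a rough stand-in for that figure, here is a minimal sketch of the arithmetic, using exactly the numbers assumed above (entering cohorts average 30, every student gains 20 points per year, no mobility):

      # Thought experiment from above: each entering 6th-grade cohort averages 30,
      # every student gains 20 points per year, and nobody moves in or out.
      entering_score = 30
      annual_gain = 20

      for year in (1, 2, 3):
          grade6 = entering_score                     # brand-new cohort each year
          grade7 = entering_score + annual_gain       # last year's 6th graders
          grade8 = entering_score + 2 * annual_gain   # two full years of gains
          schoolwide = (grade6 + grade7 + grade8) / 3
          print(f"Year {year}: grades 6/7/8 = {grade6}/{grade7}/{grade8}, "
                f"schoolwide average = {schoolwide:.0f}")

      # The schoolwide average is 50 every single year. A cross-sectional "growth"
      # measure that compares this year's average (or proficiency rate) to last
      # year's shows zero improvement, even though every student gained 20 points.

    The flat schoolwide average is entirely a byproduct of comparing successive cohorts rather than following the same students over time.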

  • A Simple Choice Of Words Can Help Avoid Confusion About New Test Results

    In 1998, the National Institutes of Health (NIH) lowered the threshold at which people are classified as “overweight." Literally overnight, about 25 million Americans previously considered to be at a healthy weight were now overweight. If, the next day, you saw a newspaper headline that said “number of overweight Americans increases," you would probably find that a little misleading. America’s “overweight” population didn’t really increase; the definition changed.

    Fast forward to November 2012, when Kentucky became the first state to release results from new assessments that were aligned with the Common Core Standards (CCS). This led to headlines such as "Scores Drop on Kentucky’s Common Core-Aligned Tests" and "Challenges Seen as Kentucky’s Test Scores Drop As Expected." Yet these descriptions unintentionally misrepresent what happened. It's not quite accurate – or, at the very least, it's highly imprecise – to say that test scores “dropped," just as it would have been wrong to say that the number of overweight Americans increased overnight in 1998 (actually, they’re not even scores, they’re proficiency rates). Rather, the state adopted different tests, with different content, a different design, and different standards by which students are deemed “proficient."
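    To isolate just one piece of that change, the proficiency cut score, here is a purely illustrative sketch (hypothetical numbers, not Kentucky's actual results): the very same distribution of student performance yields very different "proficiency rates" depending on where the bar is set.

      # Purely illustrative: one fixed distribution of scores, two different
      # proficiency cut scores. All numbers are hypothetical, not Kentucky's.
      import numpy as np

      rng = np.random.default_rng(1)
      scores = rng.normal(60, 15, 100_000)   # the same students, the same performance

      old_cut = 50   # hypothetical old proficiency bar
      new_cut = 70   # hypothetical new, more demanding bar

      print(f"Proficient under old definition: {100 * np.mean(scores >= old_cut):.0f}%")
      print(f"Proficient under new definition: {100 * np.mean(scores >= new_cut):.0f}%")

      # Nothing about the students changed between these two lines; only the
      # definition of "proficient" did. Calling the second number a "drop in scores"
      # is like calling the 1998 redefinition of "overweight" a sudden weight gain.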

    Over the next 2-3 years, a large group of states will also release results from their new CCS-aligned tests. It is important for parents, teachers, administrators, and other stakeholders to understand what the results mean. Most of them will rely on newspapers and blogs, and so one exceedingly simple step that might help out is some polite, constructive language-policing.

  • The Test-Based Evidence On The "Florida Formula"

    ** Reprinted here in the Washington Post

    Former Florida Governor Jeb Bush has become one of the more influential education advocates in the country. He travels the nation armed with a set of core policy prescriptions, sometimes called the “Florida formula," as well as "proof" that they work. The evidence that he and his supporters present consists largely of changes in average statewide test scores – NAEP and the state exam (FCAT) – since the reforms started going into place. The basic idea is that increases in testing results are the direct result of these policies.

    Governor Bush is no doubt sincere in his effort to improve U.S. education, and, as we'll see, a few of the policies comprising the “Florida formula” have some test-based track record. However, his primary empirical argument on their behalf – the coincidence of these policies’ implementation with changes in scores and proficiency rates – though common among both “sides” of the education debate, is simply not valid. We’ve discussed why this is the case many times (see here, here and here), as have countless others, in the Florida context as well as more generally.*

    There is no need to repeat those points, except to say that they embody the most basic principles of data interpretation and causal inference. It would be wonderful if the evaluation of education policies – or of school systems’ performance more generally – were as easy as looking at raw, cross-sectional testing data. But it is not.

    Luckily, one need not rely on these crude methods. We can instead take a look at some of the rigorous research that has specifically evaluated the core reforms comprising the “Florida formula." As usual, it is a far more nuanced picture than supporters (and critics) would have you believe.

  • The Year In Research On Market-Based Education Reform: 2012 Edition

    ** Reprinted here in the Washington Post

    2012 was another busy year for market-based education reform. The rapid proliferation of charter schools continued, while states and districts went about the hard work of designing and implementing new teacher evaluations that incorporate student testing data, and, in many cases, performance pay programs to go along with them.

    As in previous years (see our 2010 and 2011 reviews), much of the research on these three “core areas” – merit pay, charter schools, and the use of value-added and other growth models in teacher evaluations – appeared rather responsive to the direction of policy making, but could not always keep up with its breakneck pace.*

    Some lag time is inevitable, not only because good research takes time, but also because there's a degree to which you have to try things before you can see how they work. Nevertheless, what we don't know about these policies far exceeds what we know, and, given the sheer scope and rapid pace of reforms over the past few years, one cannot help but get the occasional “flying blind" feeling. Moreover, as is often the case, the only unsupportable position is certainty.

  • The Sensitive Task Of Sorting Value-Added Scores

    The New Teacher Project’s (TNTP) recent report on teacher retention, called “The Irreplaceables," garnered quite a bit of media attention. In a discussion of this report, I argued, among other things, that the label “irreplaceable” is a highly exaggerated way of describing the report's definitions, which, by the way, varied between the five districts included in the analysis. In general, TNTP's definitions are better described as “probably above average in at least one subject" (and this distinction matters for how one interprets the results).

    I’d like to elaborate a bit on this issue – that is, how to categorize teachers’ growth model estimates, which one might do, for example, when incorporating them into a final evaluation score. This choice, which receives virtually no discussion in TNTP’s report, is always a judgment call to some degree, but it’s an important one for accountability policies. Many states and districts are drawing those very lines between teachers (and schools), and attaching consequences and rewards to the outcomes.

    Let's take a very quick look, using the publicly released 2010 “teacher data reports” from New York City (there are details about the data in the first footnote*). Keep in mind that these are just value-added estimates, and are thus, at best, incomplete measures of the performance of teachers (however, importantly, the discussion below is not specific to growth models; it can apply to many different types of performance measures).
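    To make that choice concrete, here is a hypothetical sketch (simulated estimates and standard errors, not the actual New York City data) comparing two plausible ways of drawing a "high performer" line: taking the top fifth of point estimates, versus requiring that an estimate be statistically distinguishable from the average.

      # Hypothetical illustration of how the rule for drawing the line changes who
      # gets labeled "high-performing." Simulated data, not the NYC teacher data reports.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 5000
      estimate = rng.normal(0.0, 0.15, n)       # value-added point estimates
      std_error = rng.uniform(0.05, 0.20, n)    # each estimate's standard error

      # Rule 1: top quintile of point estimates, ignoring imprecision
      top_quintile = estimate >= np.quantile(estimate, 0.8)

      # Rule 2: estimate more than two standard errors above the average (zero)
      distinguishable = estimate - 2 * std_error > 0

      print(f"Share labeled high-performing, top-quintile rule: {top_quintile.mean():.2f}")
      print(f"Share labeled high-performing, statistical rule:  {distinguishable.mean():.2f}")
      print(f"Top-quintile teachers who also pass the statistical rule: "
            f"{distinguishable[top_quintile].mean():.2f}")

    Neither rule is inherently correct; the point is simply that different, equally plausible lines identify noticeably different groups of teachers, which is why the choice deserves more discussion than it usually receives.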

  • Are Teachers Changing Their Minds About Education Reform?

    ** Reprinted here in the Washington Post

    In a recent Washington Post article called “Teachers leaning in favor of reforms," veteran reporter Jay Mathews puts forth an argument that one hears rather frequently – that teachers are “changing their minds," in a favorable direction, about the current wave of education reform. Among other things, Mr. Mathews cites two teacher surveys. One of them, which we discussed here, is a single-year survey that doesn't actually look at trends, and therefore cannot tell us much about shifts in teachers’ attitudes over time (it was also a voluntary online survey).

    His second source, on the other hand, is in fact a useful means of (cautiously) assessing such trends (though the article doesn't actually look at them). That is the Education Sector survey of a nationally representative sample of U.S. teachers, which was conducted in 2003, 2007 and, most recently, in 2011.

    This is a valuable resource. Like other teacher surveys, it shows that educators’ attitudes toward education policy are diverse. Opinions vary by teacher characteristics, context and, of course, by the policy being queried. Moreover, views among teachers can (and do) change over time, though, when looking at cross-sectional surveys, one must always keep in mind that observed changes (or lack thereof) might be due in part to shifts in the characteristics of the teacher workforce. There's an important distinction between changing minds and changing workers (which Jay Mathews, to his great credit, discusses in this article).*

    That said, when it comes to many of the more controversial reforms happening in the U.S., those about which teachers might be "changing their minds," the results of this particular survey suggest, if anything, that teachers’ attitudes are actually quite stable.