• The Impact Of Race To The Top Is An Open Question (But At Least It's Being Asked)

    You don’t have to look very far to find very strong opinions about Race to the Top (RTTT), the U.S. Department of Education’s (USED) stimulus-funded state-level grant program (which has recently been joined by a district-level spinoff). There are those who think it is a smashing success, while others assert that it is a dismal failure. The truth, of course, is that these claims, particularly the extreme views on either side, are little more than speculation.*

    To win the grants, states were strongly encouraged to make several different types of changes, such as the adoption of new standards, the lifting or raising of charter school caps, the installation of new data systems and the implementation of brand-new teacher evaluation systems. This means that any real evaluation of the program’s impact will take some years and will have to be multifaceted – that is, implementation and effects are certain to vary not only across each of these components, but also between states.

    In other words, the success or failure of RTTT is an empirical question, one that is still almost entirely open. But there is a silver lining here: USED is at least asking that question, in the form of a five-year, $19 million evaluation program, administered through the National Center for Education Evaluation and Regional Assistance, designed to assess the impact and implementation of various RTTT-fueled policy changes, as well as those of the controversial School Improvement Grants (SIGs).

  • Do Top Teachers Produce "A Year And A Half Of Learning?"

    One claim that gets tossed around a lot in education circles is that “the most effective teachers produce a year and a half of learning per year, while the least effective produce a half of a year of learning."

    This talking point is used all the time in advocacy materials and news articles. Its implications are pretty clear: Effective teachers can make all the difference, while ineffective teachers can do permanent damage.

    As with most prepackaged talking points circulated in education debates, the “year and a half of learning” argument, when used without qualification, is both somewhat valid and somewhat misleading. So, seeing as it comes up so often, let’s very quickly identify its origins and what it means.
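    Roughly speaking, figures like these come from converting differences in test score gains (effect sizes, in standard deviations) into grade-level equivalents by comparing them with the size of a typical one-year gain. The sketch below is a hypothetical illustration of that kind of conversion – the annual-gain figure and the effect sizes are assumptions chosen for the example, not the numbers behind the original claim.

```python
# Hypothetical illustration of how a "years of learning" figure can be derived
# from teacher effect sizes. The numbers below are assumptions for the sake of
# the example, not the figures behind the original claim.

TYPICAL_ANNUAL_GAIN_SD = 0.40  # assumed average one-year gain, in student-level SD


def years_of_learning(teacher_effect_sd: float) -> float:
    """Convert a teacher effect (in SD of student achievement) into
    'years of learning' by comparing it with the typical annual gain."""
    return 1.0 + teacher_effect_sd / TYPICAL_ANNUAL_GAIN_SD


# A teacher whose students gain 0.2 SD more than average "produces" 1.5 years
# of learning under this conversion; one whose students gain 0.2 SD less
# "produces" 0.5 years.
print(years_of_learning(0.2))   # 1.5
print(years_of_learning(-0.2))  # 0.5
```

    Under a conversion like this, an effect of plus or minus 0.2 standard deviations, set against an assumed typical gain of 0.4, is what becomes "a year and a half" versus "half a year" of learning – which is why so much depends on the assumptions baked into the conversion.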

  • Who's Afraid of Virginia's Proficiency Targets?

    The accountability provisions in Virginia’s original application for “ESEA flexibility” (or "waiver") have received a great deal of criticism (see here, here, here and here). Most of this criticism focused on the Commonwealth's expectation levels, as described in “annual measurable objectives” (AMOs) – i.e., the statewide proficiency rates that its students are expected to achieve at the completion of each of the next five years, with separate targets established for subgroups such as those defined by race (black, Hispanic, Asian, white), income (subsidized lunch eligibility), limited English proficiency (LEP), and special education.

    Last week, in response to the criticism, Virginia agreed to amend its application, though it’s not yet clear exactly how the new rates will be calculated (only that lower-performing subgroups will be expected to make faster progress).
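    For context, one common approach under ESEA flexibility was to set annual targets that cut each subgroup’s non-proficient share in half over six years, in equal increments, which automatically requires larger yearly gains from lower-performing groups. The sketch below illustrates that kind of formula; whether Virginia’s amended AMOs will use it is not yet known, and the baseline rates are made up for the example.

```python
# A minimal sketch of one common approach under ESEA flexibility: set annual
# targets that cut each subgroup's non-proficient share in half over six years,
# in equal increments. Whether Virginia's amended AMOs will follow this exact
# formula is not specified here; the starting rates below are invented.

def amo_targets(baseline_rate: float, years: int = 6) -> list[float]:
    """Return annual proficiency-rate targets that halve the gap to 100
    percent over `years` years, in equal steps."""
    gap = 100.0 - baseline_rate
    step = (gap / 2.0) / years
    return [round(baseline_rate + step * (y + 1), 1) for y in range(years)]


# Lower-performing subgroups start further from 100 percent, so their
# required annual gains are larger.
print(amo_targets(45.0))  # roughly +4.6 points per year
print(amo_targets(80.0))  # roughly +1.7 points per year
```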

    In the meantime, I think it’s useful to review a few of the main criticisms that have been made over the past week or two and what they mean. The actual table containing the AMOs is pasted below (for math only; reading AMOs will be released after this year, since there’s a new test).

  • Five Recommendations For Reporting On (Or Just Interpreting) State Test Scores

    In my experience, education reporters are smart, knowledgeable, and attentive to detail. That said, the bulk of the stories about testing data – in big cities and suburbs, this year and in previous years – could be better.

    Listen, I know it’s unreasonable to expect every reporter and editor to address every little detail when they try to write accessible copy about complicated issues, such as test data interpretation. Moreover, I fully acknowledge that some of the errors to which I object – such as calling proficiency rates “scores” – are well within tolerable limits, and that news stories need not interpret data in the same way as researchers. Nevertheless, no matter what you think about the role of test scores in our public discourse, it is in everyone’s interest that the coverage of them be reliable. And there are a few mostly easy suggestions that I think would help a great deal.

    Below are five such recommendations. They are of course not meant to be an exhaustive list, but rather a quick compilation of points, all of which I’ve discussed in previous posts, and all of which might also be useful to non-journalists.

  • How Can We Tell If Vouchers "Work"?

    Brookings recently released an evaluation of New York City’s voucher program, called the School Choice Scholarship Foundation Program (SCSF), which was implemented in the late 1990s. Voucher offers were randomized, and the authors looked at the impact of being offered/accepting them on a very important medium-term outcome – college enrollment (they were also able to follow an unusually high proportion of the original voucher recipients to check this outcome).

    The short version of the story is that, overall, the vouchers didn’t have any statistically discernible impact on college enrollment. But, as is often the case, there was some underlying variation in the results, including positive estimated impacts among African-American students, which certainly merit discussion.*
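    Because the voucher offers were randomized, the headline estimates are essentially intent-to-treat comparisons: college enrollment rates among students offered a voucher versus those not offered one. The sketch below shows that kind of comparison in its simplest form, with a basic two-proportion test; the counts are invented for illustration and are not the study’s data (the actual analysis is, of course, more involved).

```python
# A minimal sketch of the intent-to-treat comparison a lottery-based design
# allows: compare college-enrollment rates between students randomly offered
# a voucher and those not offered one. The counts below are invented for
# illustration and are not the study's data.
from math import sqrt


def itt_effect(enrolled_offered, n_offered, enrolled_control, n_control):
    """Difference in enrollment rates (offered minus control) and a
    z-statistic from a simple two-proportion test."""
    p1 = enrolled_offered / n_offered
    p0 = enrolled_control / n_control
    pooled = (enrolled_offered + enrolled_control) / (n_offered + n_control)
    se = sqrt(pooled * (1 - pooled) * (1 / n_offered + 1 / n_control))
    return p1 - p0, (p1 - p0) / se


diff, z = itt_effect(enrolled_offered=520, n_offered=1_000,
                     enrolled_control=500, n_control=1_000)
print(f"ITT effect: {diff:.3f}, z = {z:.2f}")  # small difference, not significant
```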

    Unfortunately, such nuance was not always evident in the coverage of and reaction to the report, with some voucher supporters (strangely, given the results) exclaiming that the program was an unqualified success, and some opponents questioning the affiliations of the researchers. For my part, I’d like to make a quick, not-particularly-original point about voucher studies in general: Even the best of them don’t necessarily tell us much about whether “vouchers work."

  • Jobs, Freedom And Mr. March On Washington

    Today is the 49th anniversary of the historic 1963 “March on Washington for Jobs and Freedom,” in a year that marks the centennial of the birth of Bayard Rustin, the march’s principal organizer and chief strategist, referred to at the time as "Mr. March on Washington." Here, we reprint Albert Shanker’s 1987 eulogy to Rustin, who served as a mentor to both Shanker and Rev. Martin Luther King, Jr.

    The death of Bayard Rustin last week is an incalculable loss to our country and the world. He was the last of the great giants - A. Philip Randolph, Martin Luther King, Jr. and Roy Wilkins - who brought us a grand, humane social vision and a dream of an integrated, democratic nation. I have lost a dear personal friend and inspiration.

    Bayard was a gifted leader, but he headed no mass organization. His extraordinary influence came not from numbers and money but from his intense moral, intellectual and physical courage. He was a black man, a Quaker, a one-time pacifist, a political and social dissident, a member of many and often despised minority groups, yet he always believed in the necessity of coalition politics to enable minorities to build majorities in support of lasting progress.

    He was a penetrating critic who had no use for those whose criticism merely destroyed and did not present a constructive program for change. He was an intellectual who could act and a visionary for whom no organizational detail was too trivial if it moved dreams to reality. Over his lifetime, Bayard was called everything from a dangerous revolutionary to a sellout conservative. The truth is that Bayard was a true democrat in a world of pretenders. Unlike those who lived by double standards and expediency, he remained constant to the principles and goals of democracy no matter what forces or insult were hurled against him.

  • Student Attrition Is A Core Feature Of School Choice, Not A Bug

    The issue of student attrition at KIPP and other charter schools is never far beneath the surface of our education debates. KIPP’s critics claim that these schools exclude or “counsel out” students who aren’t doing well, thus inflating student test results. Supporters contend that KIPP schools are open admission, with enrollment typically determined by lottery, and they usually cite a 2010 Mathematica report finding strong results among students in most (but not all) of 22 KIPP middle schools, as well as attrition rates that were no higher, on average, than those at the regular public schools to which they were compared.*

    As I have written elsewhere, I am persuaded that student attrition cannot explain away the gains that Mathematica found in the schools they examined (though I do think peer effects of attrition without replacement may play some role, which is a very common issue in research of this type).
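    To see the simplest way attrition without replacement can matter – the compositional point, as distinct from the peer-effect question – consider the toy example below: if lower-scoring students leave and no one replaces them, a school’s average rises even though no individual student’s score changed. (Following the same students over time, as Mathematica did, largely guards against this particular problem; the numbers below are purely hypothetical.)

```python
# A toy illustration of the compositional point: if lower-scoring students
# leave and are not replaced, the remaining cohort's average rises even though
# no individual student's score changed. Purely hypothetical numbers.
scores = [20, 35, 50, 65, 80, 95]          # baseline scores for a small cohort
print(sum(scores) / len(scores))           # 57.5

# Suppose the two lowest scorers leave and no new students enter.
remaining = [s for s in scores if s >= 50]
print(sum(remaining) / len(remaining))     # 72.5 -- higher, with no one improving
```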

    But, beyond this back-and-forth over the churn in these schools and whether it affected the results of this analysis, there’s also a confusion of sorts in discussions of student attrition in charter schools, whether at KIPP or in general. Supporters of school choice often respond to “attrition accusations” by trying to deny or downplay attrition's importance or frequency. This, it seems to me, ignores an obvious point: Within-district attrition - students changing schools, often based on “fit” or performance - is a defining feature of school choice, not an aberration.

  • A Look At The Changes To D.C.'s Teacher Evaluation System

    D.C. Public Schools (DCPS) recently announced a few significant changes to its teacher evaluation system (called IMPACT), including the alteration of its test-based components, the creation of a new performance category (“developing”), and a few tweaks to the observational component (discussed below). These changes will be effective starting this year.

    As with any new evaluation system, a period of adjustment and revision should be expected and encouraged (though it might be preferable if the first round of changes occurs during a phase-in period, prior to stakes becoming attached). Yet, despite all the attention given to the IMPACT system over the past few years, these new changes have not been discussed much beyond a few quick news articles.

    I think that’s unfortunate: DCPS is an early adopter of the “new breed” of teacher evaluation policies being rolled out across the nation, and any adjustments to IMPACT’s design – presumably based on results and feedback – could provide valuable lessons for states and districts in earlier phases of the process.

    Accordingly, I thought I would take a quick look at three of these changes.

  • College For All; Good Jobs For A Few?

    A recent study by the Center for Economic and Policy Research (CEPR) asks the question that must be on the minds of college grads, now working as coffee shop baristas: “Where Have All the Good Jobs Gone?” The answer: swallowed by corporate profits and the personal portfolios of the ultrawealthy.

    Despite the fact that the American economy has experienced “enormous” productivity gains since the late 1970s, the study finds that the share of “good jobs” (defined as those paying at least $37,000 per year, with employer-provided health insurance and an employer-sponsored retirement plan) declined from 27.4 percent in 1979 to 24.6 percent in 2010. This discouraging trend was evident even before the onset of the country’s economic crisis: in 2007, the year before the recession began, only 25 percent of college grads had “good jobs.”

    CEPR notes that the prevailing explanations for the failure to share productivity gains are “technology” and a lack of necessary skills among American workers. But if this were true, the CEPR study argues, one would expect college grads to have a higher share of good jobs than they did 30 years ago. They don’t. Instead, at every age level, today’s college grads are less likely to have a “good job” than their 1970s counterparts. This is especially surprising, the researchers note, since twice as many Americans now have advanced degrees as in the 1970s.

  • Large Political Stones, Methodological Glass Houses

    Earlier this summer, the New York City Independent Budget Office (IBO) presented findings from a longitudinal analysis of NYC student performance. That is, they followed a cohort of over 45,000 students from third grade in 2005-06 through 2009-10 (though most results are 2005-06 to 2008-09, since the state changed its definition of proficiency in 2009-10).

    The IBO then simply calculated the proportion of these students who improved, declined or stayed the same in terms of the state’s cutpoint-based categories (e.g., Level 1 ["below basic" in NCLB parlance], Level 2 [basic], Level 3 [proficient], Level 4 [advanced]), with additional breakdowns by subgroup and other variables.

    The short version of the results is that almost two-thirds of these students remained at the same performance level over this time period – for instance, students who scored at Level 2 (basic) in third grade in 2006 tended to stay at that level through 2009; students at the “proficient” level remained there, and so on. About 30 percent moved up a category over that time (e.g., from Level 1 to Level 2).
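    Mechanically, this is a straightforward tally: for each student, compare the proficiency level in the first and last year and count who moved up, moved down, or stayed put. The sketch below shows the idea with a handful of made-up records; the IBO, of course, worked from actual longitudinal student data.

```python
# A minimal sketch of the kind of tally the IBO describes: for each student,
# compare the proficiency level in the first and last year and count who moved
# up, moved down, or stayed the same. The records below are made up.
from collections import Counter

# (level in 2005-06, level in 2008-09) for a handful of hypothetical students
transitions = [(2, 2), (1, 2), (3, 3), (2, 2), (4, 3), (2, 3), (3, 3), (1, 1)]

tally = Counter(
    "up" if last > first else "down" if last < first else "same"
    for first, last in transitions
)
total = len(transitions)
for outcome in ("up", "same", "down"):
    print(f"{outcome}: {tally[outcome] / total:.0%}")
```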

    The response from the NYC Department of Education (NYCDOE) was somewhat remarkable. It takes a minute to explain why, so bear with me.