Accountability

  • A Case Against Assigning Single Ratings To Schools

    Written on November 26, 2012

The new breed of school rating systems, some of which are still getting off the ground, will co-exist with federal proficiency targets in many states, and they are (or will be) used for a variety of purposes, including closure, resource allocation and informing parents and the public (see our posts on the systems in IN, FL, OH, CO and NYC).*

    The approach that most states are using, in part due to the "ESEA flexibility" guidelines set by the U.S. Department of Education, is to combine different types of measures, often very crudely, into a single grade or categorical rating for each school. Administrators and media coverage usually characterize these ratings as measures of school performance - low-rated schools are called "low performing," while those receiving top ratings are characterized as "high performing." That's not accurate - or, at best, it's only partially true.

    Some of the indicators that comprise the ratings, such as proficiency rates, are best interpreted as (imperfectly) describing student performance on tests, whereas other measures, such as growth model estimates, make some attempt to isolate schools’ contribution to that performance. Both might have a role to play in accountability systems, but they're more or less appropriate depending on how you’re trying to use them.

    So, here’s my question: Why do we insist on throwing them all together into a single rating for each school? To illustrate why I think this question needs to be addressed, let’s take a quick look at four highly-simplified situations in which one might use ratings.

    READ MORE
  • When You Hear Claims That Policies Are Working, Read The Fine Print

    Written on November 19, 2012

    When I point out that raw changes in state proficiency rates or NAEP scores are not valid evidence that a policy or set of policies is “working," I often get the following response: “Oh Matt, we can’t have a randomized trial or peer-reviewed article for everything. We have to make decisions and conclusions based on imperfect information sometimes."

    This statement is obviously true. In this case, however, it's also a straw man. There’s a huge middle ground between the highest-quality research and the kind of speculation that often drives our education debate. I’m not saying we always need experiments or highly complex analyses to guide policy decisions (though, in general, these are always preferred and sometimes required). The point, rather, is that we shouldn’t draw conclusions based on evidence that doesn't support those conclusions.

This, unfortunately, happens all the time. In fact, many of the more prominent advocates in education today make their cases based largely on raw changes in outcomes immediately after (or sometimes even before) their preferred policies were implemented (also see here, here, here, here, here, and here). In order to illustrate the monumental assumptions upon which these and similar claims ride, I thought it might be fun to break them down quickly, in a highly simplified fashion. So, here are the four “requirements” that must be met in order to attribute raw test score changes to a specific policy (note that most of this can be applied not only to claims that policies are working, but also to claims that they're not working because scores or rates are flat):

    READ MORE
  • The Structural Curve In Indiana's New School Grading System

    Written on November 1, 2012

The State of Indiana has received a great deal of attention for its education reform efforts, and it recently announced the details, as well as the first round of results, of its new "A-F" school grading system. As in many other states, the grades for elementary and middle schools are based entirely on math and reading test scores.

    It is probably the most rudimentary scoring system I've seen yet - almost painfully so. Such simplicity carries both potential advantages (easier for stakeholders to understand) and disadvantages (school performance is complex and not always amenable to rudimentary calculation).

In addition, unlike the other systems that I have reviewed here, this one does not rely on explicit "weights" (i.e., specific percentages are not assigned to each component). Rather, there's a rubric that combines absolute performance (passage rates) and proportions drawn from growth models (a few other states use similar schemes, but I haven't reviewed any of them). A rough, hypothetical sketch of this rubric-style approach appears after this excerpt.

    On the whole, though, it's a somewhat simplistic variation on the general approach most other states are taking -- but with a few twists.

    READ MORE
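
    To make the rubric-versus-weights contrast concrete, here is a minimal, hypothetical sketch: instead of assigning percentage weights to components, a base grade is derived from the passage rate and then adjusted using growth-model results. The cut points, growth thresholds, and one-letter adjustment below are invented for illustration and are not Indiana's actual rules.

    ```python
    # Hypothetical rubric-style grading: assign a base grade from the passage
    # rate, then bump it up or down using growth-model results. All cut points
    # and thresholds are invented for illustration.

    GRADES = ["F", "D", "C", "B", "A"]

    def base_grade(passage_rate):
        """Preliminary grade from the percentage of students passing."""
        if passage_rate >= 90:
            return "A"
        if passage_rate >= 80:
            return "B"
        if passage_rate >= 70:
            return "C"
        if passage_rate >= 60:
            return "D"
        return "F"

    def adjust_for_growth(grade, high_growth_share, low_growth_share):
        """Move the grade one letter up (or down) if a large share of students
        show high (or low) growth; a rubric, not a weighted average."""
        i = GRADES.index(grade)
        if high_growth_share >= 0.40:
            i = min(i + 1, len(GRADES) - 1)
        elif low_growth_share >= 0.40:
            i = max(i - 1, 0)
        return GRADES[i]

    # Example: a 75 percent passage rate (C) plus widespread high growth -> B.
    print(adjust_for_growth(base_grade(75), high_growth_share=0.45, low_growth_share=0.10))
    ```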
  • The Stability And Fairness Of New York City's School Ratings

    Written on October 8, 2012

New York City has just released the latest round of results from its school rating system (the ratings are called "progress reports"). The system relies considerably more on student growth (60 out of 100 points) than on absolute performance (25 points), and there are efforts to partially adjust most of the measures via peer group comparisons.*

All of this indicates that, compared with many other systems around the U.S., the city's ratings are more focused on measuring schools' test-based performance than on simply describing students' absolute performance.

The ratings are high-stakes. Schools receiving low grades – a D or F in any given year, or a C for three consecutive years – enter a review process by which they might be closed. The number of schools meeting these criteria jumped considerably this year. (A rough, hypothetical sketch of this kind of composite score and review rule appears after this excerpt.)

    There is plenty of controversy to go around about the NYC ratings, much of it pertaining to two important features of the system. They’re worth discussing briefly, as they are also applicable to systems in other states.

    READ MORE
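
    As a rough illustration of the excerpt above (not the NYCDOE's actual formula), the sketch below adds already-scaled component scores into a 100-point composite (60 points for growth, 25 for absolute performance, and, by assumption, 15 for the system's remaining measures) and applies the stated review rule: a D or F in the most recent year, or a C for three straight years.

    ```python
    # A hypothetical sketch of a point-based composite and the review rule
    # described above. Component scores are assumed to be pre-scaled to their
    # maximums; the 15-point "other" bucket is an assumption to reach 100.

    def composite_score(growth_points, performance_points, other_points):
        """Sum components capped at 60, 25, and 15 points respectively."""
        return (min(growth_points, 60)
                + min(performance_points, 25)
                + min(other_points, 15))

    def enters_review(grade_history):
        """True if the latest grade is a D or F, or the last three grades
        were all C (the criteria quoted in the excerpt)."""
        if not grade_history:
            return False
        if grade_history[-1] in ("D", "F"):
            return True
        return len(grade_history) >= 3 and all(g == "C" for g in grade_history[-3:])

    # Examples (made-up numbers and grades):
    print(composite_score(growth_points=42, performance_points=18, other_points=10))  # 70
    print(enters_review(["B", "C", "C", "C"]))  # True
    print(enters_review(["A", "B", "C"]))       # False
    ```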
  • Does It Matter How We Measure Schools' Test-Based Performance?

    Written on September 19, 2012

    In education policy debates, we like the "big picture." We love to say things like “hold schools accountable” and “set high expectations." Much less frequent are substantive discussions about the details of accountability systems, but it’s these details that make or break policy. The technical specs just aren’t that sexy. But even the best ideas with the sexiest catchphrases won’t improve things a bit unless they’re designed and executed well.

    In this vein, I want to recommend a very interesting CALDER working paper by Mark Ehlert, Cory Koedel, Eric Parsons and Michael Podgursky. The paper takes a quick look at one of these extremely important, yet frequently under-discussed details in school (and teacher) accountability systems: The choice of growth model.

    When value-added or other growth models come up in our debates, they’re usually discussed en masse, as if they’re all the same. They’re not. It's well-known (though perhaps overstated) that different models can, in many cases, lead to different conclusions for the same school or teacher. This paper, which focuses on school-level models but might easily be extended to teacher evaluations as well, helps illustrate this point in a policy-relevant manner.

    READ MORE
  • Who's Afraid of Virginia's Proficiency Targets?

    Written on September 5, 2012

    The accountability provisions in Virginia’s original application for “ESEA flexibility” (or "waiver") have received a great deal of criticism (see here, here, here and here). Most of this criticism focused on the Commonwealth's expectation levels, as described in “annual measurable objectives” (AMOs) – i.e., the statewide proficiency rates that its students are expected to achieve at the completion of each of the next five years, with separate targets established for subgroups such as those defined by race (black, Hispanic, Asian, white), income (subsidized lunch eligibility), limited English proficiency (LEP), and special education.

Last week, in response to the criticism, Virginia agreed to amend its application, though it's not yet clear exactly how the new rates will be calculated (only that lower-performing subgroups will be expected to make faster progress).

    In the meantime, I think it’s useful to review a few of the main criticisms that have been made over the past week or two and what they mean. The actual table containing the AMOs is pasted below (for math only; reading AMOs will be released after this year, since there’s a new test).

    READ MORE
  • Five Recommendations For Reporting On (Or Just Interpreting) State Test Scores

    Written on September 4, 2012

    From my experience, education reporters are smart, knowledgeable, and attentive to detail. That said, the bulk of the stories about testing data – in big cities and suburbs, in this year and in previous years – could be better.

    Listen, I know it’s unreasonable to expect every reporter and editor to address every little detail when they try to write accessible copy about complicated issues, such as test data interpretation. Moreover, I fully acknowledge that some of the errors to which I object – such as calling proficiency rates “scores” – are well within tolerable limits, and that news stories need not interpret data in the same way as researchers. Nevertheless, no matter what you think about the role of test scores in our public discourse, it is in everyone’s interest that the coverage of them be reliable. And there are a few mostly easy suggestions that I think would help a great deal.

    Below are five such recommendations. They are of course not meant to be an exhaustive list, but rather a quick compilation of points, all of which I’ve discussed in previous posts, and all of which might also be useful to non-journalists.

    READ MORE
  • Large Political Stones, Methodological Glass Houses

    Written on August 20, 2012

    Earlier this summer, the New York City Independent Budget Office (IBO) presented findings from a longitudinal analysis of NYC student performance. That is, they followed a cohort of over 45,000 students from third grade in 2005-06 through 2009-10 (though most results are 2005-06 to 2008-09, since the state changed its definition of proficiency in 2009-10).

The IBO then simply calculated the proportion of these students who improved, declined or stayed the same in terms of the state's cutpoint-based categories (e.g., Level 1 ["below basic" in NCLB parlance], Level 2 [basic], Level 3 [proficient], Level 4 [advanced]), with additional breakdowns by subgroup and other variables. (A stylized sketch of this kind of tabulation appears after this excerpt.)

    The short version of the results is that almost two-thirds of these students remained constant in their performance level over this time period – for instance, students who scored at Level 2 (basic) in third grade in 2006 tended to stay at that level through 2009; students at the “proficient” level remained there, and so on. About 30 percent increased a category over that time (e.g., going from Level 1 to Level 2).

    The response from the NYC Department of Education (NYCDOE) was somewhat remarkable. It takes a minute to explain why, so bear with me.

    READ MORE
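
    For readers who want to see the mechanics, here is a minimal sketch of the kind of tabulation described in the excerpt above: each student has a performance level (1-4) in the first and last year, and we count the share who moved up, moved down, or stayed at the same level. The student records and level assignments below are invented for illustration and do not reproduce the IBO's data.

    ```python
    # A stylized version of the longitudinal tabulation described above.
    from collections import Counter

    # Hypothetical records: (student id, starting level, ending level).
    students = [
        ("student_a", 2, 2),
        ("student_b", 1, 2),
        ("student_c", 3, 3),
        ("student_d", 2, 1),
        ("student_e", 3, 4),
        ("student_f", 2, 2),
    ]

    def outcome(start_level, end_level):
        """Classify a student's change in performance level."""
        if end_level > start_level:
            return "improved"
        if end_level < start_level:
            return "declined"
        return "stayed the same"

    counts = Counter(outcome(start, end) for _, start, end in students)
    total = len(students)
    for category in ("improved", "stayed the same", "declined"):
        share = counts.get(category, 0) / total
        print(f"{category}: {share:.0%}")
    ```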
  • The Louisiana Voucher Accountability Sweepstakes

    Written on August 9, 2012

    The situation with vouchers in Louisiana is obviously quite complicated, and there are strong opinions on both sides of the issue, but I’d like to comment quickly on the new “accountability” provision. It's a great example of how, too often, people focus on the concept of accountability and ignore how it is actually implemented in policy.

Quick and dirty background: Louisiana will be allowing students to receive vouchers (tuition to attend private schools) if their public schools are sufficiently low-performing, according to their "school performance score" (SPS). As discussed here, the SPS is based primarily on how highly students score, rather than whether they’re making progress, and thus tells you relatively little about the actual effectiveness of schools per se. For instance, the vouchers will be awarded mostly to schools serving larger proportions of disadvantaged students, even if many of those schools are producing large gains (though such progress cannot be assessed adequately using year-to-year changes in the SPS, which, due in part to its reliance on cross-sectional proficiency rates, are extremely volatile).

    Now, here's where things get really messy: In an attempt to demonstrate that they are holding the voucher-accepting private schools accountable, Louisiana officials have decided that they will make these private schools ineligible for the program if their performance is too low (after at least two years of participation in the program). That might be a good idea if the state measured school performance in a defensible manner. It doesn't.

    READ MORE
  • The Unfortunate Truth About This Year's NYC Charter School Test Results

    Written on July 23, 2012

There have now been several stories in the New York news media about New York City’s charter schools’ “gains” on this year’s state tests (see here, here, here, here and here). All of them trumpeted the 3-7 percentage point increase in proficiency among the city’s charter students, compared with the 2-3 point increase among their counterparts in regular public schools. The consensus: Charters performed fantastically well this year.

    In fact, the NY Daily News asserted that the "clear lesson" from the data is that "public school administrators must gain the flexibility enjoyed by charter leaders," and "adopt [their] single-minded focus on achievement." For his part, Mayor Michael Bloomberg claimed that the scores are evidence that the city should expand its charter sector.

    All of this reflects a fundamental misunderstanding of how to interpret testing data, one that is frankly a little frightening to find among experienced reporters and elected officials.

    READ MORE

