The Accessibility Conundrum in Accountability Systems

One of the major considerations in designing accountability policy, whether in education or other fields, is what you might call accessibility. That is, both the indicators used to construct measures and how they are calculated should be reasonably easy for stakeholders to understand, particularly if the measures are used in high-stakes decisions.

This important consideration also generates great tension. For example, complaints that Florida’s school rating system is “too complicated” have prompted legislators to make changes over the years. Similarly, other tools – such as procedures for scoring and establishing cut points for standardized tests, and particularly the use of value-added models – are routinely criticized as too complex for educators and other stakeholders to understand. There is an implicit argument underlying these complaints: If people can’t understand a measure, it should not be used to hold them accountable for their work. Supporters of using these complex accountability measures, on the other hand, contend that it’s more important for the measures to be “accurate” than easy to understand.

I personally am a bit torn. Given the extreme importance of accountability systems’ credibility among those subject to them, not to mention the fact that performance evaluations must transmit accessible and useful information in order to generate improvements, there is no doubt that overly complex measures can pose a serious problem for accountability systems. It might be difficult for practitioners to adjust their practice based on a measure if they don't understand that measure, and/or if they are unconvinced that the measure is transmitting meaningful information. And yet, the fact remains that measuring the performance of schools and individuals is extremely difficult, and simplistic measures are, more often than not, inadequate for these purposes.

Changes in proficiency rates are a good example of this tension. This measure could not be easier to understand – every year, a certain percentage of tested students in a school or district scores above the “proficient” line, and these percentages can increase, decrease, or stay flat between years. Most everyone understands how this works, and that is a tremendous asset. It provides a “common language” of test-based effectiveness, one that is accessible even to those who don’t spend their days working in the education field.
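
To see just how simple the measure is, here is a minimal sketch, with entirely hypothetical scores and cut point, of how a year-to-year proficiency rate change is computed:

```python
# Minimal sketch of how a proficiency rate change is computed,
# using made-up scale scores for a hypothetical school.

scores_2022 = [214, 230, 198, 245, 251, 187, 222, 240]  # hypothetical scores
scores_2023 = [225, 205, 236, 249, 192, 228, 233, 241]  # a DIFFERENT cohort of students

PROFICIENT_CUTOFF = 220  # hypothetical "proficient" cut score

def proficiency_rate(scores, cutoff=PROFICIENT_CUTOFF):
    """Share of tested students scoring at or above the cut score."""
    return sum(s >= cutoff for s in scores) / len(scores)

rate_2022 = proficiency_rate(scores_2022)  # 5 of 8 students -> 0.625
rate_2023 = proficiency_rate(scores_2023)  # 6 of 8 students -> 0.750

# The accountability "measure" is a single subtraction. Note that it
# compares two different groups of students, not the same students over time.
change = rate_2023 - rate_2022
print(f"Proficiency rate change: {change:+.1%}")  # +12.5%
```

Note that the two rates come from different groups of tested students; the subtraction says nothing about how any individual student grew.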

The problem is that the rate changes are awful measures. For several reasons, which I have discussed elsewhere and will not repeat, they tell you virtually nothing about actual school performance in any given year. In fact, the very simplicity that makes them so easy to understand is a symptom of why they fall short. Any attempt to isolate schools’ contribution to student growth requires one to follow the same students over time (which typical proficiency rate changes, being comparisons of different cohorts, do not do), and to control for factors, such as prior achievement and other student characteristics, that are known to influence testing outcomes but are not under schools’ control.
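
By way of contrast, the sketch below illustrates, in deliberately toy form, the basic logic of a growth-oriented adjustment: follow the same students across years and net out prior achievement before attributing anything to the school. The simulated data, the single prior-score control, and the residual-averaging step are all illustrative assumptions; real value-added models use richer controls and far more careful statistical machinery.

```python
# Toy illustration of the logic behind growth/value-added models:
# follow the SAME students over time and adjust for prior achievement.
# A deliberately simplified sketch, not any state's actual model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: prior- and current-year scores for students in two schools.
n = 200
school = rng.integers(0, 2, n)          # 0 = School A, 1 = School B
prior = rng.normal(220, 25, n)          # prior-year scale scores
# Simulate current scores: persistence of prior achievement, a small
# assumed "true" effect for School B, and noise.
current = 30 + 0.9 * prior + 4.0 * school + rng.normal(0, 10, n)

# Step 1: regress current scores on prior scores (the "control").
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# Step 2: a school's estimated effect is its students' average residual,
# i.e., how far they landed above or below the prediction from prior scores.
residuals = current - X @ beta
for s, name in [(0, "School A"), (1, "School B")]:
    print(f"{name}: mean residual = {residuals[school == s].mean():+.2f}")
```

Even this stripped-down version requires a regression rather than a subtraction, and that gap is the heart of the accessibility tension.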

In other words, any measure that could provide an actual signal about test-based performance, however imperfect, must necessarily be complicated. You just can’t approximate school performance using subtraction alone. Now, granted, no measure is ever perfectly accurate, and there’s much more to accountability systems than using the most sophisticated indicators, but it makes no sense to settle for measures that are so obviously not up to the job merely because they are easier for people to understand.

(Important side note: There is a flip side to this situation, namely that some advocates of complex performance measures, such as value-added, tend to understate their complexity, especially when it comes to the need to interpret the estimates appropriately, and with caution. And supporting the misuse of a measure that one doesn't understand is no more defensible than rejecting that measure solely because one doesn't understand it.)

To be clear, I am most certainly not arguing that accessibility should be ignored. As stated above, it matters. And it matters a lot. Many states and districts need to do a much better job of explaining their measures and systems to educators and other stakeholders (including parents and taxpayers). They need to pay closer attention to the details of how measures are used, and to the decisions based upon them. And there are situations in which measures and systems are unnecessarily complex, and/or in which increased complexity is not worth the explanatory power it offers.

What I am arguing, instead, is that sacrificing complexity for the sake of accessibility can very easily go too far, and that it often does. For instance, most teachers would require extensive training to understand the technical workings of a value-added model. But a general overview of how the models work is hardly beyond the grasp of most professional educators. And, in turn, educators have much to teach the designers of accountability systems about how to strengthen teaching and learning, not just measure them.

In short, the design and implementation of a useful accountability system, whether for schools or teachers, is necessarily difficult and complex, and balancing accessibility with “accuracy” will require more cooperation than has been evident in most places. But it seems to me that investing time and resources in helping educators understand complex indicators, and prioritizing their ideas about how those indicators should be used, is better than continuing to employ bad measures to supply information that they cannot possibly provide.

- Matt Di Carlo
