The Deafening Silence Of Unstated Assumptions
Here’s a thought experiment. Let’s say we were magically granted the ability to perfectly design our public education system. In other words, we were somehow given the knowledge of the most effective policies and how to implement them, and we put everything in place. How quickly would schools improve? Where would we be after 20 years of having the best possible policies in place? What about after 50 years?
I suspect there is much disagreement here, and that answers would vary widely. But, since there is a tendency in education policy to shy away from even talking realistically about expectations, we may never really know. We sometimes operate as though we expect immediate gratification: quick gains, every single year. When schools or districts don't achieve gains, even over a short period of time, they are liable to be labeled failures.
Without question, we need to set and maintain high expectations, and no school or district should ever cease trying to improve. Yet, in the context of serious policy discussions, the failure to even discuss expectations in a realistic manner hinders our ability to interpret and talk about evidence, as it often means that we have no productive standard by which to judge our progress or the effects of the policies we try.
For example, high-quality evaluations of charter schools, when they reveal any differences at all, tend to show that effects, whether positive or negative, are rather small. But maybe those small effects are not so small in the bigger picture, one which acknowledges that any one type of education policy should not be expected to generate huge, rapid gains.*
Similarly, everyone had an opinion about the latest round of NAEP results. Many said that the increases in math and reading scores were evidence of success, while others cried failure. These divergent views reflect unstated, underlying disagreements about what constitutes sufficiently rapid progress. As with charter schools and other interventions, if you think that we should expect large, quick increases in test score outcomes, then you’re likely to view the NAEP results poorly. If, on the other hand, you believe that educational improvement on such a large scale is a long, slow march, you might think NAEP shows that we’re back on track.
In other words, you can’t really interpret the meaning of any one piece of evidence if you don’t have a handle on what to expect. And you can’t really have a productive discussion if everyone is operating on different, unstated premises as to how that evidence should be interpreted. This goes not only for test scores, but for any other metric.
Nobody knows the “correct” standard – we can’t say for sure what would happen if we had our magic wand and did everything perfectly. But that doesn’t mean that we should simply assume that improvement must be dramatic in order to be considered meaningful. Put simply, until we start having a rational discussion about where we want to go and when we should expect to arrive, it will be very difficult to assess whether we're making good time.
- Matt Di Carlo
* Another take on how to interpret effect sizes is that even small changes might generate large improvements in outcomes such as economic growth.