A Research-Based Case For Florida's Education Reforms

Advocates of the so-called “Florida Formula,” a package of market-based reforms enacted throughout the 1990s and 2000s, some of which are now spreading rapidly in other states, traveled to Michigan this week to make their case to the state’s lawmakers, with particular emphasis on Florida’s school grading system. In addition to arguments about accessibility and parental involvement, their empirical (i.e., test-based) evidence consisted largely of the standard, invalid claim that cross-sectional NAEP increases prove the reforms’ effectiveness, along with a bonus appearance of the argument that, since Florida started grading schools, the grades have improved, even though this is largely (and demonstrably) a result of changes in the grading formula.

As mentioned in a previous post, I continue to be perplexed at advocates’ insistence on using this "evidence," even though there is a decent amount of actual rigorous policy research available, much of it positive.

So, I thought it would be fun, though slightly strange, for me to try on my market-based reformer cap and see what it would look like if this kind of testimony about the Florida reforms were actually research-based (at least with respect to the test-based evidence). Here’s a very rough outline of what I came up with:

  • One of the big conundrums in social policy is that lawmakers and the public demand evidence that policies work before supporting them, but also that you have to try policies before you know whether they work. Florida was an early adopter of some of the education reforms spreading across the nation today. As a result, its reforms have been around long enough to be subject to some strong policy evaluation, which might inform the rest of the nation, including your state.
  • The evidence thus far, though tentative, is encouraging. Specifically:

1. There is some indication (also here, here and here) that the A-F grading system, as part of a larger accountability system, led to modest but statistically discernible improvements in the performance of the small number of schools receiving the lowest grades. These improvements do not appear to be entirely the result of undesirable “gaming” behaviors, such as teaching to the test (but, as is often the case in test-based accountability, such behaviors may have played some role). This is consistent with evidence outside of Florida, and at the national level;

2. Florida’s charter schools, like those in most other states, have not been shown to be superior to comparable regular public schools. Their estimated test-based impacts vary widely by school (also see here and here), though one study suggests that charter high schools had a positive impact on graduation rates, and there is some research suggesting that the state’s tuition tax credit (“neovoucher”) program led to small but noticeable improvements in nearby public schools’ performance. This too squares with evidence elsewhere;

3. It is still very early, but the first pieces of evidence about the impact of Florida’s policy of retaining third graders who do not score sufficiently high on reading tests suggest that this policy, too, may be having a positive impact;

  • Of course, policymakers considering these interventions for their own states should bear in mind that the estimated impacts of these policies may very well be different when they are tried outside of Florida.
  • In addition, the estimated effects tend to be quite modest. There are no silver bullets. Implementation is difficult and requires investment of financial and human resources, and real improvement is slow and must be sustained.
  • That said, on the whole, the policies for which evidence is available have shown promise. We believe that these reforms, done correctly, can also work in other states.

Reading this outline over, I must confess that I’m skeptical it would be received enthusiastically. No doubt state and local leaders live under a constant barrage of policy advocates offering them miracle results for minimal costs, and some of them might not be particularly impressed by what amounts to a pseudo-academic literature review followed by a promise of slow, rather modest benefits.

I do, however, believe that this is the kind of honest presentation they should be hearing, and that advocates on all “sides” of the education policy debate, if they’re fortunate enough to be asked to play this role, should be providing.

- Matt Di Carlo 

How are the reforms you mentioned (other than vouchers) "market-based"?

What about the class size amendment that passed in 2002? Why do they never mention it, and can’t we attribute any improvement to that?

Jeb Bush and Gary Chartrand, the chair of the state board of ed, regularly attack it, but its implementation roughly corresponds with the improvements they tout.

Good point by Chris: this in-house piece ignores (like Jeb Bush) the class size mandates. Any evaluation that does not take that into consideration (Di Carlo ignores it too) is suspect and invalid. You cannot claim the reason “A” is working is because of “B” if there is a “C” to consider. This is basic. Class size ain’t no market based reform. Also, playing into the game that these test scores actually matter is problematic. Very odd post.

Part Two: Not that I think test scores matter, but to play Di Carlo’s game: in the last 5-6 years we have seen widening gaps and numbers below national public school averages (NAEP) in FL. With graduation rates for Black males below Mississippi’s and widening gaps between poor and middle class students on many levels, Di Carlo misses the mark. FL rigs cut scores to get the results they want. Di Carlo also makes a fanatical leap from 3rd grade retention to “positive” results. Huh? In FL, 25% of students graduate with waivers (they don’t have to meet many requirements) and can buy credit for about $300 a semester. Apparently unfamiliar with how things really work in the state, Di Carlo makes mistakes and assumptions that result in a failed attempt.

The grading system is really a proxy for what schools actually did in response to their grades. So, it is misleading to tell states this grading approach is something that can work without also making clear what effective policies it produced. If the Urban Institute study is right, for example, this is where the focus should go: “We find that schools receiving an ‘F’ grade are more likely to focus on low-performing students, lengthen the amount of time devoted to instruction, adopt different ways to organize the day and learning environment of the students and teachers, increase resources available to teachers, and decrease principal control...” And of course, yes, yes, yes, this is all an associational, not a causal, connection. But, hey, if these associational observations about results are worth making, then it is also noteworthy (as the Institute study acknowledges) that these “effective” policies also cost money. Grading systems by themselves are probably a lot cheaper.