Senate's Harkin-Enzi ESEA Plan Is A Step Sideways

Our guest authors today are Morgan Polikoff and Andrew McEachin. Morgan is Assistant Professor in the Rossier School of Education at the University of Southern California. Andrew is an Institute of Education Sciences postdoctoral fellow at the University of Virginia.

By now, it is painfully clear that Congress will not be revising the Elementary and Secondary Education Act (ESEA) before the November elections. And with the new ESEA waivers, who knows when the revision will happen? Congress, however, seems to have some ideas about what next-generation accountability should look like, so we thought it might be useful to examine one leading proposal and see what the likely results would be.

The proposal we refer to is the Harkin-Enzi plan, available here for review. Briefly, the plan identifies 15 percent of schools as targets of intervention, sorted into three groups. First are the persistently low-achieving schools (PLAS); these are the 5 percent of schools that are the lowest performers, based on achievement level or a combination of level and growth. Next are the achievement gap schools (AGS); these are the 5 percent of schools with the largest achievement gaps between any two subgroups. Last are the lowest subgroup achievement schools (LSAS); these are the 5 percent of schools with the lowest achievement for any significant subgroup.
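To make the mechanics concrete, here is a minimal sketch of the classification logic, assuming a simple school-level dataset. The column names, example values, and the use of straight percentile cuts are our illustrative assumptions; the bill itself leaves the precise measures of level, growth, and gaps to be specified.

```python
import pandas as pd

# Hypothetical school-level data; column names and values are illustrative.
schools = pd.DataFrame({
    "school_id": range(1, 11),
    "achievement": [0.32, 0.55, 0.41, 0.78, 0.29, 0.63, 0.47, 0.84, 0.36, 0.71],
    "largest_gap": [0.10, 0.25, 0.18, 0.12, 0.30, 0.08, 0.22, 0.05, 0.27, 0.15],
    "lowest_subgroup": [0.20, 0.45, 0.30, 0.70, 0.15, 0.55, 0.35, 0.75, 0.25, 0.60],
})

# PLAS: bottom 5 percent on overall achievement (level, or level plus growth).
schools["PLAS"] = schools["achievement"] <= schools["achievement"].quantile(0.05)

# AGS: top 5 percent on the largest gap between any two subgroups.
schools["AGS"] = schools["largest_gap"] >= schools["largest_gap"].quantile(0.95)

# LSAS: bottom 5 percent on the lowest significant subgroup's achievement.
schools["LSAS"] = schools["lowest_subgroup"] <= schools["lowest_subgroup"].quantile(0.05)

print(schools[schools[["PLAS", "AGS", "LSAS"]].any(axis=1)])
```

Note that the three lists are computed independently, so a school can in principle land in more than one category – a detail that matters when tallying how many distinct schools are identified.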

The goal of this proposal is both to reduce the number of schools identified as low-performing and to create a new operational definition of consistently low-performing schools. With that in mind, we wanted to know what kinds of schools these three criteria would flag and how stable the classifications would be over time.

Is California's "API Growth" A Good Measure Of School Performance?

California calls its “Academic Performance Index” (API) the “cornerstone” of its accountability system. The API is calculated as a weighted average of the proportions of students meeting proficiency and other cutoffs on the state exams.
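As a rough illustration of how an index of this kind works, the sketch below averages performance-band points over the shares of students in each band. The band labels and point values here are illustrative assumptions, not California's official parameters, which are set in regulation and vary across tests and grade spans.

```python
# Minimal sketch of an API-style index: a weighted average over the shares
# of students in each performance band. Point values are assumed for
# illustration only.
BAND_POINTS = {
    "advanced": 1000,
    "proficient": 875,
    "basic": 700,
    "below_basic": 500,
    "far_below_basic": 200,
}

def api_score(band_shares):
    """band_shares maps each band to the proportion of students in it (sums to 1)."""
    return sum(BAND_POINTS[band] * share for band, share in band_shares.items())

# Example: a school with most students at basic or above.
print(api_score({"advanced": 0.25, "proficient": 0.35, "basic": 0.25,
                 "below_basic": 0.10, "far_below_basic": 0.05}))  # -> 791.25
```

Because the index collapses everything into shares of students clearing cutoffs, two schools with very different score distributions can receive identical APIs.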

It is a high-stakes measure. “Growth” in schools’ API scores determines whether they meet federal AYP requirements, and it is also important in the state’s own accountability regime. In addition, toward the middle of last month, the California Charter Schools Association called for the closing of ten charter schools based in part on their (three-year) API “growth” rates.

Putting aside the question of whether the API is a valid measure of student performance in any given year, using year-to-year changes in API scores in high-stakes decisions is highly problematic. The API is a cross-sectional measure – it doesn’t follow individual students over time – so interpreting a change as progress requires assuming that year-to-year movement in a school’s index does not simply reflect shifts in the demographics or other characteristics of the cohorts of students taking the tests. Moreover, even if the changes in API scores do in fact reflect “real” progress, they do not account for all the factors outside of schools’ control that might affect performance, such as funding and differences in students’ backgrounds (see here and here, or this Mathematica paper, for more on these issues).

Better data are needed to test these assumptions directly, but we can get some idea of whether changes in schools’ API scores are good measures of school performance by examining how stable those changes are over time.
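To see why stability matters, consider the toy simulation below: every school’s “true” quality is held fixed, yet the API still “grows” and “declines” from year to year because each year’s score is computed from a new cohort. All magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 schools observed for three years. True quality never changes;
# each year's API adds cohort-composition noise on top of it.
n_schools, n_years = 1000, 3
true_quality = rng.normal(750, 60, size=n_schools)
api = true_quality[:, None] + rng.normal(0, 20, size=(n_schools, n_years))

growth_1 = api[:, 1] - api[:, 0]  # "growth" from year 1 to year 2
growth_2 = api[:, 2] - api[:, 1]  # "growth" from year 2 to year 3

# If growth captured stable school performance, consecutive changes would
# correlate positively; with pure cohort noise they correlate near -0.5.
print(np.corrcoef(growth_1, growth_2)[0, 1])
```

Under these assumptions, a school that “grows” one year tends to “decline” the next – exactly the kind of pattern a stability check on real API data can detect.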