Are There Low-Performing Schools With High-Performing Students?

I write often (probably too often) about the difference between measures of school performance and student performance, usually in the context of school rating systems. The basic idea is that schools cannot control the students they serve, and so absolute performance measures, such as proficiency rates, are telling you more about the students a school or district serves than about how effective it is in improving outcomes (which is better captured by growth-oriented indicators).

Recently, I was asked a simple question: Can a school with very high absolute performance levels ever actually be considered a “bad school”?

This is a good question.

For one thing, of course, tests and graduation rates are imperfect measures (especially given how they're currently used), and so they may be missing a lot. This is certainly the case, but let’s put it aside for the purposes of this discussion.

Say we have an elementary school, located in an affluent neighborhood, whose students score very highly, on average. These kids entered the school way ahead of their peers in poorer areas. During their 5-6 years at this school, each cohort of students maintains a very high performance level, but it’s mostly because of where they started out – they actually make progress that is far lower than that of similar students in comparable schools. Due to the design of most states’ rating systems, this school would probably receive a fairly high grade, or would at least avoid receiving a low grade, because the systems tend to weight absolute proficiency measures quite heavily.
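To make the distinction concrete, here is a minimal sketch with made-up numbers – the school names, score scale, proficiency cutoff, and gains are all hypothetical – comparing an absolute measure (a proficiency rate) with a simple growth measure (average score gain) for two imaginary schools:

```python
# Hypothetical illustration: two schools, made-up test scores on an arbitrary scale.
# "Affluent Elementary" starts high and stays high; "Comparison Elementary" starts
# lower but its students gain much more over the year.

PROFICIENCY_CUTOFF = 300  # hypothetical cut score

schools = {
    "Affluent Elementary":   {"fall": [340, 355, 360, 345, 350], "spring": [345, 358, 362, 348, 352]},
    "Comparison Elementary": {"fall": [255, 270, 280, 265, 290], "spring": [275, 295, 310, 290, 305]},
}

for name, scores in schools.items():
    fall, spring = scores["fall"], scores["spring"]
    # Absolute measure: share of students at or above the cutoff in spring.
    proficiency_rate = sum(s >= PROFICIENCY_CUTOFF for s in spring) / len(spring)
    # Growth measure: average fall-to-spring gain per student.
    avg_gain = sum(sp - fa for fa, sp in zip(fall, spring)) / len(fall)
    print(f"{name}: proficiency {proficiency_rate:.0%}, average gain {avg_gain:.1f} points")
```

Run as-is, the first school comes out at 100% proficient with an average gain of about 3 points, while the second comes out at 40% proficient with an average gain of about 23 points – the same pattern the hypothetical above describes.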

Is this wrong? Is this a “low-performing school”?

By the growth-oriented, test-based metrics commonly employed in education policy today, including those we use for teachers, yes, it is. These students are “losing ground” relative to similar peers elsewhere (though keep in mind that the estimates from many growth models are relative, not absolute). Sure, virtually all of them will graduate and most will eventually attend a four-year college, but that may be largely thanks to their backgrounds – i.e., these outcomes will come about despite, rather than because of, their school’s effectiveness.

That said, yes – I suppose I would call this a “low-performing school.”

But I would offer one very important clarification here – this may be a “low-performing school” by a test-based, growth-oriented standard, but that does not necessarily mean it should be subject to costly interventions, whether high stakes (e.g., closure or turnaround) or lower stakes (e.g., additional funding). This, put simply, is because resources are limited, and they are, in my view, best allocated to schools serving students who are most in need of help. This is not the case in our hypothetical school, where students score highly but don’t make progress.

The school should, however, be formally “encouraged” to improve, perhaps via a low-cost plan under which its performance receives special monitoring and it gets guidance on possible improvement strategies.

(Side note: Depending on the availability of resources and the performance of other schools, there may be cases in which schools with strong absolute performance do so poorly on growth metrics that more drastic interventions could be appropriate. But I’m not sure where that line should be drawn.)

One final point that bears mentioning here: in most states’ rating systems, a school with high absolute performance and low growth (e.g., our hypothetical school) faces much less risk of receiving a low rating than a school in the opposite situation – low absolute performance and strong growth. Again, this reflects the design of these systems, in which absolute performance plays a more dominant role than growth in determining a school’s rating.

- Matt Di Carlo


But why is it a low-performing school rather than one that should continue to challenge its students to a higher level than required by the district or state? Seems this is the school's problem, not the students'...


Low-performing or high-performing based on what? An arbitrary test? An invalid growth measure? An inappropriate letter grade based on faulty criteria? Many schools have different missions based on their students and fulfill those missions admirably but don't fit into a one-size-fits-all performance measure. In addition, if a student is scoring at the 99th percentile, what “growth” can they show? Lots of questions, I know, but I ask these questions frequently and I can't get very good answers.


Thank you for this. My children went to schools with affluent students. The schools' test scores were always quite high. I thought their schools were terrible. When my second grader was shown to be reading at the 6th grade level, they patted him on the head and that was it. No interest in helping him find ways to channel his abilities or do something other than sit and listen to others who were struggling (or at least struggling relative to him). In fact, he was occasionally accused of cheating.


How could this happen? Quite easily, because of ceiling effects.

A ceiling effect occurs when a test does not have enough hard questions to differentiate among high-performing students, so those students top out the test. (Imagine 6th graders taking a 2nd grade math test. They'd all do quite well, and the test would not be very useful.) It's like a bunch of really tall people bumping their heads on a too-low ceiling: you can't tell how tall they really are.

So, if the test tops out, then it cannot show either student performance or student learning for high-performing students. And a school with many of those students would look a lot worse than it actually is.
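A quick way to see this is to simulate it. The sketch below uses made-up numbers – the score scale, the test maximum, and the gains are all hypothetical – to show how capping scores at a test's highest possible value hides the growth of high-performing students:

```python
# Hypothetical illustration of a ceiling effect: students' true scores grow,
# but the test cannot report anything above its maximum possible score.

TEST_MAX = 500  # hypothetical highest attainable score on the test

# Made-up (fall, spring) true scores for two groups of students.
high_performers = [(480, 520), (495, 540), (470, 510)]   # true growth of 40+ points each
typical_students = [(350, 390), (320, 365), (360, 400)]  # similar true growth

def observed_gain(pairs):
    """Average fall-to-spring gain after capping each score at the test maximum."""
    gains = [min(spring, TEST_MAX) - min(fall, TEST_MAX) for fall, spring in pairs]
    return sum(gains) / len(gains)

print("High performers, observed gain:", observed_gain(high_performers))
print("Typical students, observed gain:", observed_gain(typical_students))
```

Both groups grow by roughly the same amount in "truth," but the high performers' spring scores are truncated at the 500-point cap, so their measured growth comes out far smaller (about 18 points versus about 42 for the typical group in this toy example).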


You aren't describing a low-performing school, but a low-growth school.

And ceaolaf is correct about the rest. Odd you wouldn't address that.

Even odder that you wouldn't describe a more relevant scenario: a high school with both high-achieving and low-achieving students.