Comparing Teacher And Principal Evaluation Ratings

The District of Columbia Public Schools (DCPS) has recently released the first round of results from its new principal evaluation system. Like the system used for teachers, the principal ratings are based on a combination of test and non-test measures. And the two systems use the same final rating categories (highly effective, effective, minimally effective and ineffective).

It was perhaps inevitable that there would be comparisons of their results. In short, principal ratings were substantially lower, on average. Roughly half of principals received one of the two lowest ratings (minimally effective or ineffective), compared with around 10 percent of teachers.

Some wondered whether this discrepancy by itself means that DC teachers perform better than principals. Of course not. It is difficult to compare the performance of teachers versus that of principals, but it’s unsupportable to imply that we can get a sense of this by comparing the final rating distributions from two evaluation systems.

These are different, completely untested systems measuring different jobs. Moreover, the principal evaluations (appropriately) employ different measures than those used for teachers, and they are combined in a different way. For instance, principals can only be rated effective if they meet their proficiency targets in either math or reading and "make gains" in the other. (Side note: Cross-sectional proficiency gains are a terrible measure that, in my view, should not be used in these systems.) 

Yet DCPS human capital director Jason Kamras, commenting on the comparison between teacher and principal ratings, made the following statement:

I don’t think it’s surprising that we see higher ratings for teachers. We’ve invested a ton of resources, energy and money into developing folks and getting the right folks and holding on to the great folks we have.

Let’s put aside the possible implication here that DCPS hasn’t been as successful in recruiting and retaining good principals as it has with teachers, as well as the fact that the teacher evaluations are driven in part by classroom observations conducted by the very same principals who are receiving low ratings.

All that aside, Mr. Kamras is a smart guy. He played a role in developing both of these systems. Surely he understands that these ratings comparisons mean little or nothing.

My guess is that he was taking an opportunity to claim that the DC reforms have been successful, at least for teachers (maybe so, but this comparison can't tell us). He may also be trying to boost morale by complimenting the teachers in the District. I suppose that’s part of his job.

On the other hand, this illustrates a wider issue with teacher evaluation systems (as well as school rating systems): the need to avoid taking the ratings at face value (see this post for a previous example of this happening in DC).

These are brand new systems; even the teacher evaluations are only a few years old. It is borderline absurd to simply assume that they are “accurate” and draw grand, sweeping conclusions based on the distribution of their results, particularly when we're talking about comparing two different systems designed for two different groups of employees.

- Matt Di Carlo

This is kind of strange, but it's true that it isn't surprising. Teachers do often get a lot more attention. However, aren't most principals teachers first?