Comparing Teacher And Principal Evaluation Ratings

The District of Columbia Public Schools (DCPS) has recently released the first round of results from its new principal evaluation system. Like the system used for teachers, the principal ratings are based on a combination of test and non-test measures. And the two systems use the same final rating categories (highly effective, effective, minimally effective and ineffective).

It was perhaps inevitable that there would be comparisons of their results. In short, principal ratings were substantially lower, on average. Roughly half of them received one of the two lowest ratings (minimally effective or ineffective), compared with around 10 percent of teachers.

Some wondered whether this discrepancy by itself means that DC teachers perform better than principals. Of course not. It is difficult to compare the performance of teachers versus that of principals, but it’s unsupportable to imply that we can get a sense of this by comparing the final rating distributions from two evaluation systems.

Thoughts On Using Value Added, And Picking A Model, To Assess Teacher Performance

Our guest author today is Dan Goldhaber, Director of the Center for Education Data & Research and a Research Professor in Interdisciplinary Arts and Sciences at the University of Washington Bothell.

Let me begin with a disclosure: I am an advocate of experimenting with using value added, where possible, as part of a more comprehensive system of teacher evaluation. The reasons are pretty simple (though articulated in more detail in a brief, which you can read here). The most important reason is that value-added information about teachers appears to be a better predictor of future success in the classroom than other measures we currently use. This is perhaps not surprising when it comes to test scores, certainly an important measure of what students are getting out of schools, but research also shows that value added predicts very long-run outcomes, such as college-going and labor market earnings. Shouldn’t we be using valuable information about likely future performance when making high-stakes personnel decisions?

It almost goes without saying, but it’s still worth emphasizing, that it is impossible to avoid making high-stakes decisions. Policies that explicitly link evaluations to outcomes such as compensation and tenure are new, but even in the absence of such policies that are high-stakes for teachers, the stakes are high for students: some of them are stuck with ineffective teachers even as evaluation systems suggest, as is the case today, that nearly all teachers are effective.

Are There Low Performing Schools With High Performing Students?

I write often (probably too often) about the difference between measures of school performance and student performance, usually in the context of school rating systems. The basic idea is that schools cannot control the students they serve, and so absolute performance measures, such as proficiency rates, tell you more about the students a school or district serves than about how effective it is in improving outcomes (which is better captured by growth-oriented indicators).

Recently, I was asked a simple question: Can a school with very high absolute performance levels ever actually be considered a “bad school?”

This is a good question.

Underlying Issues In The DC Test Score Controversy

In the Washington Post, Emma Brown reports on a behind-the-scenes decision about how to score last year’s new, more difficult tests in the District of Columbia Public Schools (DCPS) and the District's charter schools.

To make a long story short, the choice faced by the Office of the State Superintendent of Education, or OSSE, which oversees testing in the District, was about how to convert test scores into proficiency rates. The first option, put simply, was to convert them such that the proficiency bar was more “aligned” with the Common Core, thus resulting in lower aggregate proficiency rates in math, compared with last year’s (in other states, such as Kentucky and New York, rates declined markedly). The second option was to score the tests while "holding constant" the difficulty of the questions, in order to facilitate comparisons of aggregate rates with those from previous years.

OSSE chose the latter option (according to some, in a manner that was insufficiently transparent). The end result was a modest increase in proficiency rates (which DC officials absurdly called “historic”).

Selection Versus Program Effects In Teacher Prep Value-Added

There is currently a push to evaluate teacher preparation programs based in part on the value-added of their graduates. Predictably, this is a highly controversial issue, and the research supporting it is, to be charitable, still underdeveloped. At present, the evidence suggests that the differences in effectiveness between teachers trained by different prep programs may not be particularly large (see here, here, and here), though there may be exceptions (see this paper).

In the meantime, there’s an interesting little conflict underlying the debate about measuring preparation programs’ effectiveness, one that’s worth pointing out. For the purposes of this discussion, let’s put aside the very important issue of whether the models are able to account fully for where teaching candidates end up working (i.e., bias in the estimates based on school assignments/preferences), as well as (valid) concerns about judging teachers and preparation programs based solely on testing outcomes. All that aside, any assessment of preparation programs using the test-based effectiveness of their graduates is picking up on two separate factors: How well they prepare their candidates; and who applies to their programs in the first place.

In other words, programs that attract and enroll highly talented candidates might look good even if they don’t do a particularly good job preparing teachers for their eventual assignments. But does that really matter?

The Promise Of The Common Core

In recent months, the Common Core has come under increasing criticism from a number of different quarters.

An op-ed in the New York Times’ Week in Review is emblematic of the best of this disapproving sentiment. Yet even it mixes together fundamental misconceptions about the entire Common Core project with legitimate issues of inadequate preparation for teachers and students and poor implementation by state education departments and districts. The Common Core is described as a “radical curriculum” that was introduced with “hardly any public discussion.” We are told that it is a “one size fits all” approach, built upon a standardized script that teachers must use for instruction. Finally, it is suggested that the Common Core is a “game that has been so prearranged that many, if not most, of the players will fail.”

This is the Common Core seen through the prism of a fun house mirror. In truth, the Common Core is neither “radical” nor a “curriculum,” but a set of grade-level performance standards for student achievement in the core academic disciplines of English Language Arts and Mathematics.* Indeed, one of the more telling criticisms of the implementation of the Common Core is that in all too many states, districts and schools, these standards have not been developed into curricula that teachers could readily use in their classrooms.

A Path To Diversifying The Teaching Workforce

Our guest author today is Jose Vilson, a math educator, writer, and activist in a New York City public school. You can find more of his writing at http://thejosevilson.com and his book, This Is Not A Test, will be released in the spring of 2014.

Travis Bristol’s article on bringing more black men to the classroom has sparked a great deal of conversation about the roles of educators in our school system. If we look at the national educational landscape, educators are still treated with admiration, but our government has yet to see fit to create conditions in schools that promote truly effective teaching and learning. In fact, successful teaching in otherwise struggling environments happens in spite of, and not because of, the policies of our current school systems.

Even as superintendents see fit to close schools that house large populations of teachers and students of color, we must observe the roles that educators of color play in their schools, whether they consider themselves “loners” or “groupers,” as Bristol describes in the aforementioned article. When the Brown v. Board of Education decision came down in 1954, districts across the nation were determined to keep as many white educators employed as possible. While integration plays a role in assuring equitable conditions for all children and exposes them to other peoples, segregation’s silver lining was that Black educators taught Black children Black history. Racial identification plays a role in self-confidence, and having immediate role models for our children of color matters for achievement to this day.

On Education Polls And Confirmation Bias

Our guest author today is Morgan Polikoff, Assistant Professor in the Rossier School of Education at the University of Southern California. 

A few weeks back, education policy wonks were hit with a set of opinion polls about education policy. The two most divergent of these polls were the Phi Delta Kappan/Gallup poll and the Associated Press/NORC poll.

This week a California poll conducted by Policy Analysis for California Education (PACE) and the USC Rossier School of Education (where I am an assistant professor) was released. The PACE/USC Rossier poll addresses many of the same issues as those from the PDK and AP, and I believe the three polls together can provide some valuable lessons about the education reform debate, the interpretation of poll results, and the state of popular opinion about key policy issues.

In general, the results as a whole indicate that parents and the public hold rather nuanced views on testing and evaluation.

Calling Black Men To The Blackboard

Our guest author today is Travis Bristol, former high school English teacher in New York City public schools, who is currently a clinical teacher educator with the Boston Teacher Residency program, as well as a fifth-year Ph.D. candidate at Teachers College, Columbia University. His research interests focus on the intersection of gender and race in organizations. Travis is a 2013 National Academy of Education/Spencer Dissertation Fellow.

W.E.B. Du Bois, the preeminent American scholar, suggested that the problem of the twentieth century is the problem of the color-line. Without question, the problem of the 21st century continues to be the “color-line,” which is to say race. And so it is understandable why Cabinet members in the Obama administration continue to address the race question head-on, through policies that attempt to decrease systemic disparities between Latino and Black Americans when compared to White Americans.

Most recently, in August 2013, U.S. Attorney General Eric Holder announced the Justice Department’s decision to reduce federal mandatory drug sentencing regulations. Holder called “shameful” the fact that “black male offenders have received sentences nearly 20 percent longer than those imposed on white males convicted of similar crimes.” Attempts, such as Holder's, to reform the criminal justice system appear to be an acknowledgment that institutionalized racism influences how Blacks and Whites are sentenced.

The Great Proficiency Debate

A couple of weeks ago, Mike Petrilli of the Fordham Institute made the case that absolute proficiency rates should not be used as measures of school effectiveness, as they are heavily dependent on where students “start out” upon entry to the school. A few days later, Fordham president Checker Finn offered a defense of proficiency rates, noting that how much students know is substantively important, and associated with meaningful outcomes later in life.

They’re both correct. This is not a debate about whether proficiency rates are useful at all (and, for what it's worth, I don't read Petrilli as saying they aren't). It’s about how they should and should not be used.

Let’s keep this simple. Here is a quick, highly simplified list of how I would recommend interpreting and using absolute proficiency rates, and how I would avoid using them.