How Not To Improve New Teacher Evaluation Systems

One of the more interesting recurring education stories over the past couple of years has been the release of results from several states’ and districts’ new teacher evaluation systems, including those from New York, Indiana, Minneapolis, Michigan and Florida. In most of these instances, the primary focus has been on the distribution of teachers across ratings categories. Specifically, there seems to be a pattern emerging, in which the vast majority of teachers receive one of the higher ratings, whereas very few receive the lowest ratings.

This has prompted some advocates, and even some high-level officials, to deem the new systems failures, since their results suggest that the vast majority of teachers are “effective” or better. As I have written before, this issue cuts both ways. On the one hand, the results coming out of some states and districts seem problematic, and these systems may need adjustment. On the other hand, there is a danger here: States may respond by making rash, ill-advised changes in order to achieve “differentiation for the sake of differentiation,” and the changes may end up undermining the credibility and threatening the validity of the systems on which these states have spent so much time and money.

Granted, whether and how to alter new evaluations are difficult decisions, and there is no tried and true playbook. That said, New York Governor Andrew Cuomo’s proposals provide a stunning example of how not to approach these changes. To see why, let’s look at some sound general principles for improving teacher evaluation systems based on the first rounds of results, and how they compare with the New York approach.*

Preparing Effective Teachers For Every Community

Our guest authors today are Frank Hernandez, Corinne Mantle-Bromley and Benjamin Riley. Dr. Hernandez is the dean of the College of Education at the University of Texas of the Permian Basin, and previously served as a classroom teacher and school and district administrator for 12 years. Dr. Mantle-Bromley is dean of the University of Idaho’s College of Education and taught in rural Idaho prior to her work preparing teachers for diverse K-12 populations. Mr. Riley is the founder of Deans for Impact, a new organization composed of deans of colleges of education working together to transform educator preparation in the US. 

Students of color in the U.S., and those who live in rural communities, face unique challenges in receiving a high-quality education. All too often, new teachers have been inadequately prepared for these students’ specific needs. Perhaps just as often, their teachers do not look like them, and do not understand the communities in which these students live. Lacking adequate preparation and the cultural sensitivities that come only from time and experience within a community, many of our nation’s teachers are thrust into an almost unimaginably challenging situation. We simply do not have enough well-prepared teachers of color, or teachers from rural communities, who can successfully navigate the complexities of these education ecosystems.

Some have described the lack of teachers of color and teachers who will serve in rural communities as a crisis of social justice. We agree. And, as the leaders of two colleges of education that prepare teachers who serve in these communities, we think the solution requires elevating the expectations for every program that prepares teachers and educators in this country.

The Increasing Academic Ability Of New York Teachers

For many years now, a common talking point in education circles has been that U.S. public school teachers are disproportionately drawn from the “bottom third” of college graduates, and that we have to “attract better candidates” in order to improve the distribution of teacher quality. We discussed the basis for this “bottom third” claim in this post, and I will not repeat the points here, except to summarize that “bottom third” teachers (based on SAT/ACT scores) were indeed somewhat overrepresented nationally, although the magnitudes of such differences vary by cohort and other characteristics.

A very recent article in the journal Educational Researcher addresses this issue head-on (a full working version of the article is available here). It is written by Hamilton Lankford, Susanna Loeb, Andrew McEachin, Luke Miller and James Wyckoff. The authors analyze SAT scores of New York State teachers over a 25-year period (between 1985 and 2009). Their main finding is that these SAT scores, after a long-term decline, improved between 2000 and 2009 among all certified teachers, with the increases being especially large among incoming (new) teachers, and among teachers in high-poverty schools. For example, the proportion of incoming New York teachers whose SAT scores were in the top third increased by over 10 percentage points, while the proportion with scores in the bottom third decreased by a similar amount (these figures define “top third” and “bottom third” in terms of New York State public school students who took the SAT between 1979 and 2008).
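To make the “top third”/“bottom third” framing concrete, here is a minimal sketch of how teachers’ scores can be classified against tercile cutoffs drawn from a reference (student) score distribution, as the study does. All scores below are fabricated for illustration; the study’s actual data and cutoffs differ.

```python
# Hypothetical sketch: classify teacher SAT scores into terciles defined by a
# reference distribution of student SAT scores. All numbers are invented.

def tercile_cutoffs(reference_scores):
    """Return the cutoffs separating the bottom and top thirds of a reference distribution."""
    s = sorted(reference_scores)
    n = len(s)
    return s[n // 3], s[(2 * n) // 3]

def tercile_shares(teacher_scores, reference_scores):
    """Share of teachers falling in the bottom third and top third of the reference."""
    low, high = tercile_cutoffs(reference_scores)
    n = len(teacher_scores)
    bottom = sum(1 for x in teacher_scores if x < low) / n
    top = sum(1 for x in teacher_scores if x >= high) / n
    return bottom, top

# Illustrative (fabricated) data: a student SAT distribution and a teacher cohort
students = list(range(400, 1600, 10))
teachers = [900, 1010, 1100, 1250, 1300, 780, 1400, 1150]
bottom_share, top_share = tercile_shares(teachers, students)
```

The key design point is that the terciles are defined by the *student* distribution, so a rising teacher cohort can push well over a third of teachers into the “top third” of that fixed reference.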

This is an important study that bears heavily on the current debate over improving the teacher labor supply, and there are a few important points about it worth discussing briefly.

Update On Teacher Turnover In The U.S.

Every four years, the National Center for Education Statistics provides the public with the best available national estimates of teacher attrition and mobility. The estimates come from the Teacher Follow-Up Survey (TFS), which is a supplement to the Schools and Staffing Survey (SASS), a much larger national survey of teachers that is also conducted every four years. Put simply, the TFS is a sub-sample of SASS respondents, who are contacted the following year to find out if and where they are still teaching.

The conventional wisdom among many commentators, particularly those critical of test-based accountability and recent education reform, is that teacher attrition (teachers leaving the profession) and mobility (teachers switching schools) are on the rise. As discussed in a previous post, this was indeed the case, at least at the national level, between the 1991-92 and 2004-05 school years, but ceased to be true between 2004-05 and 2008-09, during which time attrition and mobility were basically flat. A few months ago, results from the latest administration of the TFS, which tracked teachers between 2011-12 and 2012-13, were released, and it’s worth taking a quick look at the findings.

As you can see in the graph below, the proportion of public school teachers who left the profession entirely (“leavers”) and the proportion who switched schools (“movers”) were again relatively flat between 2008-09 and 2012-13 (and the changes are not statistically significant).

Is Teaching More Like Baseball Or Basketball?

** Republished here in the Washington Post

Earlier this year, a paper by Roderick I. Swaab and colleagues received considerable media attention (e.g., see here, here, and here). The research questioned the widely shared belief that bringing together the most talented individuals always produces the best result. The authors looked at various types of sports (e.g., player characteristics and behavior, team performance, etc.), and were able to demonstrate that there is such a thing as “too much talent,” and that having too many superstars can hurt overall team performance, at least when the sport requires cooperation among team members.

My immediate questions after reading the paper were: Do these findings generalize outside the world of sports and, if so, what might be the implications for education? To my surprise, I did not find much commentary or analysis addressing them. I am sure not everybody saw the paper, but I also wonder if this absence might have something to do with how teaching is generally viewed: More like baseball (i.e., a more individualistic team sport) than, say, like basketball. But in our social side of education reform series, we have been discussing a wealth of compelling research suggesting that teaching is not individualistic at all, and that schools thrive on trusting relationships and cooperation, rather than competition and individual prowess.

So, if teaching is indeed more like basketball than like baseball, what are the implications of this study for strategies and policies aimed at identifying, developing and supporting teaching quality?

Multiple Measures And Singular Conclusions In A Twin City

A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the district’s public school teachers in 2013-14. This decision generated a fair amount of controversy, but it’s worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.

The data once again provide an opportunity to take a look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which included the following headline: “Minneapolis’ worst teachers are in the poorest schools, data show.” These types of conclusions, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers.” Actually, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers,” could just as easily be viewed as one of the many ways that one might assess the properties and even the validity of those results.

So, while there are no clear-cut "right" or "wrong" answers here, let’s take a quick look at the data and what they might tell us.

The Great Teacher Evaluation Evaluation: New York Edition

A couple of weeks ago, the New York State Education Department (NYSED) released data from the first year of the state’s new teacher and principal evaluation system (called the “Annual Professional Performance Review,” or APPR). In what has become a familiar pattern, this prompted a wave of criticism from advocates, much of it focused on the proportion of teachers in the state who received the lowest ratings.

To be clear, evaluation systems that produce non-credible results should be examined and improved, and that includes those that put implausible proportions of teachers in the highest and lowest categories. Much of the commentary surrounding this and other issues has been thoughtful and measured. As usual, though, there have been some oversimplified reactions, as exemplified by this piece on the APPR results from Students First NY (SFNY).

SFNY notes what it considers to be the low proportion of teachers rated “ineffective,” and points out that there was more differentiation across rating categories for the state growth measure (worth 20 percent of teachers’ final scores), compared with the local “student learning” measure (20 percent) and the classroom observation components (60 percent). Based on this, they conclude that New York’s “state test is the only reliable measure of teacher performance” (they are actually talking about validity, not reliability, but we’ll let that go). Again, this argument is not representative of the commentary surrounding the APPR results, but let’s use it as a springboard for making a few points, most of which are not particularly original. (UPDATE: After publication of this post, SFNY changed the headline of their piece from “the only reliable measure of teacher performance” to “the most reliable measure of teacher performance.”)
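For readers unfamiliar with composite scoring, here is a minimal sketch of how an APPR-style rating can be assembled: component points worth 20 + 20 + 60 = 100 are summed and mapped to a rating band. The band cutoffs and point values below are invented for illustration, not the state’s actual cut scores.

```python
# Hypothetical sketch of an APPR-style composite score. Component maxima
# (20, 20, 60) mirror the weights described in the text; the rating-band
# cutoffs below are invented for illustration only.

def composite_rating(state_growth, local_measures, observations):
    """Sum component points (max 20, 20, 60) and map the total to a rating label."""
    total = state_growth + local_measures + observations  # out of 100
    if total >= 91:
        return total, "Highly Effective"
    if total >= 75:
        return total, "Effective"
    if total >= 65:
        return total, "Developing"
    return total, "Ineffective"

total, rating = composite_rating(state_growth=14, local_measures=15, observations=52)
```

One consequence of this structure is visible immediately: a component with little differentiation (e.g., nearly everyone near the observation maximum) compresses final scores upward, regardless of how the other components behave.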

Research And Policy On Paying Teachers For Advanced Degrees

There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).

The third typical factor that determines teacher salary is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts also award raises for accumulating a certain number of credits toward a master’s or Ph.D., and for completing a Ph.D.). The raise for receiving a master’s degree varies, but just to give an idea, it is, on average, about 10 percent over the base salary of bachelor’s-only teachers.

This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.

A Quick Look At The ASA Statement On Value-Added

Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it's useful to have ASA add their perspective to the debate on this issue, but also because their statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.

Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*

Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, which are not always so widely acknowledged.

Differences In DC Teacher Evaluation Ratings By School Poverty

In a previous post, I discussed simple data from the District of Columbia Public Schools (DCPS) on teacher turnover in high- versus lower-poverty schools. In that same report, which was issued by the D.C. Auditor and included, among other things, descriptive analyses by the excellent researchers from Mathematica, there is another very interesting table showing the evaluation ratings of DC teachers in 2010-11 by school poverty (and, indeed, DC officials deserve credit for making these kinds of data available to the public, as this is not the case in many other states).

DCPS’ well-known evaluation system (called IMPACT) varies between teachers in tested versus non-tested grades, but the final ratings are a weighted average of several components, including: the teaching and learning framework (classroom observations); commitment to the school community (attendance at meetings, mentoring, PD, etc.); schoolwide value-added; teacher-assessed student achievement data (local assessments); core professionalism (absences, etc.); and individual value-added (tested teachers only).

The table I want to discuss is on page 43 of the Auditor’s report, and it shows average IMPACT scores for each component and overall for teachers in high-poverty schools (80-100 percent free/reduced-price lunch), medium-poverty schools (60-80 percent) and low-poverty schools (less than 60 percent). It is pasted below.