A Few Additional Points About The IMPACT Study

The recently released study of IMPACT, the teacher evaluation system in the District of Columbia Public Schools (DCPS), has garnered a great deal of attention over the past couple of months (see our post here).

Much of the commentary from the system’s opponents was predictably (and unfairly) dismissive, but I’d like to quickly discuss the reaction from supporters. Some took the opportunity to make grand proclamations about how “IMPACT is working,” and there was a lot of back and forth about the need to ensure that various states’ evaluations are as “rigorous” as IMPACT (as well as skepticism as to whether this is the case).

The claim that this study shows that “IMPACT is working” is somewhat misleading, and the idea that states should now rush to replicate IMPACT is misguided. Both reactions also miss the important points about the study and what we can learn from its results.

ESEA Waivers And The Perpetuation Of Poor Educational Measurement

Some of the best research out there is a product not of sophisticated statistical methods or complex research designs, but rather of painstaking manual data collection. A good example is a recent paper by Morgan Polikoff, Andrew McEachin, Stephani Wrabel and Matthew Duque, which was published in the latest issue of the journal Educational Researcher.

Polikoff and his colleagues performed a task that makes most of the rest of us cringe: They read and coded every one of the over 40 state applications for ESEA flexibility, or “waivers.” The end product is a simple but highly useful presentation of the measures states are using to identify “priority” (low-performing) and “focus” (schools “contributing to achievement gaps”) schools. The results are disturbing to anyone who believes that strong measurement should guide educational decisions.

There's plenty of great data and discussion in the paper, but consider just one central finding: How states are identifying priority (i.e., lowest-performing) schools at the elementary level (the measures are of course a bit different for secondary schools).

Can Knowledge Level The Learning Field For Children?

** Reprinted here in the Core Knowledge Blog

How much do preschoolers from disadvantaged and more affluent backgrounds know about the world, and why does that matter? One recent study by Tanya Kaefer (Lakehead University), Susan B. Neuman (New York University), and Ashley M. Pinkham (University of Michigan) provides some answers.

The researchers randomly selected children from preschool classrooms in two sites, one serving kids from disadvantaged backgrounds, the other serving middle-class kids. They then set about to answer three questions:

A Quick Look At The DC Charter School Rating System

Having taken a look at several states’ school rating systems (see our posts on the systems in IN, OH, FL and CO), I thought it might be interesting to examine a system used by a group of charter schools – starting with the system used by charters in the District of Columbia. This is the third year the DC charter school board has released the ratings.

For elementary and middle schools (upon which I will focus in this post*), the DC Performance Management Framework (PMF) is a weighted index composed of: 40 percent absolute performance; 40 percent growth; and 20 percent what they call “leading indicators” (a more detailed description of this formula can be found in the second footnote).** The index scores are then sorted into one of three tiers, with Tier 1 being the highest, and Tier 3 the lowest.
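The weighted index described above is straightforward arithmetic, and a short sketch may make it concrete. Only the 40/40/20 weights come from the published formula; the component scores and the tier cutoffs below are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch of the PMF weighting scheme described above.
# The 40/40/20 weights reflect the published formula; the component
# scores and tier cutoffs here are hypothetical.

def pmf_score(absolute: float, growth: float, leading: float) -> float:
    """Combine the three components (each on a 0-100 scale) using the PMF weights."""
    return 0.40 * absolute + 0.40 * growth + 0.20 * leading

def tier(score: float, cut_tier1: float = 65.0, cut_tier2: float = 35.0) -> str:
    """Sort an index score into one of three tiers (cutoffs assumed)."""
    if score >= cut_tier1:
        return "Tier 1"
    elif score >= cut_tier2:
        return "Tier 2"
    return "Tier 3"

# A school scoring 70 on absolute performance, 80 on growth, and 50 on
# leading indicators: 0.4*70 + 0.4*80 + 0.2*50 = 70.0
score = pmf_score(absolute=70, growth=80, leading=50)
print(score, tier(score))  # → 70.0 Tier 1
```

Note how the weighting works in practice: a school with middling absolute scores can still land in a high tier if its growth is strong, which is exactly the property discussed in the next paragraph.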

So, these particular ratings weight absolute performance – i.e., how highly students score on tests – a bit less heavily than do most states that have devised their own systems, and they grant slightly more importance to growth and alternative measures. We might therefore expect to find a somewhat weaker relationship between PMF scores and student characteristics such as free/reduced price lunch eligibility (FRL), as these charters are judged less predominantly on the students they serve. Let’s take a quick look.

The Wrong Way To Publish Teacher Prep Value-Added Scores

As discussed in a prior post, the research on applying value-added to teacher prep programs is pretty much still in its infancy. Even just a couple of years of additional data would go a long way toward at least partially addressing the many open questions in this area (including, by the way, the evidence suggesting that differences between programs may not be meaningfully large).

Nevertheless, a few states have decided to plow ahead and begin publishing value-added estimates for their teacher preparation programs. Tennessee, which seems to enjoy being first – their Race to the Top program is, a little ridiculously, called “First to the Top” – was ahead of the pack. They have once again published ratings for the few dozen teacher preparation programs that operate within the state. As mentioned in my post, if states are going to do this (and, as I said, my personal opinion is that it would be best to wait), it is absolutely essential that the data be presented along with thorough explanations of how to interpret and use them.

Tennessee fails to meet this standard.

Words Reflect Knowledge

I was fascinated when I started to read about the work of Betty Hart and Todd Risley and the early language differences between children growing up in different socioeconomic circumstances. But it took me a while to realize that we care about words primarily because of what words indicate about knowledge. This is important because it means that we must focus on teaching children about a wide range of interesting “stuff” – not just vocabulary for vocabulary’s sake. So, if words are the tip of the iceberg, what lies underneath? This metaphor inspired me to create the short animation below. Check it out!

The Word Gap

** Reprinted here in the Washington Post

It is now well established that children’s oral language development is crucial to their academic success: researchers have documented profound differences in word learning and the acquisition of content knowledge between children living in poverty and those from more economically advantaged homes. By the time they enter school, children from advantaged backgrounds may know as many as 15,000 more words than their less affluent peers. This early language gap puts children at risk for other all too familiar gaps, such as the gaps in high school graduation, arrest and incarceration, post-secondary education, and lifetime earnings. So, what can we do to prevent this “early catastrophe”?

If a child suffers from malnutrition, simply giving him/her more food might not be sufficient to alleviate the problem. A better approach would be to figure out which specific foods and supplements best provide the vitamins and nutrients that are needed, and then deliver these to the child. Recent press coverage on the “word gap,” spurred by initiatives such as Too Small to Fail and Thirty Million Words, suffers from a similar failing.

Don’t get me wrong, the initiatives themselves are hugely important and have done a truly commendable job of focusing public attention on a chronic and chronically overlooked problem. It’s just that the messages that have, thus far, made their way forward are predominantly about quantity – i.e., exposing children to more words and more talk – paying comparatively less attention to qualitative aspects, such as the nature and especially the content of adult-child interactions.

Getting Teacher Evaluation Right

Linda Darling-Hammond’s new book, Getting Teacher Evaluation Right, is a detailed, practical guide about how to improve the teaching profession. It leverages the best research and best practices, offering actionable, illustrated steps to getting teacher evaluation right, with rich examples from the U.S. and abroad.

Here I offer a summary of the book’s main arguments and conclude with a couple of broad questions prompted by the book. But, before I delve into the details, here’s my quick take on Darling-Hammond’s overall stance.

We are at a crossroads in education; two paths lie before us. The first seems shorter, easier and more straightforward. The second seems long, winding and difficult. The big problem is that the first path does not really lead to where we need to go; in fact, it is taking us in the opposite direction. So, despite appearances, more steady progress will be made if we take the more difficult route. This book is a guide on how to get teacher evaluation right, not how to do it quickly or with minimal effort. So, in a way, the big message or takeaway is: There are no shortcuts.

Innovating To Strengthen Youth Employment

Our guest author today is Stan Litow, Vice President of Corporate Citizenship and Corporate Affairs at IBM, President of the IBM Foundation, and a member of the Shanker Institute’s board of directors. This essay was originally published in Innovations, an MIT Press journal.

The financial crisis of 2008 exposed serious weaknesses in the world’s economic infrastructure. As a former aide to a mayor of New York and as deputy chancellor of the New York City Public Schools (the largest public school system in the United States), my chief concern—and a significant concern to IBM and other companies interested in global economic stability—has been the impact of global economic forces on youth employment.

Across the United States and around the world, youth unemployment is a staggering problem, and one that is difficult to gauge with precision. One factor that makes it difficult to judge accurately is that many members of the youth population have yet to enter the workforce, making it hard to count those who are unable to get jobs. What we do know is that the scope of the problem is overwhelming. Youth unemployment in countries such as Greece and Spain is estimated at over 50 percent, while in the United States the rate may be 20 percent, 30 percent, or higher in some cities and states. Why is this problem so daunting? Why does it persist? And, most important, how can communities, educators, and employers work together to address it?

Incentives And Behavior In DC's Teacher Evaluation System

A new working paper, published by the National Bureau of Economic Research, is the first high quality assessment of one of the new teacher evaluation systems sweeping across the nation. The study, by Thomas Dee and James Wyckoff, both highly respected economists, focuses on the first three years of IMPACT, the evaluation system put into place in the District of Columbia Public Schools in 2009.

Under IMPACT, each teacher receives a point total based on a combination of test-based and non-test-based measures (the formula varies between teachers who are and are not in tested grades/subjects). These point totals are then sorted into one of four categories – highly effective, effective, minimally effective and ineffective. Teachers who receive a highly effective (HE) rating are eligible for salary increases, whereas teachers rated ineffective are dismissed immediately and those receiving minimally effective (ME) for two consecutive years can also be terminated. The design of this study exploits that incentive structure by, put very simply, comparing the teachers who scored directly above the ME and HE thresholds with those who scored directly below them, to see whether the two groups differed in terms of retention and performance. The basic idea is that these teachers are all very similar in terms of their measured performance, so any differences in outcomes can be (cautiously) attributed to the system’s incentives.

The short answer is that there were meaningful differences.