Data-Driven Instruction Can't Work If Instructors Don't Use The Data

In education today, data, particularly testing data, are everywhere. One of the many potentially valuable uses of these data is helping teachers improve instruction – for example, by identifying students’ strengths and weaknesses. Of course, this positive impact depends on the quality of the data and how they are presented to educators, among other factors. But there’s an even more basic requirement – teachers actually have to use them.

In an article published in the latest issue of the journal Education Finance and Policy, economist John Tyler takes a thorough look at teachers’ use of an online data system in a mid-sized urban district between 2008 and 2010. A few years prior, this district invested heavily in benchmark formative assessments (four per year) for students in grades 3-8, and an online “dashboard” system to go along with them. The assessments’ results are fed into the system in a timely manner. The basic idea is to give these teachers a continual stream of information, past and present, about their students’ performance.

Tyler uses web usage logs from the district’s data system, along with focus groups with teachers, to examine the extent and nature of teachers’ data use (as well as a few other things, such as the relationship between usage and value-added). What he finds is not particularly heartening. In short, teachers didn’t really use the data.

It's Test Score Season, But Some States Don't Release Test Scores

** Reprinted here in the Washington Post

We’ve entered the time of year during which states and districts release their testing results. It’s fair to say that the two districts that get the most attention for their results are New York City and the District of Columbia Public Schools (DCPS), due in no small part to the fact that both enacted significant, high-profile policy changes over the past 5-10 years.

The manner in which both districts present annual test results is often misleading. Many of the issues, such as misinterpreting changes in proficiency rates as “test score growth” and chalking up all “gains” to recent policy changes, are quite common across the nation. These two districts are just among the more aggressive in doing so. That said, there’s one big difference between the test results they put out every year, and although I’ve noted it a few times before, I’d like to point it out once more: Unlike New York City/State, DCPS does not actually release test scores.

That’s right – despite the massive national attention to their “test scores," DCPS – or, specifically, the Office of the State Superintendent of Education (OSSE) – hasn’t released a single test score in many years. Not one.

The Ever-Changing NAEP Sample

The results of the latest National Assessment of Educational Progress long-term trend assessments (NAEP-LTT) were released last week. The data compare the reading and math scores of 9-, 13-, and 17-year-olds at various points since the early 1970s. This is an important way to monitor how these age cohorts’ performance changes over the long term.

Overall, there is ongoing improvement in scores among 9- and 13-year-olds, in reading and especially math, though the trend is inconsistent and the increases have slowed somewhat in recent years. The scores for 17-year-olds, in contrast, are relatively flat.

These data, of course, are cross-sectional – i.e., they don’t follow students over time, but rather compare children in the three age groups with their predecessors from previous years. This means that changes in average scores might be driven by differences, observable or unobservable, between cohorts. One of the simple graphs in this report, which doesn't present a single test score, illustrates that rather vividly.
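In the same spirit, here is a tiny hypothetical illustration (all numbers below are invented, not NAEP figures) of how a cross-sectional average can shift purely because the composition of the sample changes, even when no subgroup's performance moves at all:

```python
# Hypothetical illustration (numbers invented): the overall average can change
# across cohorts even if every subgroup's average score stays exactly the same,
# simply because the mix of students in the sample shifts over time.
subgroup_scores = {"group_a": 260, "group_b": 230}  # constant across cohorts

def overall_average(shares):
    """Weighted average of subgroup scores, given each group's share of the sample."""
    return sum(share * subgroup_scores[group] for group, share in shares.items())

cohort_early = {"group_a": 0.80, "group_b": 0.20}   # earlier cohort's composition
cohort_recent = {"group_a": 0.55, "group_b": 0.45}  # more recent cohort's composition

print(overall_average(cohort_early))   # 254.0
print(overall_average(cohort_recent))  # 246.5
# Neither group's performance changed, yet the cross-sectional average fell by
# 7.5 points -- a compositional effect, not a change in performance.
```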

The FCAT Writing, On The Wall

The annual release of state testing data makes the news in every state, but Florida is one of those places where it is to some degree a national story.*

Well, it’s getting to be that time of year again. Last week, the state released its writing exam (FCAT 2.0 Writing) results for 2013 (as well as the math and reading results for third graders only).  The Florida Department of Education (FLDOE) press release noted: “With significant gains in writing scores, Florida’s teachers and students continue to show that higher expectations and support at home and in the classroom enable every child to succeed.” This interpretation of the data was generally repeated without scrutiny in the press coverage of the results.

Putting aside the fact that the press release incorrectly calls the year-to-year changes “gains” (they are actually comparisons of two different groups of students; see here), the FLDOE's presentation of the FCAT Writing results, though common, is, at best, incomplete and, at worst, misleading. Moreover, the important issues in this case are applicable in all states, and unusually easy to illustrate using the simple data released to the public.

A Few Quick Fixes For School Accountability Systems

Our guest authors today are Morgan Polikoff and Andrew McEachin. Morgan is Assistant Professor in the Rossier School of Education at the University of Southern California. Andrew is an Institute of Education Sciences postdoctoral fellow at the University of Virginia.

In a previous post, we described some of the problems with the Senate's Harkin-Enzi plan for reauthorizing the No Child Left Behind Act, based on our own analyses, which yielded three main findings. First, selecting the bottom 5% of schools for intervention based on changes in California’s composite achievement index resulted in remarkably unstable rankings. Second, identifying the bottom 5% based on schools' lowest performing subgroup overwhelmingly targeted those serving larger numbers of special education students. Third and finally, we found evidence that middle and high schools were more likely to be identified than elementary schools, and smaller schools more likely than larger schools.
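The first of those findings, in particular, is easy to reproduce with a quick simulation. The sketch below uses entirely invented parameters (it is not the California data); it simply shows that when year-to-year changes in a school index are dominated by noise, the set of schools falling in the bottom 5 percent turns over heavily from one year to the next:

```python
# Quick simulation (all parameters invented) of why identification based on
# year-to-year *changes* in a school index can be unstable: when annual changes
# are mostly noise, the "bottom 5%" turns over heavily from year to year.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 1000
true_trend = rng.normal(0, 1, n_schools)  # each school's underlying trajectory

def observed_changes(noise_sd=3.0):
    # Observed change = modest true trend + much larger measurement/sampling noise.
    return true_trend + rng.normal(0, noise_sd, n_schools)

def bottom_5_percent(changes):
    cutoff = int(0.05 * n_schools)
    return set(np.argsort(changes)[:cutoff])  # indices of the lowest 5% of changes

year_1 = bottom_5_percent(observed_changes())
year_2 = bottom_5_percent(observed_changes())
overlap = len(year_1 & year_2) / len(year_1)
print(f"Share of year-1 'bottom 5%' schools also flagged in year 2: {overlap:.0%}")
# With noise this large, typically well under half of the flagged schools repeat.
```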

None of these findings was especially surprising (see here and here, for instance), and could easily have been anticipated. Thus, we argued that policymakers need to pay more attention to the vast (and rapidly expanding) literature on accountability system design.

The Cartography Of High Expectations

In October of last year, the education advocacy group ConnCAN published a report called “The Roadmap to Closing the Gap” in Connecticut. This report says that the state must close its large achievement gaps by 2020 – that is, within eight years – and it uses data to argue that this goal is “both possible and achievable."

There is value in compiling data and disaggregating them by district and school. And ConnCAN, to its credit, doesn't use this analysis as a blatant vehicle to showcase its entire policy agenda, as advocacy organizations often do. But I am compelled to comment on this report, mostly as a springboard to a larger point about expectations.

However, first things first – a couple of very quick points about the analysis. There are 60-70 pages of district-by-district data in this report, all of them portrayed as a “roadmap” to closing Connecticut’s achievement gap. But these data don't measure the gaps, and they won't close them.

When Growth Isn't Really Growth

Let’s try a super-simple thought experiment with data. Suppose we have an inner-city middle school serving grades 6-8. Students in all three grades take the state exam annually (in this case, we’ll say that it’s at the very beginning of the year). Now, for the sake of this illustration, let’s avail ourselves of the magic of hypotheticals and assume away many of the sources of error that make year-to-year changes in public testing data unreliable.

First, we’ll say that this school reports test scores instead of proficiency rates, and that the scores are comparable between grades. Second, every year, our school welcomes a new cohort of sixth graders that is the exact same size and has the exact same average score as preceding cohorts – 30 out of 100, well below the state average of 65. Third and finally, there is no mobility at this school. Every student who enters sixth grade stays there for three years, and goes to high school upon completion of eighth grade. No new students are admitted mid-year.

Okay, here’s where it gets interesting: Suppose this school is phenomenally effective in boosting its students’ scores. In fact, each year, every single student gains 20 points. It is the highest growth rate in the state. Believe it or not, using the metrics we commonly use to judge schoolwide “growth” or "gains," this school would still look completely ineffective. Take a look at the figure below.
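For those who prefer arithmetic to graphics, here is a minimal sketch of the hypothetical school described above, using only the numbers from the thought experiment (an entry score of 30 and a 20-point annual gain for every student):

```python
# Minimal sketch of the hypothetical school above. The only inputs are the
# numbers from the thought experiment: incoming sixth graders average 30/100,
# and every student gains 20 points per year.
ENTRY_SCORE = 30
ANNUAL_GAIN = 20

for year in (2011, 2012, 2013):
    # In steady state, grade 6 just arrived, grade 7 has gained once,
    # and grade 8 has gained twice.
    grade_averages = [ENTRY_SCORE + ANNUAL_GAIN * n for n in range(3)]
    schoolwide_average = sum(grade_averages) / len(grade_averages)
    print(year, grade_averages, "schoolwide average:", schoolwide_average)

# Every year prints [30, 50, 70] and a schoolwide average of 50, so the
# year-to-year "change" in the school's average is zero, even though every
# individual student gained 20 points per year.
```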

A Simple Choice Of Words Can Help Avoid Confusion About New Test Results

In 1998, the National Institutes of Health (NIH) lowered the threshold at which people are classified as “overweight." Literally overnight, about 25 million Americans who had previously been considered a healthy weight were now overweight. If, the next day, you saw a newspaper headline that said “number of overweight Americans increases," you would probably find that a little misleading. America’s “overweight” population didn’t really increase; the definition changed.

Fast forward to November 2012, when Kentucky became the first state to release results from new assessments aligned with the Common Core Standards (CCS). This led to headlines such as "Scores Drop on Kentucky’s Common Core-Aligned Tests" and "Challenges Seen as Kentucky’s Test Scores Drop As Expected." Yet these descriptions unintentionally misrepresent what happened. It's not quite accurate – or is at least highly imprecise – to say that test scores “dropped," just as it would have been wrong to say that the number of overweight Americans increased overnight in 1998 (and, in fact, the figures being compared aren’t even scores; they’re proficiency rates). Rather, the state adopted different tests, with different content, a different design, and different standards by which students are deemed “proficient."
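To see the logic in miniature, consider a small hypothetical example (the scores and cut scores below are invented): the same set of student scores yields very different "proficiency rates" depending on where the cut score is set, which is why a rate can "drop" under a new test and new standards even if actual performance is unchanged.

```python
# Tiny hypothetical example (scores and cut scores invented): the same student
# scores produce very different "proficiency rates" under different cut scores,
# so a rate can "drop" when standards change even if performance does not.
scores = [48, 55, 61, 64, 67, 70, 73, 78, 82, 90]  # same students, same scores

def proficiency_rate(scores, cut_score):
    """Share of students at or above the cut score."""
    return sum(score >= cut_score for score in scores) / len(scores)

print(proficiency_rate(scores, cut_score=60))  # 0.8 under the old standard
print(proficiency_rate(scores, cut_score=72))  # 0.4 under the new standard
# Nothing about the students changed; only the definition of "proficient" did.
```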

Over the next 2-3 years, a large group of states will also release results from their new CCS-aligned tests. It is important for parents, teachers, administrators, and other stakeholders to understand what the results mean. Most of them will rely on newspapers and blogs, and so one exceedingly simple step that might help out is some polite, constructive language-policing.

A Case Against Assigning Single Ratings To Schools

The new breed of school rating systems, some of which are still getting off the ground, will co-exist with federal proficiency targets in many states, and they are (or will be) used for a variety of purposes, including closure, resource allocation and informing parents and the public (see our posts on the systems in IN, FL, OH, CO, and NYC).*

The approach that most states are using, in part due to the "ESEA flexibility" guidelines set by the U.S. Department of Education, is to combine different types of measures, often very crudely, into a single grade or categorical rating for each school. Administrators and media coverage usually characterize these ratings as measures of school performance – low-rated schools are called "low performing," while those receiving top ratings are characterized as "high performing." That's not accurate – or, at best, it's only partially true.

Some of the indicators that comprise the ratings, such as proficiency rates, are best interpreted as (imperfectly) describing student performance on tests, whereas other measures, such as growth model estimates, make some attempt to isolate schools’ contribution to that performance. Both might have a role to play in accountability systems, but they're more or less appropriate depending on how you’re trying to use them.
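As a purely hypothetical illustration (all numbers below are invented), consider how a simple weighted composite can assign identical ratings to two very different schools, one with high proficiency but low growth and the other the reverse, thereby blurring the distinction between the two types of measures:

```python
# Hypothetical illustration (all numbers invented): a single weighted rating can
# make two very different schools look identical, blurring the line between
# student performance (status) and the school's contribution (growth).
schools = {
    "School A": {"proficiency_rate": 0.90, "growth_percentile": 35},
    "School B": {"proficiency_rate": 0.45, "growth_percentile": 80},
}

def composite_rating(measures, status_weight=0.5):
    status = measures["proficiency_rate"] * 100  # put both measures on a 0-100 scale
    growth = measures["growth_percentile"]
    return status_weight * status + (1 - status_weight) * growth

for name, measures in schools.items():
    print(name, composite_rating(measures))
# Both schools score 62.5, even though one number mostly reflects whom the
# school serves and the other attempts to capture how much the school adds.
```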

So, here’s my question: Why do we insist on throwing them all together into a single rating for each school? To illustrate why I think this question needs to be addressed, let’s take a quick look at four highly-simplified situations in which one might use ratings.

When You Hear Claims That Policies Are Working, Read The Fine Print

When I point out that raw changes in state proficiency rates or NAEP scores are not valid evidence that a policy or set of policies is “working," I often get the following response: “Oh Matt, we can’t have a randomized trial or peer-reviewed article for everything. We have to make decisions and conclusions based on imperfect information sometimes."

This statement is obviously true. In this case, however, it's also a straw man. There’s a huge middle ground between the highest-quality research and the kind of speculation that often drives our education debate. I’m not saying we always need experiments or highly complex analyses to guide policy decisions (though such evidence is generally preferable and sometimes necessary). The point, rather, is that we shouldn’t draw conclusions based on evidence that doesn't support those conclusions.

This, unfortunately, happens all the time. In fact, many of the more prominent advocates in education today make their cases based largely on raw changes in outcomes immediately after (or sometimes even before) their preferred policies were implemented (also see here, here, here, here, here, and here). In order to illustrate the monumental assumptions upon which these and similar claims ride, I thought it might be fun to break them down quickly, in a highly simplified fashion. So, here are the four “requirements” that must be met in order to attribute raw test score changes to a specific policy (note that most of this can be applied not only to claims that policies are working, but also to claims that they're not working because scores or rates are flat):