A Descriptive Analysis Of The 2014 D.C. Charter School Ratings

The District of Columbia Public Charter School Board (PCSB) recently released the 2014 results of its “Performance Management Framework” (PMF), the rating system that the PCSB uses for its schools.

Very quick background: This system sorts schools into one of three “tiers,” with Tier 1 being the highest-performing, as measured by the system, and Tier 3 being the lowest. The ratings are based on a weighted combination of four types of factors -- progress, achievement, gateway, and leading -- which are described in detail in the first footnote.* As discussed in a previous post, the PCSB system, in my opinion, is better than many others out there, since growth measures play a fairly prominent role in the ratings, and, as a result, the final scores are only moderately correlated with key student characteristics such as subsidized lunch eligibility.** In addition, the PCSB is quite diligent about making the PMF results accessible to parents and other stakeholders, and, for the record, I have found the staff very open to sharing data and answering questions.

That said, PCSB's big message this year was that schools’ ratings are improving over time, and that, as a result, a substantially larger proportion of DC charter students are attending top-rated schools. This was reported uncritically by several media outlets, including this story in the Washington Post. It is, however, based on a somewhat questionable use of the data. Let’s take a very simple look at the PMF dataset, first to examine this claim and then, more importantly, to see what we can learn about the PMF and DC charter schools in 2013 and 2014.
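Concretely, the check at issue looks something like the minimal sketch below. The file and column names (year, tier, enrollment, school_id) are hypothetical placeholders for however one organizes the published PMF data; the key point is that a fair year-over-year comparison should account for which schools were actually rated in both years.

```python
# A minimal sketch: compute the enrollment-weighted share of DC charter
# students in each tier, by year. File and column names are hypothetical.
import pandas as pd

pmf = pd.read_csv("pmf_ratings.csv")  # one row per rated school per year

# Share of students attending schools in each tier, within each year
by_tier = pmf.groupby(["year", "tier"])["enrollment"].sum()
shares = by_tier / by_tier.groupby(level="year").transform("sum")
print(shares)

# A fairer trend comparison restricts to schools rated in both years,
# since newly rated or closed schools can shift the totals on their own.
in_both = pmf.groupby("school_id")["year"].transform("nunique") == 2
print(pmf[in_both].groupby(["year", "tier"])["enrollment"].sum())
```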

The Equity Projection

A new Mathematica report examines the test-based impact of The Equity Project (TEP), a New York City charter school serving grades 5-8. TEP opened for the 2009-10 school year, receiving national attention mostly due to one unusual policy: It pays teachers $125,000 per year, regardless of experience and education, in addition to annual bonuses (up to $25,000) for returning teachers. TEP largely offsets these unusually high salary costs by minimizing the number of administrators and maintaining larger class sizes.

As is typical of Mathematica, the TEP analysis is thorough and well-done. The performance of TEP students is compared with that of similar peers who had a comparable probability of enrolling in TEP, as identified with propensity scores. In general, the study’s results were quite positive. Although there were statistically discernible negative impacts of attendance for TEP’s first cohort of students during their first two years, the cumulative estimated test-based impact was significant, positive and educationally meaningful after three and four years of attendance. As is often the case, the estimated effects were stronger in math than in reading (estimated effect sizes for the former were very large in magnitude). The Mathematica researchers also present analyses of student attrition, which did not appear to bias the estimates substantially, and they show that the primary results are robust to alternative specifications (e.g., different matching techniques, score transformations, etc.).
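To make the design concrete, here is a stylized sketch of the propensity-score approach described above -- not Mathematica's actual code or specification, just an illustration using synthetic data and invented variable names.

```python
# Stylized propensity-score matching sketch (synthetic data, illustrative
# variable names; not Mathematica's actual specification).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "prior_score": rng.normal(0, 1, n),   # baseline achievement
    "frl": rng.integers(0, 2, n),         # subsidized lunch flag
})
# Synthetic enrollment and outcome, generated only for demonstration
p = 1 / (1 + np.exp(-(-1.5 + 0.3 * df["prior_score"] + 0.5 * df["frl"])))
df["tep"] = rng.random(n) < p
df["outcome"] = 0.7 * df["prior_score"] + 0.2 * df["tep"] + rng.normal(0, 1, n)

# 1) Model each student's probability of enrolling (the propensity score)
X = df[["prior_score", "frl"]]
df["pscore"] = LogisticRegression().fit(X, df["tep"]).predict_proba(X)[:, 1]

# 2) Match each enrollee to the comparison student with the closest score
treated, control = df[df["tep"]], df[~df["tep"]]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3) Compare mean outcomes across the matched groups
print("Estimated effect:", treated["outcome"].mean() - matched["outcome"].mean())
```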

Now we get to the tricky questions about these results: What caused them and what can be learned as a result? That’s the big issue with charter analyses in general (and with research on many other interventions): One can almost never separate the “why” from the “what” with any degree of confidence. And TEP, with its "flagship policy" of high teacher salaries, which might appeal to all "sides" in the education policy debate, provides an interesting example in this respect.

Regular Public And Charter Schools: Is A Different Conversation Possible?

Uplifting Leadership, Andrew Hargreaves' new book with coauthors Alan Boyle and Alma Harris, is based on a seven-year international study, and illustrates how leaders from diverse organizations were able to lift up their teams by harnessing and balancing qualities that we often view as opposites, such as dreaming and action, creativity and discipline, measurement and meaningfulness, and so on.

Chapter three, “Collaboration With Competition,” was particularly interesting to me and relevant to our series, “The Social Side of Reform.” In that series, we've been highlighting research that emphasizes the value of collaboration and considers extreme competition to be counterproductive. But is that always the case? Can collaboration and competition live under the same roof and, in combination, promote systemic improvement? Could, for example, different types of schools serving (or competing for) the same students work in cooperative ways for the greater good of their communities?

Hargreaves and colleagues believe that establishing this environment is difficult but possible, and that it has already happened in some places. In fact, Al Shanker was one of the first proponents of a model that bears some similarity to this approach. In this post, I highlight some ideas and illustrations from Uplifting Leadership and tie them to Shanker's own vision of how charter schools, conceived as idea incubators and, eventually, as innovations within the public school system, could potentially lift all students and the entire system, from the bottom up, one group of teachers at a time.

The Thrill Of Success, The Agony Of Measurement

** Reprinted here in the Washington Post

The recent release of the latest New York State testing results created a little public relations coup for the controversial Success Academies charter chain, which operates over 20 schools in New York City, and is seeking to expand.

Shortly after the release of the data, the New York Post published a laudatory article noting that seven of the Success Academies had overall proficiency rates that were among the highest in the state, and arguing that the schools “live up to their name.” The Daily News followed up by publishing an op-ed that compares the Success Academies' combined 94 percent math proficiency rate to the overall city rate of 35 percent, and uses that to argue that the chain should be allowed to expand because its students “aced the test” (this is not really what high proficiency rates mean, but fair enough).

On the one hand, this is great news, and a wonderfully impressive showing by these students. On the other, decidedly less sensational hand, it's also another example of the use of absolute performance indicators (e.g., proficiency rates) as measures of school rather than student performance, even though they are not particularly useful for the former purpose since, among other reasons, they do not account for where students start out upon entry to the school. I personally don't care whether Success Academy gets good or bad press. I do, however, believe that how one gauges effectiveness, test-based or otherwise, is important, even if one reaches the same conclusion using different measures.
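A toy example, with entirely made-up numbers, shows why the distinction matters: two schools can post identical proficiency rates while telling very different stories about what the school itself contributed.

```python
# Hypothetical illustration: identical ending proficiency, very different
# growth. A proficiency rate alone cannot tell these schools apart.
entering = {"School A": 58, "School B": 21}   # % proficient at entry
ending   = {"School A": 60, "School B": 60}   # % proficient this year

for school in entering:
    change = ending[school] - entering[school]
    print(f"{school}: {ending[school]}% proficient ({change:+d} points)")
# School A: 60% proficient (+2 points)
# School B: 60% proficient (+39 points)
```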

A Few More Points About Charter Schools And Extended Time

A few weeks ago, I wrote a post that made a fairly simple point about the practice of expressing estimated charter effects on test scores as “days of additional learning”: Among the handful of states, districts, and multi-site operators that consistently have been shown to have a positive effect on testing outcomes, might not those “days of learning” be explained, at least in part, by the fact that they actually do offer additional days of learning, in the form of much longer school days and years?

That is, there is a small group of charter models/chains that seem to get good results. There are many intangible factors that make a school effective, but to the degree we can chalk this up to concrete practices or policies, additional time may be the most compelling possibility. Although it’s true that school time must be used wisely, it’s difficult to believe that the sheer amount of extra time that the flagship chains offer would not improve testing performance substantially.
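Some back-of-the-envelope arithmetic illustrates the scale involved. The schedules below are hypothetical, but they are in the general range reported for extended-time charter models versus a typical public school.

```python
# Hypothetical schedules: how much extra time does an extended day/year
# provide, expressed in equivalent regular school days?
regular_hours_per_day, regular_days = 6.5, 180
charter_hours_per_day, charter_days = 8.5, 190

regular_total = regular_hours_per_day * regular_days   # 1,170 hours/year
charter_total = charter_hours_per_day * charter_days   # 1,615 hours/year
extra_hours = charter_total - regular_total
extra_days = extra_hours / regular_hours_per_day

print(f"{extra_hours:.0f} extra hours = {extra_days:.0f} regular school days")
# 445 extra hours = 68 regular school days
```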

To their credit, many charter advocates do acknowledge the potentially crucial role of extended time in explaining their success stories. And the research, tentative though it still is, is rather promising. Nevertheless, there are a few important points that bear repeating when it comes to the idea of massive amounts of additional time, particularly given the push to get regular public schools to adopt the practice.

Estimated Versus Actual Days Of Learning In Charter School Studies

One of the purely presentational aspects that separates the new “generation” of CREDO charter school analyses from the old is that the more recent reports convert estimated effect sizes from standard deviations into a “days of learning” metric. You can find similar approaches in other reports and papers as well.

I am very supportive of efforts to make interpretation easier for those who aren’t accustomed to thinking in terms of standard deviations, so I like the basic motivation behind this. I do have concerns about this particular conversion -- specifically, that it overstates things a bit -- but I don’t want to get into that issue. If we just take CREDO’s “days of learning” conversion at face value, my primary, far simpler reaction to hearing that a given charter school sector's impact is equivalent to a given number of additional “days of learning” is to wonder: Does this charter sector actually offer additional “days of learning,” in the form of longer school days and/or years?
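For concreteness, here is roughly how such a conversion works. The 0.25 standard deviations per 180-day year used below is an assumed benchmark for “typical annual growth,” chosen only for illustration; reports differ on the exact figure.

```python
# Convert an effect size (in standard deviations) into "days of learning,"
# given an assumed benchmark for one year of typical growth.
SD_PER_YEAR = 0.25    # assumed typical annual growth, in SD (illustrative)
DAYS_PER_YEAR = 180   # standard school year

def effect_to_days(effect_sd: float) -> float:
    return effect_sd * DAYS_PER_YEAR / SD_PER_YEAR

print(effect_to_days(0.05))  # a 0.05 SD impact -> 36.0 "days of learning"
```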

This matters to me because I (and many others) have long advocated moving past the charter versus regular public school “horserace” and trying to figure out why some charters seem to do very well and others do not. Additional time is one of the more compelling observable possibilities, and, while the two are not perfectly comparable, it fits nicely with the “days of learning” expression of effect sizes. Take New York City charter schools, for example.

Extended School Time Proposals And Charter Schools

One of the (many) education reform proposals that has received national attention over the past few years is “extended learning time” – that is, expanding the day and/or year to give students more time in school.

Although how schools use the time they have with students is, of course, no less important than how much time they have with them, the proposal to expand the school day/year may have merit, particularly for schools and districts serving larger proportions of students who need to catch up. I have noticed that one of the motivations for the extended time push is the (correct) observation that the charter school models that have proven effective (at least by the standard of test score gains) utilize extended time.

On the one hand, this is a good example of what many (including myself) have long advocated – that the handful of successful charter school models can potentially provide a great deal of guidance for all schools, regardless of their governance structure. On the other hand, it is also important to bear in mind that many of the high-profile charter chains that receive national attention don’t just expand their school years by a few days or even a few weeks, as has been proposed in several states. In many cases, they extend them by months.

Revisiting The Issue Of Charter Schools And Special Education Students

One of the most common claims against charter schools is that they “push out” special education students. The basic idea is that charter operators, who are obsessed with being able to show strong test results and thus bolster their reputations and enrollment, subtly or not-so-subtly “counsel out” students with special education plans (or somehow discourage their enrollment).

This is, of course, a serious issue, one that is addressed directly in a recent report from the Center for Reinventing Public Education (CRPE), which presents an analysis of data from a sample of New York City charter elementary schools (and compares them to regular public schools in the city). It is important to note that many of the primary results of this study, including those focused on the "pushing out" issue, cannot be used to draw any conclusions about charters across the nation. There were only 25 NYC charters included in that (lottery) analysis, all of them elementary schools, and these were not necessarily representative of the charter sector in the city, to say nothing of charters nationwide.

That said, the report, written by Marcus Winters, finds, among other things, that charters enroll a smaller proportion of special education students than regular public schools (as is the case elsewhere), and that this is primarily because these students are less likely to apply for entrance to charters (in this case, in kindergarten) than their regular education peers. He also presents results suggesting that this gap actually grows in later grades, mostly because charters are less likely to classify students as having special needs, and more likely to declassify students who have already been put on special education plans (whether these classifications and declassifications are appropriate is, of course, not examined in this report).

The Year In Research On Market-Based Education Reform: 2013 Edition

In the three most discussed and controversial areas of market-based education reform – performance pay, charter schools and the use of value-added estimates in teacher evaluations – 2013 saw the release of a couple of truly landmark reports, in addition to the normal flow of strong work coming from the education research community (see our reviews from 2010, 2011 and 2012).*

In one sense, this building body of evidence is critical and even comforting, given not only the rapid expansion of charter schools, but also and especially the ongoing design and implementation of new teacher evaluations (which, in many cases, include performance-based pay incentives). In another sense, however, there is good cause for anxiety. Although one must try policies before knowing how they work, the sheer speed of policy change in the U.S. right now means that policymakers are making important decisions on the fly, and there is a great deal of uncertainty as to how this will all turn out.

Moreover, while 2013 was without question an important year for research in these three areas, it also illustrated an obvious point: Proper interpretation and application of findings is perhaps just as important as the work itself.

A Quick Look At The DC Charter School Rating System

Having taken a look at several states’ school rating systems (see our posts on the systems in IN, OH, FL and CO), I thought it might be interesting to examine a system used by a group of charter schools – starting with the system used by charters in the District of Columbia. This is the third year the DC charter school board has released the ratings.

For elementary and middle schools (upon which I will focus in this post*), the DC Performance Management Framework (PMF) is a weighted index composed of: 40 percent absolute performance; 40 percent growth; and 20 percent what they call “leading indicators” (a more detailed description of this formula can be found in the second footnote).** The index scores are then sorted into one of three tiers, with Tier 1 being the highest, and Tier 3 the lowest.
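In code form, the formula looks something like the sketch below. The component scores and tier cutoffs shown are placeholders -- the 65/35 thresholds match my reading of the PCSB's published scale, but treat them as assumptions to verify.

```python
# Elementary/middle school PMF index: 40% absolute, 40% growth, 20% leading.
def pmf_score(absolute: float, growth: float, leading: float) -> float:
    """Combine component scores (each on a 0-100 scale) into the index."""
    return 0.40 * absolute + 0.40 * growth + 0.20 * leading

def tier(score: float) -> int:
    # Illustrative cutoffs (assumed): 65+ is Tier 1, below 35 is Tier 3.
    if score >= 65:
        return 1
    return 2 if score >= 35 else 3

s = pmf_score(absolute=70, growth=55, leading=60)
print(s, tier(s))  # 62.0 2
```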

So, these particular ratings weight absolute performance – i.e., how highly students score on tests – a bit less heavily than do most states that have devised their own systems, and they grant slightly more importance to growth and alternative measures. We might therefore expect to find a somewhat weaker relationship between PMF scores and student characteristics such as free/reduced price lunch eligibility (FRL), since these ratings depend less on the characteristics of the students that schools serve. Let’s take a quick look.
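That quick look boils down to a simple correlation, along these lines (the file and column names are hypothetical placeholders):

```python
# Correlate schools' PMF index scores with their FRL rates.
import pandas as pd

schools = pd.read_csv("dc_pmf.csv")  # one row per rated school
r = schools["pmf_score"].corr(schools["frl_rate"])
print(f"PMF score vs. FRL rate: r = {r:.2f}")
```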