
The Year In Research On Market-Based Education Reform


Stuart, To the school-focus explanation (which is plausible), I would add differences between high- and low-poverty schools in terms of: teacher labor markets; the applicant pool; the influence of peer effects and other unobserved school-level factors; and the relative performance of oversubscribed charter schools. I’m sure there are multiple explanations, varying by context, some substantive and some methodological. I’d like to see some actual work on this, and I’m not aware of much already done. As for the persistent elusiveness of reading-improvement policies, it’s certainly true to a degree, but there is plenty of evidence of reading effects, in the charter literature and elsewhere (math effects are always stronger, but even small relative reading differences are detectable). Nevertheless, on the core issues, we’re in full agreement here. Reading is a skill that requires background knowledge largely taught by parents and other figures outside of school. I think the best way to address these issues within the education system would be to complement the Common Core standards with a huge emphasis on common curriculum – making sure all students are being taught the best, research-based content that they need to read and write (and which would comprise the standards). This would, I think, go a LONG way. But even then, reading achievement is likely to remain highly resistant to intervention from education policies per se. I wish more commentators and especially policymakers would acknowledge this point and its implications. Finally, on your headline point, I suppose it would be misleading if the reason for the difference was definitely the different foci of the schools (as it may be for the two schools near you, but not necessarily for the [unknown] schools in the study).
Also, charter schools produced benefits in only one quadrant of the “poverty/subject matrix,” so I think it’s fair to characterize the results as no effect, especially given the fact that I was breezing through many papers, that I had described the findings in detail when I wrote about the study in July, and, most importantly, that I was making a point about why charters work rather than whether they do so. If you disagree, then we’ll have to add that to the list! Please keep reading/commenting.

I think pages 70-73 of the Mathematica middle school study are important. Quote: "In study charter schools that served more economically disadvantaged students (that is, schools in which the percentage eligible for free or reduced-price meals was above the sample median), the estimated impact on Year 2 mathematics scores was positive and significant (impact = 0.18; p-value = 0.002)." It goes on to say that charter schools serving black students outperformed those serving white students. So YMMV, but for me, if charter schools are doing a good job helping poor black kids (who need improved schooling more than rich white kids do) improve their math abilities (which we expect schools to be able to affect more than reading), then that's a big plus.

Forty years of work as an inner-city public school teacher, administrator, PTA president, researcher, and public school advocate convince me that there is no single typical "district" or charter public school. They vary widely – from Chinese, German, and Spanish immersion to Core Knowledge to project-based, etc. We are wasting a huge amount of time, effort, and money trying to decide which is better, district or charter. Instead, we should, as we did in Cincinnati (with help from the Cincinnati Federation of Teachers), learn from the best, whether district or charter. That led to significantly increased graduation rates and the elimination of the high school graduation gap between white and African American students.

Mr. Di Carlo, Could you help me understand what you meant in the following statement? "Strangely, the value-added analysis that got the most attention by far – and which became the basis for a series of LA Times stories – was also among the least consequential. The results were very much in line with more than a decade of studies on teacher effects." I looked at the link briefly, and it seems that the study shows that there are large differences in teacher quality as measured by value-added analysis. If this conclusion was used by the LA Times as part of their story, why was the study inconsequential? I'm not disagreeing. I'm just not familiar enough with this story to know what you mean, and I'd like to understand this important issue. Put another way, you cite lots of studies that show problems with value-added measures, yet for the LA study, you say that it is in line with more than a decade of research. How can these both be true? Is there a difference between the kinds of studies that I'm not understanding? Again, I'm not trying to be critical. I'm a teacher and I'd like to have a solid understanding of what the scholarship is teaching us. Thank you.

Hi Jeffrey, Thanks for your comment. Your misunderstanding is entirely my fault, as I should have been clearer with my language. First of all, when I said the LAT analysis was not especially “consequential,” I meant that the results were not new or surprising from a research perspective (i.e., they are in line with over a decade of prior work). The research on value-added demonstrates wide variation in teacher effects on test scores (I think convincingly), and so does the LAT analysis (which was a high-quality analysis, by the way). I suppose “consequential” is the wrong word to use there, especially since, in terms of public attention and impact, the LAT analysis (or at least the articles based on it) had a huge effect. But the findings were nothing new. “Surprising” or “original” might have been a better choice. As for whether there is a contradiction between the studies I cite and the value-added literature, the studies are less “problems with value-added” than contributions to it. As you know, this is how empirical research works – a body of evidence on a given topic accumulates, and new work addresses new angles and contexts, which adds to the greater understanding of the topic. However, it’s a whole different ballgame when you use this research in “real life” – in this case, using value-added estimates in high-stakes decisions about teachers. To do so, you have to be confident that the methods are “ready,” and that we have a good idea of how to use them properly. And this stuff is still evolving rapidly (in part because the special datasets linking teachers and students haven’t been widely collected until fairly recently). Many of the most important questions – such as whether school poverty or school “match” affect the results (and a dozen other issues), and how one might account for these factors – are just recently being addressed.
Yet so many states and districts, which might have benefited from this research, have already put new systems in place (and, for some of them, the haste shows). So, what I was trying to say (again, apparently not very clearly) was that the research demonstrates that teacher effects vary, but it’s still an open question as to whether the estimates are accurate and/or stable enough to identify where *individual* teachers fall within that varying distribution. The LAT analysis didn’t contribute much new information to this research effort, even though it had a huge effect in a different way (sparking debate and controversy). If you’re interested, I discuss some of these “theory/practice” issues in greater detail here. I hope this answers your question. Let me know if it doesn’t. And thanks for being a dedicated teacher who cares enough to muddle through this stuff. Matt

Mr. Di Carlo, Thanks for your detailed response. I think I get it now. There are (sometimes large) differences between teachers, in terms of quality, but it's difficult to know with precision exactly who those people are, especially given the usually small amounts of data we have to work with. I find this research fascinating and will continue to read your blog. Thanks again for taking the time to respond.

Good summary overall. I'd quibble with this description of the Mathematica middle school study: "Mathematica researchers also released an experimental evaluation showing no test score benefits of charter middle schools." No benefits on average, yes, but the overall average masks a crucial distinction: "we found that study charter schools serving more low income or low achieving students had statistically significant positive effects on math test scores, while charter schools serving more advantaged students—those with higher income and prior achievement—had significant negative effects on math test scores." Based on that more detailed description of their findings, it looks as if charter schools are doing very different things for different students -- raising up low-income and low-scoring students while actually harming richer and higher-scoring students. So if one wanted to expand charter schools in impoverished urban areas, the "no overall average benefit" finding would be beside the point.

Thanks for the kind words, Stuart. Fair enough on the Mathematica report (I had to resist detail to keep the post reasonably short), though the effect among lower-income students was significant in math only (and, of course, the coefficients measure the relative, not the absolute, charter impact). Also, as you know, my point there was less about whether charters work than about why. From that angle, the discrepancy in results between high- and low-poverty schools is, in my view, not at all beside the point. It’s a critical issue. If the charter concept is sound, shouldn’t it produce better results in most cases, instead of in just one subject and “type” of school? Why would effects vary by student characteristics (and, for that matter, subject)? I could speculate, but that’s all it would be. It’s an interesting question, and an important one.

Here's what I would theorize: 1. I tend to agree with Mike Petrilli's explanation: urban charter schools tend to have the express focus of raising achievement, while lots of suburban charter schools have a very different purpose (to offer a more creative and progressive alternative to the traditional public schools). So both types of charter schools could be succeeding at what they're trying to do, even though the "average" achievement gain is nil. 2. Finding achievement gains in math but not reading is what seems to happen in just about every educational study ever done. I exaggerate, perhaps, but not by much. I suspect this is because schools are the main place in life where children learn and do math, whereas reading is a skill that most parents practice with their children. (Parents are more likely to read a book every night to their kids than to do a worksheet of long division problems.) More than that: reading comprehension in higher grades is closely tied to background knowledge, and background knowledge is something that would seem to be affected much more by out-of-school factors than math.

Just on an anecdotal basis, the two charter schools near where I'm from are the Benton County School of the Arts, and Haas Hall (a science/math academy). If test scores at the latter go up (because math is one of their focuses) while test scores at the former don't do as well (because their whole purpose is to cater to kids who are more into ballet, music, art, etc., rather than test scores), then averaging their performance together would create the headline: "NO AVERAGE TEST SCORE GAINS FROM CHARTER SCHOOLS!" But wouldn't that be the wrong way to look at it?
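The averaging concern in these last two comments can be made concrete with a toy calculation. The numbers below are hypothetical (only the 0.18 figure echoes the subgroup impact quoted from the Mathematica report; the offsetting negative value is an illustrative assumption), but they show how a pooled average can report "no effect" even when both subgroups have real, opposite effects:

```python
# Toy illustration (hypothetical numbers): a pooled average can mask
# offsetting subgroup effects, as in the Mathematica subgroup results.

# Hypothetical math-score impacts, in standard deviations, for two
# equal-sized groups of charter schools.
impacts = {
    "schools serving low-income students": 0.18,    # positive effect
    "schools serving higher-income students": -0.18  # negative effect
}

# Simple average across the two equal-sized groups.
overall = sum(impacts.values()) / len(impacts)

for group, effect in impacts.items():
    print(f"{group}: {effect:+.2f}")
print(f"Overall average impact: {overall:+.2f}")  # prints +0.00
```

Averaging the two groups yields zero, which is the "no average test score gains" headline, even though neither group actually experienced a zero effect.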
