A Few Points About The New CREDO Charter School Analysis


A few thoughts:

- CREDO is hardly alone in using "days of learning" to explain the test score gains. But it's still highly misleading; the correct way to express this is in terms of what we are measuring, which is test score items answered correctly. To state that a group gained "seven days of reading" over another has no analog in real-world teaching.

- One of the largest problems with CREDO's methodology is that they don't disaggregate the student demographic data as well as they could. There is no reason, for example, that CREDO couldn't have done separate analyses for students eligible for free lunch and for students eligible for free or reduced-price lunch. The former is an indicator of deeper poverty and could have provided a better match. Why conflate the two when the data allow a finer distinction?

- The same goes for special education, although I realize the data often don't allow for these distinctions. Still, there is all the difference in the world between a child with a severe cognitive impairment and one with a mild speech impediment.

- Matt's right that the real question here is "why" some charters are better than others, but I'd put it a different way: Are "successful" charters replicable? Can we reproduce the gains of a "good" charter for a large number of students? Ultimately, CREDO never even addresses this question. If the small differences are mostly peer effects, that doesn't speak very well for charters, does it?

Matthew, I think you missed what is possibly the biggest problem with the study: 8% of the worst charters closed between 2009 and 2013. This introduces massive survivorship bias into the study and renders the comparison to public schools essentially meaningless. See here, for instance:

Out of curiosity, what does a 0.01 standard deviation change reflect in terms of a test score? For example, would that correspond to one out of a hundred students answering one more question correctly? And while single-year gains can add up, they can also fluctuate from one year to the next as one year's cohort of students is better or worse than the cohorts that precede or follow it. I am struck by the fact that an improvement of 0.01 standard deviations is considered significant. 0.01 S.D. of improvement is not "a standard deviation here and a standard deviation there, and sooner or later you’re talking about large differences". On the other hand, I think you may be on the mark in suggesting "the need to start asking a different set of questions." It may be that, as time goes on, regular public schools and charter schools look more and more alike in terms of aggregate performance measures. In that case, you have to wonder whether the results are simply a reflection of the variability of the particular school in question, and whether those results really reflect being a charter or regular public school, or the underlying demographics of that school. This is not to discount the success a particular charter school (or regular public school) has achieved, but if we can't get at a curricular or structural difference that is portable, what is the purpose of the charter school movement?
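To make the commenter's question concrete, here is a minimal sketch of the arithmetic behind converting an effect size expressed in standard-deviation units into raw score points. The test scale and its standard deviation below are hypothetical assumptions for illustration only, not figures from the CREDO study:

```python
# Illustrative sketch: what a 0.01 SD effect means in raw score points.
# ASSUMED_TEST_SD is a hypothetical value, not taken from the CREDO report.

def effect_in_points(effect_sd: float, test_sd: float) -> float:
    """Translate an effect size (in SD units) into raw scale-score points."""
    return effect_sd * test_sd

# Suppose a state reading test has a standard deviation of 40 scale points.
ASSUMED_TEST_SD = 40.0

gain = effect_in_points(0.01, ASSUMED_TEST_SD)
print(f"A 0.01 SD effect corresponds to {gain:.1f} scale points")  # 0.4 points
```

On this (assumed) scale, a 0.01 SD difference amounts to well under half a scale point per student, which is why some readers question describing it as a meaningful gap.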


This web site and the information contained herein are provided as a service to those who are interested in the work of the Albert Shanker Institute (ASI). ASI makes no warranties, either express or implied, concerning the information contained on or linked from this site. The visitor uses the information provided herein at his/her own risk. ASI, its officers, board members, agents, and employees specifically disclaim any and all liability for damages that may result from the use of the information provided herein. The content in the Shanker Blog may not necessarily reflect the views or official policy positions of ASI or any related entity or organization.