ESEA

  • Who's Afraid of Virginia's Proficiency Targets?

    Written on September 5, 2012

    The accountability provisions in Virginia’s original application for “ESEA flexibility” (or "waiver") have received a great deal of criticism (see here, here, here and here). Most of this criticism focused on the Commonwealth's expectation levels, as described in “annual measurable objectives” (AMOs) – i.e., the statewide proficiency rates that its students are expected to achieve at the completion of each of the next five years, with separate targets established for subgroups such as those defined by race (black, Hispanic, Asian, white), income (subsidized lunch eligibility), limited English proficiency (LEP), and special education.

    Last week, in response to the criticism, Virginia agreed to amend its application, though it’s not yet clear exactly how the new rates will be calculated (only that lower-performing subgroups will be expected to make faster progress).

    In the meantime, I think it’s useful to review a few of the main criticisms that have been made over the past week or two and what they mean. The actual table containing the AMOs is pasted below (for math only; reading AMOs will be released after this year, since there’s a new test).

  • Senate's Harkin-Enzi ESEA Plan Is A Step Sideways

    Written on July 31, 2012

    Our guest authors today are Morgan Polikoff and Andrew McEachin. Morgan is Assistant Professor in the Rossier School of Education at the University of Southern California. Andrew is an Institute of Education Science postdoctoral fellow at the University of Virginia.

    By now, it is painfully clear that Congress will not be revising the Elementary and Secondary Education Act (ESEA) before the November elections. And with the new ESEA waivers, who knows when the revision will happen? Congress, however, seems to have some ideas about what next-generation accountability should look like, so we thought it might be useful to examine one leading proposal and see what the likely results would be.

    The proposal we refer to is the Harkin-Enzi plan, available here for review. Briefly, the plan identifies 15 percent of schools as targets of intervention, classified in three groups. First are the persistently low-achieving schools (PLAS); these are the 5 percent of schools that are the lowest performers, based on achievement level or a combination of level and growth. Next are the achievement gap schools (AGS); these are the 5 percent of schools with the largest achievement gaps between any two subgroups. Last are the lowest subgroup achievement schools (LSAS); these are the 5 percent of schools with the lowest achievement for any significant subgroup.

    The goal of this proposal is both to reduce the number of schools that are identified as low-performing and to create a new operational definition of consistently low-performing schools. To that end, we wanted to know what kinds of schools these groups would target and how stable the classifications would be over time.
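    To make the three-tier rule concrete, here is a minimal sketch of the classification logic described above. The school names, scores, and tie-breaking details are invented for illustration; the actual Harkin-Enzi plan's rules (e.g., combining level and growth for PLAS) are more involved.

    ```python
    def classify(schools):
        """Assign each school to at most one of PLAS, AGS, or LSAS.

        `schools` maps a school name to a dict of subgroup -> mean score.
        Each tier takes 5 percent of all schools, and a school already
        placed in an earlier tier is skipped by later ones.
        """
        n_tier = max(1, round(0.05 * len(schools)))
        labels = {}

        def take(metric, reverse, label):
            # Rank the not-yet-classified schools by the tier's metric
            # and label the top `n_tier` of them.
            ranked = sorted((s for s in schools if s not in labels),
                            key=metric, reverse=reverse)
            for s in ranked[:n_tier]:
                labels[s] = label

        # PLAS: lowest overall achievement (level only, for simplicity).
        take(lambda s: sum(schools[s].values()) / len(schools[s]), False, "PLAS")
        # AGS: largest gap between any two subgroups.
        take(lambda s: max(schools[s].values()) - min(schools[s].values()), True, "AGS")
        # LSAS: lowest achievement for any single subgroup.
        take(lambda s: min(schools[s].values()), False, "LSAS")
        return labels
    ```

    One design consequence worth noting: because each tier excludes schools already classified, the same school cannot be counted twice, which is part of how the plan caps identification at 15 percent.
    
    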

  • Labor Market Behavior Actually Matters In Labor Market-Based Education Reform

    Written on July 26, 2012

    Economist Jesse Rothstein recently released a working paper about which I am compelled to write, as it speaks directly to so many of the issues that we have raised here over the past year or two. The purpose of Rothstein’s analysis is to move beyond the talking points about teaching quality in order to see if strategies that have been proposed for improving it might yield benefits. In particular, he examines two labor market-oriented policies: performance pay and dismissing teachers.

    Both strategies are, at their cores, focused on selection (and deselection) – in other words, attracting and retaining higher-performing candidates and exiting, directly or indirectly, lower-performing incumbents. Both also take time to work and have yet to be experimented with systematically in most places; thus, there is relatively little evidence on the long-term effects of either.

    Rothstein’s approach is to model this complex dynamic, specifically the labor market behavior of teachers under these policies (i.e., choosing, leaving and staying in teaching), which is often ignored or assumed away, despite the fact that it is so fundamental to the policies themselves. He then calculates what would happen under this model as a result of performance pay and dismissal policies – that is, how they would affect the teacher labor market and, ultimately, student performance.*

    Of course, this is just a simulation, and must be (carefully) interpreted as such, but I think the approach and findings help shed light on three fundamental points about education reform in the U.S.

  • Examining Principal Turnover

    Written on July 16, 2012

    Our guest author today is Ed Fuller, Associate Professor in the Education Leadership Department at Penn State University. He is also the Director of the Center for Evaluation and Education Policy Analysis as well as the Associate Director for Policy of the University Council for Educational Administration.

    “No one knows who I am,” exclaimed a senior in a high-poverty, predominantly minority and low-performing high school in the Austin area. She explained, “I have been at this school four years and had four principals and six Algebra I teachers.”

    Elsewhere in Texas, the first school to be closed by the state for low performance was Johnston High School, which was led by 13 principals in the 11 years preceding closure. The school also had a teacher turnover rate greater than 25 percent in almost all of those years, and greater than 30 percent in seven of them.

    While the above examples are extreme cases, they underscore two interconnected issues – teacher and principal turnover – that often plague low-performing schools and, in the case of principal turnover, afflict a wide range of schools regardless of performance or demographics.

  • Low-Income Students In The CREDO Charter School Study

    Written on July 10, 2012

    A recent Economist article on charter schools, though slightly more nuanced than most mainstream media treatments of the charter evidence, contains a very common, somewhat misleading argument that I’d like to address quickly. It’s about the findings of the so-called "CREDO study," the important (albeit over-cited) 2009 national comparison of student achievement in charter and regular public schools in 16 states.

    Specifically, the article asserts that the CREDO analysis, which finds a statistically discernible but very small negative impact of charters overall (with wide underlying variation), also finds a significant positive effect among low-income students. This leads the Economist to conclude that the entire CREDO study “has been misinterpreted," because its real value is in showing that “the children who most need charters have been served well."

    Whether or not an intervention affects outcomes among subgroups of students is obviously important (though one has hardly "misinterpreted" a study by focusing on its overall results). And CREDO does indeed find a statistically significant, positive test-based impact of charters on low-income students, vis-à-vis their counterparts in regular public schools. However, as discussed here (and in countless textbooks and methods courses), statistical significance only means we can be confident that the difference is non-zero (it cannot be chalked up to random fluctuation). Significant differences are often not large enough to be practically meaningful.

    And this is certainly the case with CREDO and low-income students.
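    The distinction between statistical and practical significance can be shown with a small worked example (the numbers here are made up for illustration, not taken from CREDO): with very large samples, even a difference of one-hundredth of a standard deviation clears conventional significance thresholds.

    ```python
    import math

    def z_for_mean_difference(diff, sd, n_per_group):
        """Two-sample z-statistic for a difference in means,
        assuming equal group sizes and a common standard deviation."""
        se = sd * math.sqrt(2.0 / n_per_group)
        return diff / se

    # A hypothetical effect of 0.01 standard deviations, measured on
    # samples of a million students per group.
    z = z_for_mean_difference(diff=0.01, sd=1.0, n_per_group=1_000_000)
    # z comes out near 7, far beyond the 1.96 cutoff for p < 0.05, even
    # though the effect is only one-hundredth of a standard deviation.
    ```

    In other words, significance testing answers "is this difference distinguishable from zero?", not "is this difference big enough to matter?" – precisely the point at issue with the CREDO low-income estimate.
    
    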

  • The Data Are In: Experiments In Policy Are Worth It

    Written on July 9, 2012

    Our guest author today is David Dunning, professor of psychology at Cornell University, and a fellow of both the American Psychological Society and the American Psychological Association. 

    When I was a younger academic, I often taught a class on research methods in the behavioral sciences. On the first day of that class, I took as my mission to teach students only one thing—that conducting research in the behavioral sciences ages a person. I meant that in two ways. First, conducting research is humbling and frustrating. I cannot count the number of pet ideas I have had through the years, all of them beloved, that have gone to die in the laboratory at the hands of data unwilling to verify them.

    But, second, there is another, more positive way in which research ages a person. At times, data come back and verify a cherished idea, or even reveal a more provocative or valuable one that no one ever expected. It is a heady experience in those moments for the researcher to know something that perhaps no one else knows, to be wiser—more aged if you will—in a small corner of the human experience that he or she cares about deeply.

  • Gender Pay Gaps And Educational Achievement Gaps

    Written on June 13, 2012

    There is currently an ongoing rhetorical war of sorts over the gender wage gap. One “side” makes the common argument that women earn around 75 cents on the male dollar (see here, for example).

    Others assert that the gender gap is a myth, or that it is so small as to be unimportant.

    Often, these types of exchanges are enough to exasperate the casual observer, and inspire claims such as “statistics can be made to say anything." In truth, however, the controversy over the gender gap is a good example of how descriptive statistics, by themselves, say nothing. What matters is how they’re interpreted.

    Moreover, the manner in which one must interpret various statistics on the gender gap applies almost perfectly to the achievement gaps that are so often mentioned in education debates.

  • Quality Control In Charter School Research

    Written on May 18, 2012

    There's a fairly large body of research showing that charter schools vary widely in test-based performance relative to regular public schools, both by location as well as subgroup. Yet, you'll often hear people point out that the highest-quality evidence suggests otherwise (see here, here and here) – i.e., that there are a handful of studies using experimental methods (randomized controlled trials, or RCTs), and these analyses generally find stronger, more uniformly positive charter impacts.

    Sometimes, this argument is used to imply that the evidence, as a whole, clearly favors charters, and, perhaps by extension, that many of the rigorous non-experimental charter studies – those using sophisticated techniques to control for differences between students – would lead to different conclusions were they RCTs.*

    Though these latter assertions are based on a valid point about the power of experimental studies (the few of which we have are often ignored in the debate over charters), they are dubiously overstated for a couple of reasons, discussed below. But a new report from the (indispensable) organization Mathematica addresses the issue head on, by directly comparing estimates of charter school effects that come from an experimental analysis with those from non-experimental analyses of the same group of schools.

    The researchers find that there are differences in the results, but many are not statistically significant and those that are don't usually alter the conclusions. This is an important (and somewhat rare) study, one that does not, of course, settle the issue, but does provide some additional tentative support for the use of strong non-experimental charter research in policy decisions.

  • The Test-Based Evidence On New Orleans Charter Schools

    Written on April 27, 2012

    Charter schools in New Orleans (NOLA) now serve over four out of five students in the city – the largest market share of any big city in the nation. As of the 2011-12 school year, most of the city’s schools (around 80 percent), charter and regular public, are overseen by the Recovery School District (RSD), a statewide agency created in 2003 to take over low-performing schools, which assumed control of most NOLA schools in Katrina’s aftermath.

    Around three-quarters of these RSD schools (50 out of 66) are charters. The remainder of NOLA’s schools are overseen either by the Orleans Parish School Board (which is responsible for 11 charters and six regular public schools, and taxing authority for all parish schools) or by the Louisiana Board of Elementary and Secondary Education (which is directly responsible for three charters, and also supervises the RSD).

    New Orleans is often held up as a model for the rapid expansion of charter schools in other urban districts, based on the argument that charter proliferation since 2005-06 has generated rapid improvements in student outcomes. There are two separate claims potentially embedded in this argument. The first is that the city’s schools perform better than they did pre-Katrina. The second is that NOLA’s charters have outperformed the city’s dwindling supply of traditional public schools since the hurricane.

    Although I tend strongly toward the viewpoint that whether charter schools "work" is far less important than why – e.g., specific policies and practices – it might nevertheless be useful to quickly address both of the claims above, given all the attention paid to charters in New Orleans.

  • The Allure Of Teacher Quality

    Written on April 23, 2012

    Those following education know that policy focused on "teacher quality" has been by far the dominant paradigm for improving schools over the past few years. Some (but not nearly all) components of this all-hands-on-deck effort are perplexing to many teachers, and have generated quite a bit of pushback. No matter one’s opinion of this approach, however, what drives it is the tantalizing allure of variation in teacher quality.

    Fueled by the ever-increasing availability of detailed test score datasets linking teachers to students, the research literature on teachers’ test-based effectiveness has grown rapidly, in both size and sophistication. Analysis after analysis finds that, all else being equal, the variation in teachers’ estimated effects on students' test growth – the difference between the “top” and “bottom” teachers – is very large. In any given year, some teachers’ students make huge progress, others’ very little. Even if part of this estimated variation is attributable to confounding factors, the discrepancies are still larger than most any other measured "input" within the jurisdiction of education policy. The underlying assumption here is that “true” teacher quality varies to a degree that is at least somewhat comparable in magnitude to the spread of the test-based estimates.

    Perhaps that's the case, but it does not, by itself, help much. The key question is whether and how we can measure teacher performance at the individual level and, more importantly, influence the distribution – that is, raise the ceiling, the middle and/or the floor. The variation hangs out there like a drug to which we’re addicted, but haven’t really figured out how to administer. If there were some way to harness it efficiently, the potential benefits could be considerable. The focus of current education policy is in large part an effort to do anything and everything to try to figure this out. And, as might be expected given the enormity of the task, progress has been slow.




DISCLAIMER

This web site and the information contained herein are provided as a service to those who are interested in the work of the Albert Shanker Institute (ASI). ASI makes no warranties, either express or implied, concerning the information contained on or linked from shankerblog.org. The visitor uses the information provided herein at his/her own risk. ASI, its officers, board members, agents, and employees specifically disclaim any and all liability from damages which may result from the utilization of the information provided herein. The content in the Shanker Blog may not necessarily reflect the views or official policy positions of ASI or any related entity or organization.