Education Debate

  • New Research Report: Are U.S. Schools Inefficient?

    Written on June 7, 2016

    At one point or another we’ve all heard some version of the following talking points: 1) “Spending on U.S. education has doubled or tripled over the past few decades, but performance has remained basically flat”; or 2) “The U.S. spends more on education than virtually any other nation and yet still gets worse results.” If you pay attention, you will hear one or both of these statements frequently, coming from everyone from corporate CEOs to presidential candidates.

    The purpose of both of these statements is to argue that U.S. education is inefficient (that is, that it gets very little bang for the buck) and that spending more money will not help.

    Now, granted, these sorts of pseudo-empirical talking points almost always omit important nuances, yet in some cases they can still provide useful information. But, putting aside the actual relative efficiency of U.S. schools, these particular statements about U.S. education spending and performance are so rife with oversimplification that they fail to provide much, if any, useful insight into U.S. educational efficiency or the policies that affect it. Our new report, written by Rutgers University Professor Bruce D. Baker and Rutgers Ph.D. student Mark Weber, explains why and how this is the case. Baker and Weber’s approach is first to discuss why the typical presentations of spending and outcome data, particularly those comparing nations, are wholly unsuitable for evaluating U.S. educational efficiency vis-à-vis that of other nations. They then go on to present a more refined analysis of the data, adjusting for student characteristics, inputs such as class size, and other factors. Their conclusions will most likely be unsatisfying for all “sides” of the education debate.

    READ MORE
  • Are U.S. Schools Resegregating?

    Written on May 23, 2016

    Last week, the U.S. Government Accountability Office (GAO) issued a report, part of which presented an analysis of access to educational opportunities among the nation’s increasingly low-income and minority public school student population. The results, most generally, suggest that the proportion of the nation's schools with high percentages of lower-income (i.e., subsidized lunch eligible) and Black and Hispanic students increased between 2000 and 2013.

    The GAO also reports that these schools, compared to those serving fewer lower-income and minority students, tend to offer fewer math, science, and college-prep courses, and also to suspend, expel, and hold back ninth graders at higher rates.

    These are, of course, important and useful findings. Yet the vast majority of the news coverage of the report focused on the interpretation of these results as showing that U.S. schools are “resegregating.” That is, the news stories portrayed the finding that a larger proportion of schools serve more than 75 percent Black and Hispanic students as evidence that schools became increasingly segregated between the 2000-01 and 2013-14 school years. This is an incomplete, somewhat misleading interpretation of the GAO findings. In order to understand why, it is helpful to discuss briefly how segregation is measured.
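    The full post goes into the details, but as a rough illustration (this is not taken from the GAO report, and the enrollment figures below are invented), one standard measure of segregation is the dissimilarity index, which captures how evenly two groups are spread across schools relative to the district-wide composition. The minimal Python sketch below shows how the count of schools serving more than 75 percent minority students can rise sharply without any change in evenness:

        def dissimilarity_index(schools):
            """Dissimilarity index between two groups across schools:
            0 = perfectly even distribution, 1 = complete separation.
            `schools` is a list of (group_a_count, group_b_count) pairs."""
            total_a = sum(a for a, _ in schools)
            total_b = sum(b for _, b in schools)
            return 0.5 * sum(abs(a / total_a - b / total_b) for a, b in schools)

        # Hypothetical district: the minority share of enrollment grows from
        # 40% to 80%, but every school still mirrors the district-wide mix.
        year_2000 = [(40, 60), (40, 60), (40, 60)]   # no school above 75% minority
        year_2013 = [(80, 20), (80, 20), (80, 20)]   # every school above 75% minority

        print(dissimilarity_index(year_2000))  # 0.0 -> perfectly even
        print(dissimilarity_index(year_2013))  # 0.0 -> still perfectly even

    In this deliberately extreme example, every school crosses the “more than 75 percent minority” threshold between the two years, yet the index registers no change in how evenly students are distributed; that gap between a compositional count and an evenness measure is roughly the distinction at issue here.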

    READ MORE
  • New Report: Does Money Matter in Education? Second Edition

    Written on January 20, 2016

    In 2012, we released a report entitled “Does Money Matter in Education?,” written by Rutgers Professor Bruce Baker. The report presented a thorough, balanced review of the rather sizable body of research on the relationship between K-12 education spending and outcomes. The motivation for this report was to address the highly contentious yet often painfully oversimplified tribal arguments regarding the impact of education spending and finance reforms, as well as to provide an evidence-based guide for policymakers during a time of severe budgetary hardship. It remains our most viewed resource ever, by far.

    Now, almost four years later, education spending in most states and localities is still in trouble. For example, in 31 states, state funding of education is lower in 2016 than it was in 2008, prior to the recession (Leachman et al. 2016). Moreover, during this time, there has been a continuing effort to convince the public that how much we spend on schools doesn’t matter for outcomes, and that these spending cuts will do no harm.

    As is almost always the case, the evidence on spending in education is far more nuanced and complex than our debates about it (on both “sides” of the issue). And this evidence has been building for decades, with significant advances since the release of our first “Does Money Matter?” report. For this reason, we have today released the second edition, updated by the author. The report is available here.

    READ MORE
  • Teacher To Teacher: Classroom Reform Starts With “The Talk”

    Written on June 2, 2015

    Our guest author today is Melissa Halpern, a high school English teacher and Ed.M. candidate at the Harvard Graduate School of Education. For the past nine years, she's been dedicated to making schooling a happier, more engaging experience for a diverse range of students in Palm Beach County, FL.

    We teachers often complain, justifiably, that policymakers and even school administrators are too disconnected from the classroom to understand how students learn best. Research is one thing, we claim, but experience is another. As the only adults in the school setting who have ongoing, sustained experience with students, we’re in the best position to understand them—but do we really? Do we understand our students’ educational priorities, turn-ons, anxieties, and bones to pick in our classrooms and in the school at large?

    The truth is that no amount of research or experience makes us experts on the experiences and perspectives of the unique individuals who inhabit our classrooms. If we want to know what’s going on in their minds, we have to ask. We have to have “the school talk.”

    What have students learned that is important to them, and what do they wish they could learn? What makes them feel happy and empowered at school? What makes them feel bored, stressed, or dehumanized?

    READ MORE
  • The Debate And Evidence On The Impact Of NCLB

    Written on February 17, 2015

    There is currently a flurry of debate focused on the question of whether “NCLB worked.” This question, which surfaces regularly in the education field, is particularly salient in recent weeks, as Congress holds hearings on reauthorizing the law.

    Any time there is a spell of “did NCLB work?” activity, one can hear and read numerous attempts to use simple NAEP changes in order to assess its impact. Individuals and organizations, including both supporters and detractors of the law, attempt to make their cases by presenting trends in scores, parsing subgroup estimates, and so on. These efforts, though typically well-intentioned, do not, of course, tell us much of anything about the law’s impact. One can use simple, unadjusted NAEP changes to prove or disprove any policy argument. And the reason is that they are not valid evidence of an intervention's effects. There’s more to policy analysis than subtraction.

    But it’s not just the inappropriate use of evidence that makes these “did NCLB work?” debates frustrating and, often, unproductive. It is also the fact that NCLB really cannot be judged in simple, binary terms. It is a complex, national policy with considerable inter-state variation in design/implementation and various types of effects, intended and unintended. This is not a situation that lends itself to clear cut yes/no answers to the “did it work?” question.

    READ MORE
  • What You Need To Know About Misleading Education Graphs, In Two Graphs

    Written on September 25, 2014

    There’s no reason why insisting on proper causal inference can’t be fun.

    A few weeks ago, ASCD published a policy brief (thanks to Chad Aldeman for flagging it), the purpose of which is to argue that it is “grossly misleading” to make a “direct connection” between nations’ test scores and their economic strength.

    On the one hand, it’s implausible to assert that better educated nations aren’t stronger economically. On the other hand, I can certainly respect the argument that test scores are an imperfect, incomplete measure, and the doomsday rhetoric can sometimes get out of control.

    In any case, though, the primary piece of evidence put forth in the brief was the eye-catching graph below, which presented trends in NAEP versus those in U.S. GDP and productivity.

    READ MORE
  • Regular Public And Charter Schools: Is A Different Conversation Possible?

    Written on September 18, 2014

    Uplifting Leadership, Andrew Hargreaves' new book with coauthors Alan Boyle and Alma Harris, is based on a seven-year international study, and illustrates how leaders from diverse organizations were able to lift up their teams by harnessing and balancing qualities that we often view as opposites, such as dreaming and action, creativity and discipline, measurement and meaningfulness, and so on.

    Chapter three, Collaboration With Competition, was particularly interesting to me and relevant to our series, "The Social Side of Reform." In that series, we've been highlighting research that emphasizes the value of collaboration and considers extreme competition to be counterproductive. But, is that always the case? Can collaboration and competition live under the same roof and, in combination, promote systemic improvement? Could, for example, different types of schools serving (or competing for) the same students work in cooperative ways for the greater good of their communities?

    Hargreaves and colleagues believe that establishing this environment is difficult but possible, and that it has already happened in some places. In fact, Al Shanker was one of the first proponents of a model that bears some similarity. In this post, I highlight some ideas and illustrations from Uplifting Leadership and tie them to Shanker's own vision of how charter schools, conceived as idea incubators and, eventually, as innovations within the public school system, could potentially lift all students and the entire system, from the bottom up, one group of teachers at a time.

    READ MORE
  • The Semantics of Test Scores

    Written on August 15, 2014

    Our guest author today is Jennifer Borgioli, a Senior Consultant with Learner-Centered Initiatives, Ltd., where she supports schools with designing performance based assessments, data analysis, and curriculum design.

    The chart below was taken from the 2014 report on student performance on the Grades 3-8 tests administered by the New York State Education Department.

    Based on this chart, which of the following statements is the most accurate?

    A. “64 percent of 8th grade students failed the ELA test”

    B. “36 percent of 8th graders are at grade level in reading and writing”

    C. “36 percent of students meet or exceed the proficiency standard (Level 3 or 4) on the Grade 8 CCLS-aligned math test”

    READ MORE
  • Lost In Citation

    Written on July 31, 2014

    The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off,” this time in New York, where a newly-formed organization, the Partnership for Educational Justice (PEJ), is among the organizations spearheading the effort.

    Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be a lot more thorough than that offered on this webpage, which seems geared toward the public rather than the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).

    That said, I thought it might be useful to review the basic arguments and evidence PEJ presents, not really in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they're very common. It has also been my observation that advocates on both “sides” of the education debate tend to be fairly good at using data and research to describe problems and/or situations, yet sometimes fall a bit short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean.

    READ MORE
  • The Language Of Teacher Effectiveness

    Written on July 10, 2014

    There is a tendency in education circles these days, one that I'm sure has been discussed by others, and of which I myself have been “guilty” on countless occasions. The tendency is to use terms such as “effective/ineffective teacher” or “teacher performance” interchangeably with estimates from value-added and other growth models.

    Now, to be clear, I personally am not opposed to the use of value-added estimates in teacher evaluations and other policies, so long as it is done cautiously and appropriately (which, in my view, is not happening in very many places). Moreover, based on my reading of the research, I believe that these estimates can provide useful information about teachers’ performance in the classroom. In short, then, I am not disputing that value-added scores can be considered one useful proxy measure for teacher performance and effectiveness (and described as such), both formally and informally.

    Regardless of one's views on value-added and its policy deployment, however, there is a point at which our failure to define terms can go too far, and perhaps cause confusion.

    READ MORE
