Education Reporting

  • Are U.S. Schools Resegregating?

    Written on May 23, 2016

    Last week, the U.S. Government Accountability Office (GAO) issued a report, part of which presented an analysis of access to educational opportunities among the nation’s increasingly low income and minority public school student population. The results, most generally, suggest that the proportion of the nation's schools with high percentages of lower income (i.e., subsidized lunch eligible) and Black and Hispanic students increased between 2000 and 2013.

    The GAO also reports that these schools, compared to those serving fewer lower income and minority students, tend to offer fewer math, science, and college prep courses, and also to suspend, expel, and hold back ninth graders at higher rates.

    These are, of course, important and useful findings. Yet the vast majority of the news coverage of the report focused on the interpretation of these results as showing that U.S. schools are “resegregating.” That is, the news stories portrayed the finding that a larger proportion of schools serve more than 75 percent Black and Hispanic students as evidence that schools became increasingly segregated between the 2000-01 and 2013-14 school years. This is an incomplete, somewhat misleading interpretation of the GAO findings. In order to understand why, it is helpful to discuss briefly how segregation is measured.
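
    To make the distinction concrete: segregation measures capture how evenly groups are distributed across schools, not simply how many high-minority schools exist. One widely used measure (not necessarily the one the GAO used, and with hypothetical counts below) is the index of dissimilarity, sketched here:

    ```python
    def dissimilarity_index(group_a_counts, group_b_counts):
        """Index of dissimilarity across schools.

        Inputs are per-school enrollment counts for two groups.
        Returns a value between 0 (the two groups are distributed
        identically across schools) and 1 (complete segregation).
        """
        total_a = sum(group_a_counts)
        total_b = sum(group_b_counts)
        return 0.5 * sum(
            abs(a / total_a - b / total_b)
            for a, b in zip(group_a_counts, group_b_counts)
        )

    # Hypothetical two-school district: each group concentrated
    # entirely in one school yields complete segregation (1.0).
    fully_segregated = dissimilarity_index([100, 0], [0, 100])
    # An even split across both schools yields no segregation (0.0).
    fully_integrated = dissimilarity_index([50, 50], [50, 50])
    ```

    Note that an index like this can hold steady even as the number of high-minority schools rises (for instance, when overall demographics shift), which is why the two statistics should not be conflated.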

    READ MORE
  • Teachers And Education Reform, On A Need To Know Basis

    Written on July 1, 2014

    A couple of weeks ago, the website Vox.com published an article entitled, “11 facts about U.S. teachers and schools that put the education reform debate in context." The article, in the wake of the Vergara decision, is supposed to provide readers with the “basic facts” about the current education reform environment, with a particular emphasis on teachers. Most of the 11 facts are based on descriptive statistics.

    Vox advertises itself as a source of accessible, essential, summary information -- what you "need to know" -- for people interested in a topic but not necessarily well-versed in it. Right off the bat, let me say that this is an extraordinarily difficult task, and in constructing lists such as this one, there’s no way to please everyone (I’ve read a couple of Vox’s education articles and they were okay).

    That said, someone sent me this particular list, and it’s pretty good overall, especially since it does not reflect overt advocacy for given policy positions, as so many of these types of lists do. But I was compelled to comment on it. I’d like to say that I did this to make some lofty point about the strengths and weaknesses of data and statistics packaged for consumption by the general public. It would, however, be more accurate to say that I started doing it and just couldn't stop. In any case, here’s a little supplemental discussion of each of the 11 items:

    READ MORE
  • Immediate Gratification And Education Policy

    Written on December 9, 2013

    A couple of months ago, Bill Gates said something that received a lot of attention. With regard to his foundation’s education reform efforts, which focus most prominently on teacher evaluations, but encompass many other areas, he noted, “we don’t know if it will work." In fact, according to Mr. Gates, “we won’t know for probably a decade."

    He’s absolutely correct. Most education policies, including (but not limited to) those geared toward shifting the distribution of teacher quality, take a long time to work (if they do work), and the research assessing these policies requires a great deal of patience. Yet so many of the most prominent figures in education policy routinely espouse the opposite viewpoint: Policies are expected to have an immediate, measurable impact (and their effects are assessed in the crudest manner imaginable).

    A perfect example was the reaction to the recent release of results of the National Assessment of Educational Progress (NAEP).

    READ MORE
  • New York State Of Mind

    Written on August 13, 2013

    Last week, the results of New York’s new Common Core-aligned assessments were national news. For months, officials throughout the state, including New York City, had been preparing the public for the release of these data.

    Their basic message was that the standards, and thus the tests based upon them, are more difficult, and they represent an attempt to truly gauge whether students are prepared for college and the labor market. The inevitable consequence of raising standards, officials have been explaining, is that fewer students will be “proficient” than in previous years (which was, of course, the case) – this does not mean that students are performing worse, only that they are being held to higher expectations, and that the skills and knowledge being assessed require a new, more expansive curriculum. Therefore, interpretation of the new results versus those in previous years must be extremely cautious, and educators, parents and the public should not jump to conclusions about what they mean.

    For the most part, the main points of this public information campaign are correct. It would, however, be wonderful if similar caution were evident in the roll-out of testing results in past (and, more importantly, future) years.

    READ MORE
  • A Quick Look At "Best High School" Rankings

    Written on May 13, 2013

    ** Reprinted here in the Washington Post

    Every year, a few major media outlets publish high school rankings. Most recently, Newsweek (in partnership with The Daily Beast) issued its annual list of the “nation’s best high schools." Their general approach to this task seems quite defensible: To find the high schools that “best prepare students for college."

    The rankings are calculated using six measures: graduation rate (25 percent); college acceptance rate (25); AP/IB/AICE tests taken per student (25); average SAT/ACT score (10); average AP/IB/AICE score (10); and the percentage of students enrolled in at least one AP/IB/AICE course (5).
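
    The weighting scheme amounts to a simple weighted sum. Here is a minimal sketch of that arithmetic, assuming each measure has first been rescaled to a common 0–100 range (the metric names and any values are illustrative, not Newsweek's actual data or normalization method):

    ```python
    # Weights as reported for the Newsweek rankings; each metric is
    # assumed to be pre-normalized to a 0-100 scale before weighting.
    WEIGHTS = {
        "graduation_rate": 0.25,
        "college_acceptance_rate": 0.25,
        "ap_ib_tests_per_student": 0.25,
        "avg_sat_act_score": 0.10,
        "avg_ap_ib_score": 0.10,
        "pct_in_ap_ib_course": 0.05,
    }

    def composite_score(metrics):
        """Weighted sum of a school's normalized metrics (0-100 scale)."""
        return sum(WEIGHTS[name] * value for name, value in metrics.items())

    # A hypothetical school scoring 100 on every normalized metric
    # receives the maximum composite of 100, since the weights sum to 1.
    perfect_school = {name: 100.0 for name in WEIGHTS}
    top_score = composite_score(perfect_school)
    ```

    One consequence of this design worth noticing: 55 percent of the weight rides on AP/IB/AICE participation and performance, so schools' composite scores are heavily sensitive to course-offering decisions, not just student outcomes.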

    Needless to say, even the most rigorous, sophisticated measures of school performance will be imperfect at best, and the methods behind these lists have been subject to endless scrutiny. However, let's take a quick look at three potentially problematic issues with the Newsweek rankings, how the results might be interpreted, and how the system compares with that published by U.S. News and World Report.

    READ MORE
  • A Simple Choice Of Words Can Help Avoid Confusion About New Test Results

    Written on January 9, 2013

    In 1998, the National Institutes of Health (NIH) lowered the threshold at which people are classified as “overweight." Literally overnight, about 25 million Americans previously considered to be at a healthy weight were now overweight. If, the next day, you saw a newspaper headline that said “number of overweight Americans increases," you would probably find that a little misleading. America’s “overweight” population didn’t really increase; the definition changed.

    Fast forward to November 2012, during which Kentucky became the first state to release results from new assessments that were aligned with the Common Core Standards (CCS). This led to headlines such as, "Scores Drop on Kentucky’s Common Core-Aligned Tests" and "Challenges Seen as Kentucky’s Test Scores Drop As Expected." Yet, these descriptions unintentionally misrepresent what happened. It's not quite accurate – or at least it's highly imprecise – to say that test scores “dropped," just as it would have been wrong to say that the number of overweight Americans increased overnight in 1998 (actually, they’re not even scores, they’re proficiency rates). Rather, the state adopted different tests, with different content, a different design, and different standards by which students are deemed “proficient."
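
    The threshold arithmetic is easy to see in a toy example. The scores and cut points below are entirely hypothetical (not Kentucky's): the same underlying score distribution produces a very different "proficiency rate" depending solely on where the proficiency cut is set.

    ```python
    # Hypothetical scale scores for eight students (unchanged throughout).
    scores = [210, 225, 240, 255, 270, 285, 300, 315]

    def proficiency_rate(scores, cut_score):
        """Percent of students at or above a given proficiency cut."""
        return 100.0 * sum(s >= cut_score for s in scores) / len(scores)

    # Raising the (hypothetical) cut from 240 to 280 lowers the reported
    # rate from 75.0 to 37.5 percent, with no change in any student's score.
    old_rate = proficiency_rate(scores, 240)
    new_rate = proficiency_rate(scores, 280)
    ```

    In other words, the rate can fall by half without a single score moving, which is exactly why "scores drop" is the wrong headline for a standards change.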

    Over the next 2-3 years, a large group of states will also release results from their new CCS-aligned tests. It is important for parents, teachers, administrators, and other stakeholders to understand what the results mean. Most of them will rely on newspapers and blogs, and so one exceedingly simple step that might help out is some polite, constructive language-policing.

    READ MORE
  • Are Teachers Changing Their Minds About Education Reform?

    Written on December 14, 2012

    ** Reprinted here in the Washington Post

    In a recent Washington Post article called “Teachers leaning in favor of reforms," veteran reporter Jay Mathews puts forth an argument that one hears rather frequently – that teachers are “changing their minds," in a favorable direction, about the current wave of education reform. Among other things, Mr. Mathews cites two teacher surveys. One of them, which we discussed here, is a single-year survey that doesn't actually look at trends, and therefore cannot tell us much about shifts in teachers’ attitudes over time (it was also a voluntary online survey).

    His second source, on the other hand, is in fact a useful means of (cautiously) assessing such trends (though the article doesn't actually look at them). That is the Education Sector survey of a nationally-representative sample of U.S. teachers, which they conducted in 2003, 2007 and, most recently, in 2011.

    This is a valuable resource. Like other teacher surveys, it shows that educators’ attitudes toward education policy are diverse. Opinions vary by teacher characteristics, context and, of course, by the policy being queried. Moreover, views among teachers can (and do) change over time, though, when looking at cross-sectional surveys, one must always keep in mind that observed changes (or lack thereof) might be due in part to shifts in the characteristics of the teacher workforce. There's an important distinction between changing minds and changing workers (which Jay Mathews, to his great credit, discusses in this article).*

    That said, when it comes to many of the more controversial reforms happening in the U.S., those about which teachers might be "changing their minds," the results of this particular survey suggest, if anything, that teachers’ attitudes are actually quite stable.

    READ MORE
  • NCLB And The Institutionalization Of Data Interpretation

    Written on October 10, 2012

    It is a gross understatement to say that the No Child Left Behind (NCLB) law was, is – and will continue to be – a controversial piece of legislation. Although opinion tends toward the negative, there are certain features, such as a focus on student subgroup data, that many people support. And it’s difficult to make generalizations about whether the law’s impact on U.S. public education was “good” or “bad” by some absolute standard.

    The one thing I would say about NCLB is that it has helped to institutionalize the improper interpretation of testing data.

    Most of the attention to the methodological shortcomings of the law focuses on “adequate yearly progress” (AYP) – the crude requirement that all schools must make “adequate progress” toward the goal of 100 percent proficiency by 2014. And AYP is indeed an inept measure. But the problems are actually much deeper than AYP.

    Rather, it’s the underlying methods and assumptions of NCLB (including AYP) that have had a persistent, negative impact on the way we interpret testing data.

    READ MORE
  • Our Not-So-College-Ready Annual Discussion Of SAT Results

    Written on October 1, 2012

    Every year, around this time, the College Board publicizes its SAT results, and hundreds of newspapers, blogs, and television stations run stories suggesting that trends in the aggregate scores are, by themselves, a meaningful indicator of U.S. school quality. They’re not.

    Everyone knows that the vast majority of the students who take the SAT in a given year didn’t take the test the previous year – i.e., the data are cross-sectional. Everyone also knows that participation is voluntary (as is participation in the ACT test), and that the number of students taking the test has been increasing for many years and current test-takers have different measurable characteristics from their predecessors. That means we cannot use the raw results to draw strong conclusions about changes in the performance of the typical student, and certainly not about the effectiveness of schools, whether nationally or in a given state or district. This is common sense.

    Unfortunately, the College Board plays a role in stoking the apparent confusion - or, at least, they could do much more to prevent it. Consider the headline of this year’s press release:

    READ MORE
  • Five Recommendations For Reporting On (Or Just Interpreting) State Test Scores

    Written on September 4, 2012

    In my experience, education reporters are smart, knowledgeable, and attentive to detail. That said, the bulk of the stories about testing data – in big cities and suburbs, this year and in previous years – could be better.

    Listen, I know it’s unreasonable to expect every reporter and editor to address every little detail when they try to write accessible copy about complicated issues, such as test data interpretation. Moreover, I fully acknowledge that some of the errors to which I object – such as calling proficiency rates “scores” – are well within tolerable limits, and that news stories need not interpret data in the same way as researchers. Nevertheless, no matter what you think about the role of test scores in our public discourse, it is in everyone’s interest that the coverage of them be reliable. And there are a few mostly easy suggestions that I think would help a great deal.

    Below are five such recommendations. They are of course not meant to be an exhaustive list, but rather a quick compilation of points, all of which I’ve discussed in previous posts, and all of which might also be useful to non-journalists.

    READ MORE

