Who Has Confidence In U.S. Schools?

For many years, national survey and polling data have shown that Americans tend to like their own local schools, but are considerably less sanguine about the nation’s education system as a whole. This somewhat paradoxical finding – in which most people seem to think the problem is with “other people’s schools” – is difficult to interpret, especially since it seems to vary a bit when people are given basic information about schools, such as funding levels.

In any case, I couldn’t resist taking a very quick, superficial look at how people’s views of education vary by important characteristics, such as age and education. I used the General Social Survey (pooled 2006-2010), which queries respondents about their confidence in education, asking them to specify whether they have “hardly any,” “only some” or “a great deal” of confidence in the system.*

This question doesn’t differentiate explicitly between respondents’ local schools and the system as a whole, and respondents may consider different factors when assessing their confidence, but I think it’s a decent measure of their disposition toward the education system.

Which State Has "The Best Schools?"

** Reprinted here in the Washington Post

I’ve written many times about how absolute performance levels – how highly students score – are not by themselves valid indicators of school quality, since, most basically, they don’t account for the fact that students enter the schooling system at different levels. One of the most blatant (and common) manifestations of this mistake is when people use NAEP results to determine the quality of a state's schools.

For instance, you’ll often hear that Massachusetts has the “best” schools in the U.S. and Mississippi the “worst,” with both claims based solely on average scores on the NAEP (though, technically, Massachusetts public school students’ scores are statistically tied with at least one other state on two of the four main NAEP exams, while Mississippi’s rankings vary a bit by grade/subject, and its scores are also not statistically different from several other states’).

But we all know that these two states are very different in terms of basic characteristics such as income, parental education, etc. Any assessment of educational quality, whether at the state or local level, is necessarily complicated, and ignoring differences between students precludes any meaningful comparisons of school effectiveness. Schooling quality is important, but it cannot be assessed by sorting and ranking raw test scores in a spreadsheet.

Our Not-So-College-Ready Annual Discussion Of SAT Results

Every year, around this time, the College Board publicizes its SAT results, and hundreds of newspapers, blogs, and television stations run stories suggesting that trends in the aggregate scores are, by themselves, a meaningful indicator of U.S. school quality. They’re not.

Everyone knows that the vast majority of the students who take the SAT in a given year didn’t take the test the previous year – i.e., the data are cross-sectional. Everyone also knows that participation is voluntary (as is participation in the ACT), that the number of students taking the test has been increasing for many years, and that current test-takers have different measurable characteristics from their predecessors. That means we cannot use the raw results to draw strong conclusions about changes in the performance of the typical student, and certainly not about the effectiveness of schools, whether nationally or in a given state or district. This is common sense.
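The composition problem described above can be made concrete with a minimal sketch. All of the numbers below are made up purely for illustration: every subgroup’s average score rises between years, yet the overall average falls, simply because participation grows fastest among lower-scoring groups.

```python
# Illustrative sketch with invented numbers (not real SAT data):
# a changing test-taking pool can drag the aggregate average down
# even when every subgroup improves.

def weighted_mean(groups):
    """groups: list of (n_takers, mean_score) tuples."""
    total = sum(n for n, _ in groups)
    return sum(n * m for n, m in groups) / total

# Year 1: a smaller, more self-selected pool of test-takers
year1 = [(800, 560), (200, 440)]   # (takers, mean) for two hypothetical subgroups

# Year 2: BOTH subgroups score higher, but participation expands
# fastest in the lower-scoring group
year2 = [(850, 565), (650, 450)]

m1 = weighted_mean(year1)  # → 536.0
m2 = weighted_mean(year2)  # ≈ 515.2 — aggregate falls despite subgroup gains
```

This is the standard composition effect: the raw trend reflects who took the test at least as much as how well students were taught.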

Unfortunately, the College Board plays a role in stoking the apparent confusion – or, at least, it could do much more to prevent it. Consider the headline of this year’s press release:

The Impact Of Race To The Top Is An Open Question (But At Least It's Being Asked)

You don’t have to look very far to find very strong opinions about Race to the Top (RTTT), the U.S. Department of Education’s (USED) stimulus-funded state-level grant program (which has recently been joined by a district-level spinoff). There are those who think it is a smashing success, while others assert that it is a dismal failure. The truth, of course, is that these claims, particularly the extreme views on either side, are little more than speculation.*

To win the grants, states were strongly encouraged to make several different types of changes, such as adoption of new standards, the lifting/raising of charter school caps, the installation of new data systems and the implementation of brand new teacher evaluations. This means that any real evaluation of the program’s impact will take some years and will have to be multifaceted – that is, it is certain that the implementation/effects will vary not only by each of these components, but also between states.

In other words, the success or failure of RTTT is an empirical question, one that is still almost entirely open. But there is a silver lining here: USED is at least asking that question, in the form of a five-year, $19 million evaluation program, administered through the National Center for Education Evaluation and Regional Assistance, designed to assess the impact and implementation of various RTTT-fueled policy changes, as well as those of the controversial School Improvement Grants (SIGs).

Do Top Teachers Produce "A Year And A Half Of Learning?"

One claim that gets tossed around a lot in education circles is that “the most effective teachers produce a year and a half of learning per year, while the least effective produce half a year of learning.”

This talking point is used all the time in advocacy materials and news articles. Its implications are pretty clear: Effective teachers can make all the difference, while ineffective teachers can do permanent damage.

As with most prepackaged talking points circulated in education debates, the “year and a half of learning” argument, when used without qualification, is both somewhat valid and somewhat misleading. So, seeing as it comes up so often, let’s very quickly identify its origins and what it means.
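To see the kind of arithmetic behind such claims, here is a back-of-the-envelope sketch. Every number in it is an assumption chosen purely to reproduce the talking point, not an estimate from any particular study; the point is that the “years of learning” figure depends entirely on what you assume about average annual gains and the spread of teacher effects.

```python
# Hypothetical conversion of a teacher effect (in test-score standard
# deviations) into "years of learning". ALL constants below are
# illustrative assumptions, not empirical estimates.

ANNUAL_GAIN_SD = 0.25      # assumed: average one-year gain, in student-level SDs
TEACHER_EFFECT_SD = 0.125  # assumed: SD of teacher effects, in the same units

def years_of_learning(effect_in_sd):
    """Express a teacher effect relative to the assumed average annual gain:
    an average teacher (effect = 0) produces exactly 1.0 'year'."""
    return 1.0 + effect_in_sd / ANNUAL_GAIN_SD

# A teacher one SD above vs. one SD below the average teacher:
high = years_of_learning(+TEACHER_EFFECT_SD)  # → 1.5 "years"
low = years_of_learning(-TEACHER_EFFECT_SD)   # → 0.5 "years"
```

Change either assumed constant and the headline numbers change with it, which is one reason the talking point needs qualification.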

Schools Aren't The Only Reason Test Scores Change

In all my many posts about the interpretation of state testing data, it seems that I may have failed to articulate one major implication, which is almost always ignored in the news coverage of the release of annual testing data. That is: raw, unadjusted changes in student test scores are not by themselves very good measures of schools' test-based effectiveness.

In other words, schools can have a substantial impact on performance, but student test scores also increase, decrease or remain flat for reasons that have little or nothing to do with schools. The first, most basic reason is error. There is measurement error in all test scores – for various reasons, students taking the same test twice will get different scores, even if their “knowledge” remains constant. Also, as I’ve discussed many times, there is extra imprecision when using cross-sectional data. Often, changes in scores or rates, especially when they’re small in magnitude and/or based on smaller samples (e.g., individual schools), do not represent actual progress (see here and here). Finally, even when changes are “real,” a variety of non-schooling inputs also influence test score changes, such as parental education levels, families’ economic circumstances, parental involvement, etc. These factors don’t just influence how highly students score; they are also associated with progress (that’s why value-added models exist).
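The error argument above can be illustrated with a quick simulation (a sketch with arbitrary parameters, not a model of any real test): hold true performance perfectly constant across two years and observe how much the measured school average still jumps around, especially for small schools.

```python
# Simulation sketch: year-to-year "changes" in school averages when
# true performance does not change at all. Parameters are arbitrary.
import random

random.seed(5)  # fixed seed so the illustration is reproducible

def school_mean(n_students, true_mean=500.0, sd=80.0):
    """Observed school average: each student's score is the (unchanging)
    true mean plus noise (sampling variation and measurement error
    lumped together)."""
    return sum(random.gauss(true_mean, sd) for _ in range(n_students)) / n_students

def simulated_changes(n_students, n_trials=1000):
    # True performance is identical in both "years"; any change is pure noise.
    return [school_mean(n_students) - school_mean(n_students)
            for _ in range(n_trials)]

def typical_swing(changes):
    """Mean absolute year-to-year change."""
    return sum(abs(c) for c in changes) / len(changes)

swing_small = typical_swing(simulated_changes(30))    # small school: big swings
swing_large = typical_swing(simulated_changes(1000))  # large school: small swings
```

In this setup the small school’s average moves several times as much as the large school’s from year to year, even though nothing real has changed – which is exactly why small-sample score changes should not be read as progress or decline.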

Thus, to the degree that test scores are a valid measure of student performance, and changes in those scores a valid measure of student learning, schools aren’t the only suitors at the dance. We should stop judging school or district performance by comparing unadjusted scores or rates between years.

When Push Comes To Pull In The Parent Trigger Debate

The so-called “parent trigger,” the policy by which a majority of a school’s parents can decide to convert it to a charter school, seems to be getting a lot of attention lately.

Advocates describe the trigger as “parent empowerment,” a means by which parents of students stuck in “failing schools” can take direct action to improve the lives of their kids. Opponents, on the other hand, see it as antithetical to the principle of schools as a public good – parents don’t own schools, the public does. And important decisions such as charter conversion, which will have a lasting impact on the community as a whole (including parents of future students), should not be made by a subgroup of voters.

These are both potentially appealing arguments. In many cases, however, attitudes toward the parent trigger seem more than a little dependent upon attitudes toward charter schools in general. If you strongly support charters, you’ll tend to be pro-trigger, since there’s nothing to lose and everything to gain. If you oppose charter schools, on the other hand, the opposite is likely to be the case. There’s a degree to which it’s not the trigger itself but rather what’s being triggered – opening more charter schools – that’s driving the debate.

The Landmark Case Of Us V. Them

Patrick Riccards, CEO of the education advocacy group ConnCAN, has published a short piece on his personal blog in which he decries the “vicious and fact-free attacks” in education debates.

The post lists a bunch of “if/then” statements to illustrate how market-based reform policy positions are attacked on personal grounds, such as, “If one provides philanthropic support to improve public schools, then one must be a profiteer looking to make personal fortunes off public education.” He summarizes the situation with a shot of his own: “Yes, there are no attacks that are too vicious or too devoid of fact for the defenders of the status quo.” What of his fellow reformers? They “simply have to stand and take the attacks and the vitriol, no matter how ridiculous.”

Mr. Riccards is dead right that name-calling, ascription of base motives, and the abuse of empirical evidence are rampant in education debates. I myself have criticized the unfairness of several of his “if/then” statements, including the accusations of profiteering and equating policy views with being “anti-teacher.”

But anyone who thinks that this behavior is concentrated on one “side” or the other must be wearing blinders.

Three Important Distinctions In How We Talk About Test Scores

In education discussions and articles, people (myself included) often say “achievement” when referring to test scores, or “student learning” when talking about changes in those scores. These words reflect implicit judgments to some degree (e.g., that the test scores actually measure learning or achievement). Every once in a while, it’s useful to remind ourselves that scores from even the best student assessments are imperfect measures of learning. But this is so widely understood – certainly in the education policy world, and I would say among the public as well – that the euphemisms are generally tolerated.

And then there are a few common terms or phrases that, in my personal opinion, are not so harmless. I’d like to quickly discuss three of them (all of which I’ve talked about before). All three appear many times every day in newspapers, blogs, and regular discussions. To criticize their use may seem like semantic nitpicking to some people, but I would argue that these distinctions are substantively important and may not be so widely acknowledged, especially among people who aren’t heavily engaged in education policy (e.g., average newspaper readers).

So, here they are, in no particular order.

Teachers And Their Unions: A Conceptual Border Dispute

One of the segments from “Waiting for Superman” that stuck in my head is the following statement by Newsweek reporter Jonathan Alter:

It’s very, very important to hold two contradictory ideas in your head at the same time. Teachers are great, a national treasure. Teachers’ unions are, generally speaking, a menace and an impediment to reform.

The distinction between teachers and their unions (as well as those of other workers) has been a matter of political and conceptual contention for a long time. On one “side,” the common viewpoint, as characterized by Alter’s slightly hyperbolic line, is “love teachers, don’t like their unions.” On the other “side,” criticism of teachers’ unions is often called “teacher bashing.”

So, is there any distinction between teachers and teachers’ unions? Of course there is.