Causality Rules Everything Around Me

In a Slate article published last October, Daniel Engber bemoans the frequently shallow use of the classic warning that “correlation does not imply causation.” Mr. Engber argues that the correlation/causation distinction has become so overused in online comments sections and other public fora as to hinder real debate. He also posits that correlation does not mean causation, but “it sure as hell provides a hint,” and can “set us down the path toward thinking through the workings of reality.”

Correlations are extremely useful, in fact essential, for guiding all kinds of inquiry. And Engber is no doubt correct that the argument is overused in public debates, often in lieu of more substantive comments. But let’s also be clear about something – careless causal inferences likely do more damage to the quality and substance of policy debates on any given day than the misuse of the correlation/causation argument does over the course of months or even years.

We see this in education constantly. For example, mayors and superintendents often claim credit for marginal increases in testing results that coincide with their holding office. The causal leaps here are pretty stunning.

Not only do these individuals assume, usually without a shred of evidence, that the trends represent real progress rather than compositional change, but they then take this even further to infer that it is their policies (or merely their presence) that caused the increases, rather than all the other policies, people and external factors, past and present, that contribute to children’s measured performance.

This, of course, is just one example among many. We are prone to dropping our causal guards with almost any measurable outcome, including teacher retention rates, NAEP scores, school spending and attitude surveys. In fact, to some degree, NCLB seems to have been “inspired” by shaky connections between policy changes and trends in outcomes in a handful of states, most notably Texas.

Not all of these examples represent correlations in the statistical sense of the term, but they are all forms of the classic “correlation/causation” conflation: The relationship between two observable variables is the sole basis for asserting a cause-and-effect relationship between them.

And, obviously, the premature planting of causal flags is not at all limited to education. As just one more example, much of our national debate over the economy is driven by crude inference.

How do we assess the merit of economic policies? All too often, we do so by looking at unadjusted outcomes immediately after the implementation of those policies or, even worse, while the people who support them are in office.

For instance, governors routinely take and receive credit (or blame) for trends in their states’ unemployment rates, despite the fact that these outcomes are subject to influence from countless different factors – the majority of which (including measurement inconsistencies) are outside the control of any governor.

Whether it's unemployment or educational outcomes, we often fail to ask a key question: What would have happened under different leadership, and/or a different set of policies?

This is called the “unobserved counterfactual," which is a fancy way of saying “what didn’t happen." It's sometimes not possible to even tentatively address this question, but one big reason we have policy analysis in the first place is because we don't want to assume it away.

Without a doubt, people in office affect policies, and policies affect outcomes. But they’re not the only things that do, and individual policies are often far less influential than we think.

So, yes, correlations put us on the path to bigger and better things. They play a critical role in guiding the policy research process, as well as our daily lives. And even the most sophisticated attempts to isolate causality are subject to serious imprecision.

But if there is even the slightest chance that the “correlation is not causation” warning will remind people (myself included) that causality is complex and should never be taken lightly, I for one am more than willing to tolerate a few annoying people in comments sections.

- Matt Di Carlo


You left out the problem of getting the causality backwards.


Did anyone else get this?
In education, the "unobserved counterfactual" can sometimes cause feelings of inadequacy. I remember reading SES applications, and the provider would state as their SBR that student growth on an intervention was (for example) half a year. Never mind that there wasn't a comparison group for the counterfactual; even without one, I know that half a year's growth is what I would expect from regular instruction. I expect an intervention to produce MORE significant growth than regular instruction!