Journalists play an essential role in our society. They are charged with informing the public, a vital function in a representative democracy. Yet, year after year, large pockets of the electorate remain poorly informed on both foreign and domestic affairs. For a long time, commentators have blamed any number of culprits for this problem, including poverty, education, increasing work hours and the rapid proliferation of entertainment media.
There is no doubt that these and other factors matter a great deal. Recently, however, evidence has been growing that the factors shaping how well-informed people are about current events include not only social and economic conditions but journalist quality as well. Put simply, better journalists produce better stories, which in turn attract more readers. On the whole, the U.S. journalist community is world class. But there is, as always, a tremendous amount of underlying variation. It’s likely that improving the overall quality of reporters would not only result in higher-quality information but also bring in more readers. Both outcomes would contribute to a better-informed, more active electorate.
We at the Shanker Institute feel that it is time to start a public conversation about this issue. We have requested and received datasets documenting the story-by-story readership of the websites of U.S. newspapers, large and small. We are using these data in statistical models that we call “Readers-Added Models,” or “RAMs.”
Of course, in specifying these models, we had to address many different, often intractable issues.
Perhaps most basically, newspapers vary in terms of their overall readership, which means that a story on the New York Times website is likely to have many more readers than an equally “good” story on a smaller paper’s site. Similarly, some stories are simply more in demand than others, no matter how well-researched and well-written they are. A domestic political story in the middle of an election cycle is likely to get more readers than even the best story about the economy in Europe.
In order to account for these issues, the RAMs focus not on the absolute number of readers, but rather on the number of additional readers - above and beyond that of similar stories in similar publications - that journalists attract by virtue of the (estimated) quality of their reporting.
We calculate “expected readership” in a statistical model that controls for a set of “non-journalist” variables that may influence the number of hits a story gets, including but not limited to: subject matter (using keywords); newspaper section (e.g., national, international, local); the time, date and day of the week on which the article is posted; publication-level characteristics (e.g., resources); whether or not the story is linked on the paper’s home page and/or section pages; and, when possible, characteristics, such as income and education, of each newspaper’s “average reader.”
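To make the “expected readership” idea concrete, here is a minimal sketch of how such a model might be fit, assuming a simple log-linear regression on dummy-coded story characteristics. The variable names, coefficients and data below are entirely hypothetical, invented only to show the shape of the calculation; the actual RAM specification is more elaborate.

```python
import numpy as np

# Hypothetical story-level data. We model log pageviews as a linear
# function of dummy-coded covariates (section, home-page link, weekend).
# All coefficients and noise levels here are invented for illustration.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.integers(0, 2, n),      # 1 = national section, 0 = local
    rng.integers(0, 2, n),      # 1 = linked on the home page
    rng.integers(0, 2, n),      # 1 = posted on a weekend
])
true_beta = np.array([7.0, 0.8, 1.2, -0.3])
log_views = X @ true_beta + rng.normal(0, 0.5, n)

# Ordinary least squares: beta_hat minimizes ||X b - log_views||^2.
beta_hat, *_ = np.linalg.lstsq(X, log_views, rcond=None)

# "Expected readership" for each story, back on the pageview scale.
expected_views = np.exp(X @ beta_hat)
```

A story’s expected readership is thus what the model predicts for a story with its characteristics, before any credit is given to the reporter.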
For example, let’s say a reporter files a story about the role of campaign donations in general elections. Our RAMs compare that story’s readership to the readership of similar stories published by similar outlets at similar times within a given market.
Put simply, the models yield an expectation for how many readers stories like this one tend to get. The (proportional) difference between that statistical expectation and the actual number of readers is attributed to the quality of the journalist. The more stories a reporter has written, the more precise the estimates become.
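The readers-added step described above can be sketched as follows: for each story, compare actual to expected pageviews (here, as a log-ratio), then average across a reporter’s stories. The reporter names and pageview figures are hypothetical, and the simple standard-error formula stands in for whatever precision measure the actual models use, but it illustrates why more stories yield more precise estimates.

```python
import math
from statistics import mean, stdev

# Hypothetical (actual, expected) pageview pairs per reporter.
stories = {
    "reporter_a": [(12000, 10000), (9000, 8000), (15000, 11000)],
    "reporter_b": [(4000, 5000), (7000, 9000)],
}

def readers_added(pairs):
    """Mean log-ratio of actual to expected readership, with a rough
    standard error that shrinks as the reporter files more stories."""
    ratios = [math.log(actual / expected) for actual, expected in pairs]
    n = len(ratios)
    se = stdev(ratios) / math.sqrt(n) if n > 1 else float("inf")
    return mean(ratios), se

ram_a, se_a = readers_added(stories["reporter_a"])
ram_b, se_b = readers_added(stories["reporter_b"])
# reporter_a consistently beats expectation (positive readers-added);
# reporter_b consistently falls short (negative readers-added).
```

Because the standard error divides by the square root of the number of stories, a reporter with a long track record gets a tighter estimate than one with only a handful of bylines.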
To reiterate, our model is highly imperfect. We cannot, for example, measure whether a reader fully read a story, or just skimmed it (or visited the story’s page but didn’t read it at all); we do include a variable measuring the average amount of time spent on the page, but this is an imperfect measure.
Nor can we fully account for the fact that many newspapers’ readers still use the paper copy, which precludes our tracking the specific stories they read. We must assume that any systematic differences in the reading patterns of hardcopy versus online users do not bias our estimates to a critical extent.
Another foundational issue is the fact that stories are not randomly assigned to reporters (nor reporters to newspapers). For example, an editor may choose to assign some big stories to his or her top reporters. This would mean that some journalists have an inherent advantage over others. To the degree that’s true, it may generate bias that is not picked up by the other variables (e.g., reporter experience) in the model. The same goes for readers - different publications attract different "types" of regular readers, with varying article-browsing habits.
But our most important assumption is, of course, that readership (pageviews) is an adequate gauge of a story’s quality. Without question, it is incomplete and imperfect. For instance, writing a sensationalized account of a given event is generally considered poor journalistic practice and potentially misleading, even if it might attract more readers. And many exemplary pieces of quality journalism fail to get much attention.
Despite all these issues, we believe that “readers-added,” while a noisy measure, does transmit meaningful information about the quality of the reporting. Better reporters will write more compelling, informative stories, which will attract more views. Over the course of many articles, the estimates will provide some signal as to the quality of journalists’ work.
Using data we gathered from a group of newspapers in smaller markets, we have calculated a set of preliminary results, which we will release shortly. They indicate that, all else being equal, different journalists writing about the same general topic can attract vastly different numbers of readers.
Combined with other measures of quality, including professional judgment by experienced editors, we believe RAM estimates can eventually serve as a useful tool in ensuring that the American people are receiving the best information possible about the world around them.
- Matt Di Carlo