Though it is still weeks away, a Sept. 29 forum sponsored by the Economic Policy Institute and the National Education Policy Center has already sparked some interesting debate over at the National Journal. The event, centered on the recent book Think Tank Research Quality: Lessons for Policy Makers, the Media and the Public, is an effort to separate "the junk research from the science."
The crux of the debate is whether the recent explosion of self-published reports by various educational think tanks has helped or hindered the effort to improve the quality of educational research. (Full disclosure: The Albert Shanker Institute is often called a "think tank" and we frequently self-publish.) The push and pull of dueling experts and conflicting reports, say some, has turned education research into a political football—moved down the field by one faction, only to be punted all the way to the other end by a rival faction—each citing "research" as their guide.
"My research says this works and that doesn’t," can always be countered by, "Oh yeah, well my research says that works and this doesn’t." There are even arguments about what "what works" means because, except for performance on standardized tests, our goals remain diverse, decentralized, and subject to local control. As a result, public education is plagued by trial-and-error policies that rise and fall, district by district and state by state, like some sort of crazed popularity contest.
For policymakers, who usually have no time to wade through the actual studies, analyzing all of the various assumptions and data sets, poring over the results tables and identifying the underlying assumptions of the models, the result is often a rush to judgment based on, well…, political expedience: Which policies do my friends and supporters like? It should come as no surprise, then, that informed, cold-eyed skeptics of educational policy find their views relegated to the margins of decision-making, with center stage too often filled by fads driven by bias, prejudice, and ideology.
This must be so. Otherwise, how could the Obama Administration have spent close to $5 billion in its "reformist" Race to the Top (RTTT) program, leaning hard on states to change their laws in favor of Administration-backed policies that are unsupported by the weight of evidence (expanding charter schools, test score–based teacher evaluations, performance-based pay, etc.)? Some of the most respected researchers in the country have raised serious questions about these initiatives. (See the National Academy of Sciences; the Economic Policy Institute; Dan Willingham’s critique; and also Koretz, Murnane, Barton, Ladd, Ravitch, etc. from the RTTT submissions, here.)
In the National Journal, Rick Hess urges us not to return to the (admittedly limited) system of relying solely on peer-reviewed journals as the arbiter of research quality. He and Sandy Kress also believe there has been a strengthening and expansion of the research review role at the Institute of Education Sciences and its What Works Clearinghouse under the tenure of Russ Whitehurst.
The work of the National Academies of Sciences and the National Research Council also deserves credit and gives grounds for growing optimism. Under their umbrella, groups of the nation’s most respected research scientists have been gathered to review the research and publish recommendations on key issues, such as beginning reading instruction (here), early childhood development and education (here), the current utility of value-added teacher evaluation methods (here), and so on. But these, too, amount to drops in the bucket when compared to the need for a source of informed, impartial information about "what the research says" in education.
What can be done? I believe that the Obama administration’s initiatives on medical research may offer a useful model that education research could strive to follow. As the lead editorial in the September 11 New York Times observes, "Research that systematically compares the effectiveness of different treatments…is clearly needed." And, unlike in education, where policy seems to be guiding research and not the other way around, this effort is actually backed by the administration, which is given credit for having "started the process." Specifically, the administration has committed "$1.1 billion from stimulus funds to finance comparative studies."
The new reform law will also set up something unlike anything we have in education— "a nonprofit, independent institute to organize the work." It empowers the comptroller general to appoint a governing board made up of 19 members, including representatives of all stakeholders and two federal health officials, but which will hopefully be dominated by a group of highly professional and impartial scientists.
Whether the government creates its own watchdog entity or the research and education communities themselves decide to step up to the plate, a similar entity to ensure the quality of education research is sorely needed. The tragedy before us is that we know so much more about how to deliver a good education to students than we put into practice, yet virtually nothing is done by government, educators, or researchers themselves to help well-meaning policy leaders separate the wheat from the chaff.