
Researchers want a better system for fixing bad science

How should authors and journals acknowledge mistakes?

Part of the scientific process is getting things wrong, but another is making sure that bad results don't percolate into the larger world of research. When a flawed study slips through peer review, journals and authors are responsible for correcting or retracting it as soon as possible, ideally stopping future researchers from citing its results. In a commentary published today in Nature, though, University of Alabama at Birmingham biostatistics professor David Allison and three colleagues are calling for scrutiny of how that process works — and whether it's actually working right now.

Over the past 18 months, Allison and his colleagues collected around three dozen apparent errors that they came across while keeping up with the literature in their fields. The problems were largely a combination of miscalculations, randomized trials that weren't actually random, and comparison methods that made false positives much more likely. After putting in requests to fix 25 of them, "we had to stop," they say. "The work took too much of our time."

In the time it takes to retract invalid research, it's making its way into the footnotes of other papers

The Committee on Publication Ethics (COPE) — whose members include Nature, Science, The New England Journal of Medicine, and many other journals — publishes detailed guidelines for when and how editors and authors should retract a publication. Projects like the Retraction Watch blog call attention to these changes, hopefully stopping retracted papers from slipping into the footnotes of other studies. But for Allison and his colleagues, getting a correction or retraction proved difficult. "Many journal editors and staff members," they write, "seemed unprepared or ill-equipped to investigate, take action, or even respond." As a result, they say, a study that raises major red flags can take months to examine, and longer still to correct or retract.

The problems include things as simple as being charged a submission fee to publish an expression of concern, or not knowing whether to send corrections to a study's author or its publisher. Journal policies vary, and while some are concrete — The Journal of the American Medical Association asks readers to send requests to its editorial office, for example — others left Allison and his colleagues guessing. In one case, the team requested a retraction within two weeks and saw the request approved only after 11 months, with more time needed for the actual retraction. (Allison has not named the journals that have yet to publish his comments.) In another, they report waiting 15 months — and counting — for a substantive response from one paper's authors. BMC Public Health, which published the original piece, declined to comment on its status.

"A small team of investigators could find dozens of problematic papers."

This is far from a scientific survey of the publishing world, so it's impossible to say whether these cases are representative, how common errors are in general, or even how they vary across publications. A study published in 2008 suggested that retraction rates, while still "very low," had increased in the past decade, with a median time of 15.5 months between papers' initial publication and retraction. "If one were to look within journals, things are probably getting better over time," Allison speculates. The top-tier journal Science tells The Verge that it accepts roughly 980 peer-reviewed submissions a year, and around two dozen are "clarified or otherwise corrected" after concerns are raised. "At the same time, there are ever more journals," Allison says — ones that may still be developing a rigorous review process, or have fewer dedicated staff to review corrections.

Now, he's hoping to jump-start a more careful study of the field. "We showed that a small team of investigators with expertise in statistics and experimental design could find dozens of problematic papers," they write. A more formal survey could help figure out the real scope of the problem, and how to fix it, they say. Tentatively, he and his colleagues have recommended making full data sets more easily accessible, creating a clear process for corrections, and asking experts to scrutinize statistics before they make it into print.

"The system of communication about correcting the scientific record is broken. More accurately, it has never worked."

Allison and his colleagues' concerns dovetail with a larger debate over how reliable scientific studies really are. In 2015, University of Virginia psychology professor Brian Nosek coordinated a massive review of existing studies in his field. The results were disappointing: out of 100 published experiments, researchers could replicate the results of only about one-third. It's hard to draw concrete conclusions from this, and there are many potentially complicating factors. But it's been enough to raise questions. "I think it's very much tied in," Allison says of his work.

Nosek, in fact, has similar criticisms. "Publishing a correction takes a lot of time. The corrections are difficult to discover. And researchers continue using and citing the original work without awareness of the correction," he says. "The system of communication about correcting the scientific record is broken. More accurately, it has never worked." An organization he co-founded, the Center for Open Science, recently partnered with Retraction Watch to support and expand its database — making it easier to find and fix bad citations, among other things.

But it's an answer that still arguably works around authors and journals, not with them — and changing that might be difficult. Allison hopes that the suggestions he offers will help some. But he admits that they're still figuring out a solution that fits everyone. "For a JAMA, for a Science, to implement some of these things, I'm sure would be a little more work, but not too onerous," he says. "But for a journal that has close to no staff above and beyond the editors — who are part-timers volunteering out of their academic jobs — it would be difficult."