
Publishing stings find predatory journals, shoddy peer review

Plagiarized gibberish? Some journals will happily publish that—for a fee.

With the rise of digital publishing, running a fake journal for profit has become a viable business model.

Peer-reviewed scientific papers are the gold standard for research. Although the review system has its limitations, it ostensibly ensures that qualified individuals have looked over a paper's science and found it solid. But a number of recent cases raise questions about just how reliable some of that research actually is.

The first issue was highlighted by a couple of sting operations performed by Science magazine and the Ottawa Citizen. In both cases, a staff writer made up some obviously incoherent research. In the Citizen's example, the writer randomly merged plagiarized material from previously published papers in geology and hematology. The sting paper's graphs came out of a separate paper on Mars, while its references came from one on wine chemistry. Neither the named author nor the institution he ostensibly worked at existed.

Yet in less than 24 hours, offers to publish the paper started coming in, some for as little as $500. Others offered to expedite publication (at a speed that could not possibly allow for any peer review) for an additional fee. The journals in this case are scams: without the expense of real editors and peer review, they charge authors fees and spend only a pittance to format the paper and drop it on a website. The problem is that it can be difficult to tell these journals from the real thing.

The Science sting was perhaps more disturbing, since a number of the journals taken in by an equally nonsensical paper are supposedly serious academic outlets. Although the Science sting focused on open access journals, the problem it highlighted probably extends into other journals as well: weak editorial oversight and limited, shoddy peer review.

Peer review can fall short even at solid journals. For some of the large, cross-disciplinary studies that are increasingly popular, it can be tough to find reviewers with all the expertise needed to evaluate the different fields of science a paper contains. The result can be the publication of something with solid biology but ludicrously bad chemistry, to use an example highlighted by blogger Derek Lowe. Another problem is the intense pressure that researchers in many fields (including all the biological sciences) are under right now, which probably limits the attention reviewers can spare.

But the Science sting suggests that at least some lower-profile journals give papers minimal or nonexistent attention during review. Otherwise, there's no reason that something like a deranged theory of everything should ever find its way into a journal. This sort of shoddy review is a problem that only scientists themselves can fix.

Unfortunately, by attempting to highlight the problem of lax review, some computer scientists may have exacerbated it. Suspecting that reviewers weren't doing a thorough job on some conference papers, they put together a random gibberish paper generator so that anyone could test whether reviewers were paying attention. That software has since been used to get 120 pieces of gibberish published.
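Generators like this typically work by recursively expanding a context-free grammar stuffed with discipline-appropriate jargon. As a rough illustration only (a hypothetical sketch, not the actual tool; every grammar rule below is invented), a minimal version fits in a few lines of Python:

```python
import random

# Hypothetical toy grammar: each key expands to one randomly chosen
# production, and every rule here is invented for illustration.
GRAMMAR = {
    "SENTENCE": [["We", "VERB", "that", "NOUN_PHRASE", "is", "ADJ", "."]],
    "VERB": [["demonstrate"], ["argue"], ["conjecture"]],
    "NOUN_PHRASE": [["the", "ADJ", "NOUN"]],
    "ADJ": [["stochastic"], ["decentralized"], ["metamorphic"]],
    "NOUN": [["algorithm"], ["framework"], ["heuristic"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:
        return symbol  # a terminal word, used as-is
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)

# Tidy the space before the final period and print one gibberish sentence.
print(expand("SENTENCE").replace(" .", "."))
```

Each run prints a different plausible-sounding but meaningless sentence (for instance, "We argue that the metamorphic heuristic is stochastic"), which is exactly why a reviewer merely skimming for familiar vocabulary can be fooled.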

None of this is to say that peer review is in full-blown crisis. At higher-profile journals with reputations to protect, most of the research is likely to be reliable (with interdisciplinary work a potential exception). But it should prompt added caution about some of the work published in the more obscure or narrowly specialized journals that have popped up in recent years.
