Can editors save peer review from peer reviewers?
- Subject Areas
- Ethical Issues, Science Policy
- Keywords
- Peer review, Referees, Rational cheating, Quality of publications, Journal editors, Referee behavior
- Copyright
- © 2017 D'Andrea et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- D'Andrea et al. 2017. Can editors save peer review from peer reviewers? PeerJ Preprints 5:e3005v4 https://doi.org/10.7287/peerj.preprints.3005v4
Abstract
Peer review is the gold standard for scientific communication, but its ability to guarantee the quality of published research remains difficult to verify. Recent modeling studies suggest that peer review is sensitive to reviewer misbehavior, and it has been claimed that referees who sabotage work they perceive as competition may severely undermine the quality of publications. Here we examine which aspects of suboptimal reviewing practices most strongly impact quality, and test different mitigating strategies that editors may employ to counter them. We find that the biggest hazard to the quality of published literature is not selfish rejection of high-quality manuscripts but indifferent acceptance of low-quality ones. Bypassing or blacklisting bad reviewers and consulting additional reviewers to settle disagreements can reduce but not eliminate the impact. The other editorial strategies we tested do not significantly improve quality, but pairing manuscripts with reviewers unlikely to selfishly reject them and allowing revision of rejected manuscripts minimize the rejection of above-average manuscripts. In its current form, peer review offers few incentives for impartial reviewing effort. Editors can help, but structural changes are likely to have a stronger impact.
Author Comment
This is a revision after the first round of peer review.
- The Background and Discussion sections are expanded, with a more detailed discussion of the limitations of blacklisting.
- New supplementary figures extend the main-text results to three referees (rather than two) and to a lognormal (rather than normal) distribution of quality.
- A new type of referee is included who rejects papers that are too different from their own work.
- Correction to a previous result: bypassing referees is more effective than previously reported.
Supplemental Information
Appendix S1: Blacklisting referees
We provide the mathematical details of the editorial strategy of blacklisting referees with a high rate of disagreements.
Figure S1: Effect of narcissistic referees
Narcissists accept only manuscripts that are similar enough to their own work to fall within the quality interval covering 95% of their own scientific production. These referees represent a (conscious or unconscious) bias towards endorsing the relevance and importance of manuscripts in their subfield of expertise. Here we plot the effect of narcissistic referees on the quality of accepted (**A**) and rejected (**B**, **C**) papers, as a function of their percentage in the referee pool (the remainder being moving-standard impartial referees). For comparison, we also plot the effect of indifferent selfish referees (described in the main text).
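As a hedged sketch, the narcissist rule reduces to a simple acceptance test. The function and constant names below are our own illustration, not the authors' code, and we assume (as in the normal case of Figure S3) that the quality of a referee's own works is normally distributed around their proficiency:

```python
# Illustrative sketch of the narcissistic-referee rule: accept a
# manuscript only if its quality falls inside the central 95% interval
# of the referee's own scientific production, assumed normal here.

SIGMA_WORK = 5.0  # spread of quality across one author's works (normal case)

def narcissist_accepts(referee_proficiency: float, manuscript_quality: float) -> bool:
    # For a normal distribution, the central 95% interval is
    # mean +/- 1.96 standard deviations.
    lo = referee_proficiency - 1.96 * SIGMA_WORK
    hi = referee_proficiency + 1.96 * SIGMA_WORK
    return lo <= manuscript_quality <= hi
```

With these assumed parameters, a referee of proficiency 100 accepts only manuscripts of quality between roughly 90.2 and 109.8, rejecting both much weaker and much stronger work.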
Figure S2: Two versus three referees
Average quality of accepted papers when two (**A**) or three (**B**) referees are assigned per manuscript, in concert with each editorial strategy tested in this study. Outcomes are qualitatively similar but quantitatively different. Assigning three referees leads to better results overall (**C**), though not by a large margin, and the advantage declines as the incidence of selfish referees in the pool rises, even reversing in some cases. $Q_{2(3)}$ is the average quality of accepted papers under 2 (3) referees. With three referees the editor always honors the majority vote, unless dictated otherwise by the editorial strategy at hand.
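The default decision rule with an odd panel of referees can be sketched in a few lines; the function name is illustrative, not taken from the paper's code:

```python
# Illustrative sketch: with an odd number of referees, the editor's
# default rule is to follow the majority vote (True = accept).

def editor_decision(votes: list[bool]) -> bool:
    """Return True (accept) if a strict majority of referees accept."""
    return sum(votes) > len(votes) / 2
```

For example, `editor_decision([True, True, False])` accepts, while `editor_decision([False, False, True])` rejects.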
Figure S3: Normal versus lognormal quality distribution
Average quality of accepted and rejected papers under a normal (**A**, **B**, **C**) and a lognormal (**D**, **E**, **F**) distribution of proficiency across authors and of quality across a given author's works. No editorial intervention is considered. A normal distribution follows if manuscript quality is the end result of multiple additive random factors; a lognormal distribution occurs under multiplicative random factors. Comparison between the top and bottom rows indicates that our results are robust to relaxing the assumption of normality. Parameters: mean author proficiency 100 (normal, lognormal); standard deviation of proficiency 10 (normal), 0.5 (lognormal); standard deviation of quality across an author's works 5 (normal), 0.5 (lognormal).
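The two-level quality model above can be sketched as follows. Centring the lognormal case so that mean proficiency equals 100 (treating 0.5 as the sigma of the underlying normal) is our reading of the stated parameters, and all names are illustrative, not the authors' code:

```python
import math
import random

def sample_manuscript_quality(dist: str = "normal") -> float:
    """Draw an author's proficiency, then one manuscript's quality."""
    if dist == "normal":
        # Additive random factors: proficiency ~ N(100, 10), and a given
        # author's works ~ N(proficiency, 5).
        proficiency = random.gauss(100.0, 10.0)
        return random.gauss(proficiency, 5.0)
    # Multiplicative random factors: lognormal with sigma = 0.5 at both
    # levels. A lognormal(mu, sigma) has mean exp(mu + sigma**2 / 2),
    # so mu is chosen to give mean proficiency 100.
    sigma = 0.5
    mu = math.log(100.0) - sigma**2 / 2
    proficiency = random.lognormvariate(mu, sigma)
    # Quality fluctuates multiplicatively around proficiency (mean factor 1).
    return proficiency * random.lognormvariate(-sigma**2 / 2, sigma)
```

Under either distribution the expected manuscript quality is 100; the lognormal variant is right-skewed and strictly positive, which is the property the robustness check in the figure exercises.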