Randomly auditing research labs could be an affordable way to improve research quality: A simulation study

PLoS One. 2018 Apr 12;13(4):e0195613. doi: 10.1371/journal.pone.0195613. eCollection 2018.

Abstract

The "publish or perish" incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors, a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world in which labs that produce many papers are more likely to have "child" labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral, we added random audits that could detect and remove labs with a high proportion of false positives; audits also improved the behaviour of "child" and "parent" labs, which increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations avoided the competitive spiral, defined as convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers did so in 95% of simulations. Audits worked best when they were applied only to established labs with 50 or more papers rather than labs with 25 or more papers. Adding a ±20% random error to the number of false positives, to simulate peer reviewer error, did not reduce the audits' efficacy. The main benefit of the audits came via the increase in effort in "child" and "parent" labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world, and there are many unanswered questions about whether and how audits would work that can only be addressed by a trial of an audit.
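The audit mechanism described above can be sketched in a few lines of Python. This is a minimal toy illustration, not the published simulation: the parameter values, the effort-to-false-positive relationship, and the names (`Lab`, `audit_step`) are all illustrative assumptions.

```python
import random


class Lab:
    """Toy lab with an effort level; higher effort lowers the
    false-positive probability (illustrative functional form)."""

    def __init__(self, effort):
        self.effort = effort
        self.papers = 0
        self.false_positives = 0

    def fp_prob(self):
        # Illustrative: probability falls linearly with effort, floored at 5%
        return max(0.05, 0.5 - 0.005 * self.effort)

    def publish(self, rng):
        self.papers += 1
        if rng.random() < self.fp_prob():
            self.false_positives += 1


def audit_step(labs, rng, audit_frac=0.02, min_papers=50, fp_threshold=0.3):
    """Randomly audit a fraction of established labs (>= min_papers papers)
    and remove any audited lab whose observed false-positive proportion
    exceeds the threshold. Returns the surviving labs."""
    survivors = []
    for lab in labs:
        established = lab.papers >= min_papers
        audited = established and rng.random() < audit_frac
        if audited and lab.false_positives / lab.papers > fp_threshold:
            continue  # lab detected and removed from the research world
        survivors.append(lab)
    return survivors


if __name__ == "__main__":
    rng = random.Random(1)
    labs = [Lab(effort=10) for _ in range(100)]
    for lab in labs:
        for _ in range(60):
            lab.publish(rng)
    survivors = audit_step(labs, rng)
    print(f"{len(labs) - len(survivors)} labs removed by audit")
```

Restricting audits to labs with `min_papers` of 50 or more mirrors the paper's finding that audits worked best on established labs, where the observed false-positive proportion is a less noisy estimate of the lab's true error rate.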

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Laboratories*
  • Models, Theoretical*
  • Publications / standards
  • Quality Control
  • Reference Standards
  • Research / standards*

Grants and funding

Adrian Barnett and Nicholas Graves are supported by Australian National Health and Medical Research Council Senior Research Fellowships (APP1117784 and APP1059565; http://www.nhmrc.gov.au/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.