Benchmarking Scientific Image Forgery Detectors

Sci Eng Ethics. 2022 Aug 9;28(4):35. doi: 10.1007/s11948-022-00391-4.

Abstract

Research on scientific image integrity faces a challenging bottleneck: the lack of available datasets with which to design and evaluate forensic techniques. The sensitivity of the data also creates legal hurdles that restrict the use of real-world cases in building any publicly accessible forensic benchmark. As a result, there is no comprehensive understanding of the limitations and capabilities of automatic image analysis tools for scientific images, which may create a false sense of data integrity. To mitigate this issue, we present an extendable open-source algorithm library that reproduces the most common image forgery operations reported by the research integrity community: duplication, retouching, and cleaning. Using this library and realistic scientific images, we create a large scientific image forgery benchmark (39,423 images) with enriched ground truth. All figures within the benchmark are synthetically doctored using images collected from Creative Commons sources. While collecting the source images, we ensured that they did not present any suspicious integrity problems. Given the high number of papers retracted because of image duplication, this work evaluates state-of-the-art copy-move detection methods on the proposed dataset, using a new metric that requires consistent match detection between the source and the copied region. All evaluated methods performed poorly on this dataset, indicating that scientific images may require a specialized copy-move detector. The dataset and source code are available at https://github.com/phillipecardenuto/rsiil.
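For illustration only, the following is a minimal sketch of what a duplication (copy-move) forgery operation with enriched ground truth could look like; it is not the authors' library API, and the function name `duplicate_region`, its parameters, and the two-label mask convention are hypothetical assumptions introduced here.

```python
import numpy as np


def duplicate_region(image, src_box, dst_xy):
    """Hypothetical sketch of a copy-move (duplication) forgery.

    Copies a rectangular region defined by src_box = (row, col, height, width)
    and pastes it with its top-left corner at dst_xy = (row, col).
    Returns the forged image and a ground-truth mask that distinguishes
    the source region (label 1) from the pasted copy (label 2), so a
    detector can be scored on matching the copy back to its true origin.
    """
    forged = image.copy()
    mask = np.zeros(image.shape[:2], dtype=np.uint8)

    r, c, h, w = src_box
    patch = image[r:r + h, c:c + w].copy()

    dr, dc = dst_xy
    forged[dr:dr + h, dc:dc + w] = patch

    # Enriched ground truth: mark both regions with distinct labels.
    mask[r:r + h, c:c + w] = 1
    mask[dr:dr + h, dc:dc + w] = 2
    return forged, mask


if __name__ == "__main__":
    # Toy usage on a random image; real usage would load a scientific figure.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
    forged, gt = duplicate_region(img, src_box=(20, 20, 64, 64), dst_xy=(150, 150))
    print(forged.shape, np.unique(gt))
```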

Keywords: Computational scientific integrity; Image manipulation; Misconduct detection; Scientific integrity benchmark.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Benchmarking*
  • Image Processing, Computer-Assisted
  • Software