Recognizability bias in citizen science photographs

R Soc Open Sci. 2023 Feb 1;10(2):221063. doi: 10.1098/rsos.221063. eCollection 2023 Feb.

Abstract

Citizen science and automated collection methods increasingly depend on image recognition to provide the volumes of observational data that research and management need. Recognition models, in turn, require large amounts of data from these sources, creating a feedback loop between the methods and the tools. Species that are harder to recognize, for both humans and machine-learning algorithms, are likely to be under-reported and thus less prevalent in the training data. As a result, the feedback loop may hamper training most for the species that already pose the greatest challenge. In this study, we trained recognition models for various taxa and found evidence of a 'recognizability bias', whereby species that are more readily identified by humans and recognition models alike are more prevalent in the available image data. This pattern is present across multiple taxa and does not appear to relate to differences in picture quality, biological traits or data-collection metrics other than recognizability. This has implications for the expected performance of future models trained with more data, including data on such challenging species.

Keywords: citizen science; image recognition; machine learning; recognizability.

Associated data

  • figshare/10.6084/m9.figshare.c.6403470