Human peripheral blur is optimal for object recognition

Vision Res. 2022 Nov:200:108083. doi: 10.1016/j.visres.2022.108083. Epub 2022 Jul 10.

Abstract

Our vision is sharpest at the centre of our gaze and becomes progressively blurry into the periphery. It is widely believed that this high foveal resolution evolved at the expense of peripheral acuity. But what if this sampling scheme is actually optimal for object recognition? To test this hypothesis, we trained deep neural networks on "foveated" images mimicking how our eyes sample the visual field: objects (wherever they were in the image) were sampled at high resolution, and their surroundings were sampled with decreasing resolution away from the objects. Remarkably, networks trained with the known human peripheral blur profile yielded the best performance compared to networks trained on shallower and steeper blur profiles, and compared to baseline state-of-the-art networks trained on full-resolution images. This improvement, although slight, is noteworthy since the state-of-the-art networks are already trained to saturation on these datasets. When we tested human subjects on object categorization, their accuracy deteriorated only for steeper blur profiles, which is expected since they already have peripheral blur in their eyes. Taken together, our results suggest that blurry peripheral vision may have evolved to optimize object recognition rather than merely due to wiring constraints.
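The foveated sampling described above can be illustrated with a minimal sketch: pixels farther from a fixation point are drawn from progressively more blurred copies of the image. This is an assumption-laden toy version, not the paper's pipeline; in particular, the linear blur-with-eccentricity profile, the `slope` parameter, the single fixation at the object centre, and the `foveate` function itself are all hypothetical choices for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, center, slope=0.05, levels=6):
    """Toy eccentricity-dependent blur (illustrative, not the paper's method).

    image  : 2-D grayscale array
    center : (row, col) fixation point, here standing in for the object location
    slope  : hypothetical linear blur profile -- Gaussian sigma grows as
             slope * distance-from-fixation (the paper varies this profile)
    levels : number of pre-blurred copies used to approximate continuous blur
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - center[0], xx - center[1])   # eccentricity in pixels
    sigma = slope * ecc                              # desired blur per pixel

    # Pre-blur the image at a few sigma levels, then give each pixel the
    # value from the level whose sigma is closest to its desired blur.
    sigmas = np.linspace(0.0, sigma.max(), levels)
    stack = np.stack([image if s == 0 else gaussian_filter(image, s)
                      for s in sigmas])
    idx = np.abs(sigma[None] - sigmas[:, None, None]).argmin(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

With `slope=0`, the image is returned unblurred everywhere; steeper slopes reproduce the "steeper blur profiles" the abstract contrasts against the human-like one.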

Keywords: Blur perception; Convolutional neural networks; Deep learning; Deep neural networks; Object recognition; Peripheral vision.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Fovea Centralis
  • Humans
  • Neural Networks, Computer
  • Pattern Recognition, Visual*
  • Visual Fields
  • Visual Perception*