Biased orientation representations can be explained by experience with nonuniform training set statistics

J Vis. 2021 Aug 2;21(8):10. doi: 10.1167/jov.21.8.10.

Abstract

Visual acuity is better for vertical and horizontal orientations than for oblique ones. This cross-species phenomenon is often explained by "efficient coding," whereby more neurons show sharper tuning for the orientations most common in natural vision. However, it is unclear whether experience alone can account for such biases. Here, we measured orientation representations in a convolutional neural network, VGG-16, trained on modified versions of ImageNet (rotated by 0°, 22.5°, or 45° counterclockwise of upright). Discriminability for each model was highest near the orientations that were most common in the network's training set. Furthermore, there was an overrepresentation of narrowly tuned units selective for the most common orientations. These effects emerged in middle layers and increased with depth in the network, though this layer-wise pattern may depend on properties of the evaluation stimuli used. Biases emerged early in training, consistent with the possibility that nonuniform representations play a functional role in the network's task performance. Together, our results suggest that biased orientation representations can emerge through experience with a nonuniform distribution of orientations, supporting the efficient coding hypothesis.
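The core manipulation above is rotating every training image by a fixed angle before training. The abstract does not include the authors' preprocessing code; as a minimal illustration of the idea, here is a hedged sketch of counterclockwise image rotation using inverse-mapping nearest-neighbor sampling on a plain 2D grid (a real pipeline would presumably use an image library with interpolation; the function name `rotate_image` is our own).

```python
import math

def rotate_image(img, degrees):
    """Rotate a 2D grid counterclockwise by `degrees` about its center,
    using inverse-mapping nearest-neighbor sampling (out-of-bounds -> 0).
    Illustrative only; not the authors' actual preprocessing code."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    theta = math.radians(degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse map: find which source pixel lands at this output pixel.
            dx, dy = x - cx, y - cy
            sx = cos_t * dx - sin_t * dy + cx
            sy = sin_t * dx + cos_t * dy + cy
            sxi, syi = round(sx), round(sy)
            if 0 <= syi < h and 0 <= sxi < w:
                out[y][x] = img[syi][sxi]
    return out

# A 90° counterclockwise rotation moves the top-right pixel to the top-left.
img = [[0, 0, 1],
       [0, 0, 0],
       [0, 0, 0]]
rotated = rotate_image(img, 90)
```

Applying such a fixed rotation to every image shifts the dominant cardinal (vertical/horizontal) orientation statistics of natural scenes to the corresponding oblique angles, which is what lets the study dissociate experience-driven biases from any built-in anisotropy.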

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Neural Networks, Computer
  • Neurons
  • Orientation
  • Vision, Ocular
  • Visual Cortex*