From convolutional neural networks to models of higher-level cognition (and back again)

Ann N Y Acad Sci. 2021 Dec;1505(1):55-78. doi: 10.1111/nyas.14593. Epub 2021 Mar 22.

Abstract

The remarkable successes of convolutional neural networks (CNNs) in modern computer vision are by now well known, and they are increasingly being explored as computational models of the human visual system. In this paper, we ask whether CNNs might also provide a basis for modeling higher-level cognition, focusing on the core phenomena of similarity and categorization. The most important advance comes from the ability of CNNs to learn high-dimensional representations of complex naturalistic images, substantially extending the scope of traditional cognitive models, which had previously been evaluated only with simple artificial stimuli. In all cases, the most successful combinations arise when CNN representations are used with cognitive models that have the capacity to transform them to better fit human behavior. One consequence of these insights is a toolkit for integrating cognitively motivated constraints back into CNN training paradigms in computer vision and machine learning, and we review cases where this leads to improved performance. A second consequence is a roadmap for how CNNs and cognitive models can be more fully integrated in the future, yielding flexible end-to-end algorithms that can learn representations from data while retaining the structured behavior characteristic of human cognition.
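To make the abstract's central idea concrete, the following minimal sketch (not the authors' code) illustrates one common way CNN representations can be "transformed to better fit human behavior": learning a reweighting of fixed CNN feature dimensions so that inner products between image representations better predict human pairwise similarity judgments. All data below are synthetic placeholders; in practice, `features` would be penultimate-layer CNN activations for a set of images and `human_sim` a matrix of collected similarity ratings.

    # Sketch: reweight CNN feature dimensions to predict human similarity judgments.
    # Assumes synthetic stand-in data; real use would plug in CNN activations
    # and human ratings for the same image set.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    n_images, n_dims = 50, 128
    features = rng.normal(size=(n_images, n_dims))            # stand-in CNN features
    human_sim = rng.uniform(0, 1, size=(n_images, n_images))  # stand-in similarity ratings
    human_sim = (human_sim + human_sim.T) / 2                 # symmetrize the rating matrix

    # Model: sim(i, j) ~ sum_k w_k * x_ik * x_jk, a diagonal reweighting of the
    # CNN feature space, fit by ridge regression over image pairs.
    pairs_i, pairs_j = np.triu_indices(n_images, k=1)
    X = features[pairs_i] * features[pairs_j]   # element-wise feature products per pair
    y = human_sim[pairs_i, pairs_j]

    model = Ridge(alpha=1.0).fit(X, y)
    weights = model.coef_                        # learned importance of each CNN dimension

    predicted = X @ weights + model.intercept_
    print("correlation with (synthetic) human similarity:",
          np.corrcoef(predicted, y)[0, 1])

The same reweighted representation could then feed a downstream cognitive model of categorization (e.g., a prototype or exemplar model), which is the kind of combination the abstract describes.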

Keywords: categorization; cognitive modeling; convolutional neural networks; similarity; vision.

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.
  • Review

MeSH terms

  • Animals
  • Cognition / physiology*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Machine Learning*
  • Neural Networks, Computer*
  • Visual Perception / physiology*