The relational bottleneck as an inductive bias for efficient abstraction

Trends Cogn Sci. 2024 May 9:S1364-6613(24)00080-9. doi: 10.1016/j.tics.2024.04.001. Online ahead of print.

Abstract

A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between connectionist and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the relational bottleneck. In this approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
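To make the core idea concrete (this is an illustrative sketch, not any specific model from the review), a relational bottleneck can be implemented as a layer that passes downstream only the pairwise similarity structure of its inputs, discarding the individual feature values. Below, a hypothetical `relational_bottleneck` function in NumPy computes cosine similarities between input embeddings; two stimulus sets with different surface attributes but the same relational structure then yield identical outputs:

```python
import numpy as np

def relational_bottleneck(x):
    """Map a set of input embeddings to their pairwise relations only.

    x: (n, d) array of per-object feature vectors.
    Returns an (n, n) matrix of cosine similarities. Downstream layers
    see only these relations, never the attributes of individual inputs.
    """
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    z = x / np.clip(norms, 1e-8, None)  # unit-normalize each embedding
    return z @ z.T                      # inner products = pairwise relations

# Two stimulus sets with different attributes but the same relational
# structure (items 0 and 1 match; item 2 is the odd one out):
a = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 3.0], [0.0, 3.0], [3.0, 0.0]])

# The bottleneck abstracts over the attributes: both sets produce the
# same relation matrix, so downstream computation generalizes across them.
assert np.allclose(relational_bottleneck(a), relational_bottleneck(b))
```

Because only relations pass through the bottleneck, an abstraction such as "odd one out" learned on one set of stimuli transfers to stimuli with entirely different attributes, illustrating the data efficiency the abstract describes.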

Keywords: abstraction; inductive biases; neural networks; relations; symbol processing.

Publication types

  • Review