Deep learning and the Global Workspace Theory

Trends Neurosci. 2021 Sep;44(9):692-704. doi: 10.1016/j.tins.2021.04.005. Epub 2021 May 14.

Abstract

Recent advances in deep learning have allowed artificial intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, and cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace Theory (GWT) posits a large-scale system that integrates and distributes information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep-learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal Global Latent Workspace (GLW). Potential functional advantages of GLW are reviewed, along with neuroscientific implications.
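The abstract's core mechanism, unsupervised neural translation between pretrained latent spaces, is commonly trained with a cycle-consistency objective: a round trip from one latent space to the other and back should reproduce the original code. The toy sketch below illustrates that objective only; it is not the authors' implementation. It assumes linear translators (`Wab`, `Wba` are hypothetical names) fit by plain gradient descent on unpaired latent samples; a real system would use nonlinear networks and combine cycle consistency with distribution-matching or supervised alignment losses.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 512                      # toy latent dimensionality, unpaired samples

# Unpaired latent codes from two hypothetical pretrained encoders.
Za = rng.standard_normal((d, n))   # latent space of modality A
Zb = rng.standard_normal((d, n))   # latent space of modality B

# Linear translators between the two spaces (stand-ins for networks).
Wab = 0.5 * rng.standard_normal((d, d))   # A -> B
Wba = 0.5 * rng.standard_normal((d, d))   # B -> A

def cycle_loss():
    """Mean squared round-trip error: A -> B -> A plus B -> A -> B."""
    Ea = Wba @ (Wab @ Za) - Za
    Eb = Wab @ (Wba @ Zb) - Zb
    return (np.sum(Ea**2) + np.sum(Eb**2)) / n

initial = cycle_loss()
lr = 0.02
for _ in range(2000):
    P, Q = Wab @ Za, Wba @ Zb
    Ea, Eb = Wba @ P - Za, Wab @ Q - Zb
    # Analytic gradients of the cycle-consistency loss above.
    gWab = (2 / n) * (Wba.T @ Ea @ Za.T + Eb @ Q.T)
    gWba = (2 / n) * (Ea @ P.T + Wab.T @ Eb @ Zb.T)
    Wab -= lr * gWab
    Wba -= lr * gWba

print(f"cycle loss: {initial:.3f} -> {cycle_loss():.6f}")
```

After training, round trips through the other latent space become nearly lossless, which is the property a shared workspace needs so that information broadcast from one module can be re-encoded by another without degradation.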

Keywords: attention; broadcast; consciousness; grounding; latent space; multimodal translation.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Artificial Intelligence
  • Brain
  • Cognition
  • Deep Learning*
  • Humans
  • Neural Networks, Computer