Computational Principles of Supervised Learning in the Cerebellum

Annu Rev Neurosci. 2018 Jul 8;41:233-253. doi: 10.1146/annurev-neuro-080317-061948.

Abstract

Supervised learning plays a key role in the operation of many biological and artificial neural networks. Analysis of the computations underlying supervised learning is facilitated by the relatively simple and uniform architecture of the cerebellum, a brain area that supports numerous motor, sensory, and cognitive functions. We highlight recent discoveries indicating that the cerebellum implements supervised learning using the following organizational principles: (a) extensive preprocessing of input representations (i.e., feature engineering), (b) massively recurrent circuit architecture, (c) linear input-output computations, (d) sophisticated instructive signals that can be regulated and are predictive, (e) adaptive mechanisms of plasticity with multiple timescales, and (f) task-specific hardware specializations. The principles emerging from studies of the cerebellum have striking parallels with those in other brain areas and in artificial neural networks, as well as some notable differences, which can inform future research on supervised learning and inspire next-generation machine-based algorithms.
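Several of these principles map directly onto classical Marr-Albus-style models of cerebellar learning: mossy-fiber inputs are expanded into a sparse granule-cell code (feature engineering), a Purkinje cell forms a linear readout of that code, and a climbing-fiber error signal drives plasticity at parallel-fiber synapses. The sketch below illustrates that mapping; it is a minimal toy model, not the review's own implementation, and all names, sizes, and parameters (e.g., N_MOSSY, theta, lr) are illustrative assumptions.

```python
# Minimal sketch of a Marr-Albus-style cerebellar supervised learner.
# Assumptions (not from the review): a fixed random mossy-fiber -> granule-cell
# expansion, a linear Purkinje-cell readout, and delta-rule plasticity driven
# by a climbing-fiber-like error signal.
import numpy as np

rng = np.random.default_rng(0)

N_MOSSY, N_GRANULE = 20, 500   # large fan-out mimics the granule-layer expansion

# Fixed random expansion with a firing threshold: the "feature engineering"
# step that sparsifies and decorrelates the mossy-fiber input code.
W_expand = rng.normal(size=(N_GRANULE, N_MOSSY))
theta = 1.0                     # illustrative granule-cell threshold

def granule_code(x):
    return np.maximum(W_expand @ x - theta, 0.0)  # rectified, sparse expansion

# Linear input-output computation: Purkinje output is a weighted sum of
# granule (parallel-fiber) activity.
w_pf = np.zeros(N_GRANULE)      # parallel-fiber synaptic weights

def purkinje_output(g):
    return w_pf @ g

# Stand-in target mapping the circuit should learn (e.g., a motor command).
w_true = rng.normal(size=N_MOSSY)

def target(x):
    return np.tanh(w_true @ x)

lr = 1e-4                       # illustrative learning rate
for step in range(5000):
    x = rng.normal(size=N_MOSSY)
    g = granule_code(x)
    err = target(x) - purkinje_output(g)   # climbing-fiber-like error signal
    w_pf += lr * err * g                   # delta-rule plasticity at PF synapses

# After training, the linear readout of the expanded code approximates the target.
x_test = rng.normal(size=N_MOSSY)
print(f"target={target(x_test):+.3f}  output={purkinje_output(granule_code(x_test)):+.3f}")
```

The design choice worth noting is that only the readout weights are plastic: the random expansion stays fixed, yet it lets a purely linear learning rule fit a nonlinear target, which is the computational rationale usually given for the granule layer.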

Keywords: Purkinje cell; climbing fiber; consolidation; decorrelation; machine learning; plasticity.

Publication types

  • Research Support, N.I.H., Extramural
  • Review

MeSH terms

  • Algorithms
  • Animals
  • Cerebellum / cytology
  • Cerebellum / physiology*
  • Humans
  • Models, Neurological*
  • Nerve Net / physiology*
  • Neuronal Plasticity / physiology
  • Neurons / physiology*
  • Supervised Machine Learning*
  • Time Factors