Lifelong learning with Shared and Private Latent Representations learned through synaptic intelligence

Neural Netw. 2023 Jun;163:165-177. doi: 10.1016/j.neunet.2023.04.005. Epub 2023 Apr 11.

Abstract

This paper presents a novel lifelong learning method with Shared and Private Latent Representations (SPLR), learned through synaptic intelligence. To solve a sequence of tasks, SPLR considers the entire parameter learning trajectory: it learns a task-invariant representation, whose parameters change little along the trajectory, and task-specific features, whose parameters change greatly. In lifelong learning scenarios, the model therefore acquires a task-invariant structure shared by all tasks, together with private properties specific to each task. To reduce the number of parameters, an ℓ1 regularization is applied to the weights to promote sparsity. We evaluate SPLR on multiple datasets in lifelong learning settings; it achieves performance comparable to existing lifelong learning approaches while learning a sparse network, which means fewer parameters and less model training time.
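The abstract's core mechanism, ranking each weight by how much it moved and how much it contributed to loss reduction over the whole training trajectory, can be sketched with the path-integral importance measure from synaptic intelligence (Zenke et al., 2017), which the paper builds on. The split into shared and private weights below is a hypothetical illustration of the shared/private idea, not the paper's exact criterion; function names and the threshold are assumptions.

```python
import numpy as np

def si_importance(grads, deltas, xi=1e-3):
    """Synaptic-intelligence-style per-parameter importance.

    grads[t], deltas[t]: gradient and parameter update at step t,
    each with the same shape as the parameter vector. The path
    integral of -grad * update approximates each weight's contribution
    to the loss decrease over the entire trajectory; it is normalized
    by the squared net drift (plus damping xi) as in Zenke et al.
    """
    path_integral = -np.sum([g * d for g, d in zip(grads, deltas)], axis=0)
    total_drift = np.sum(deltas, axis=0)  # net parameter change over the task
    return path_integral / (total_drift ** 2 + xi), total_drift

def split_shared_private(total_drift, threshold):
    """Hypothetical shared/private split: weights that moved little along
    the trajectory are candidates for the shared (task-invariant) part;
    weights that moved a lot are treated as private (task-specific)."""
    shared_mask = np.abs(total_drift) < threshold
    return shared_mask, ~shared_mask
```

On top of such a split, the abstract's ℓ1 term would simply add `lam * np.abs(w).sum()` to the per-task loss to drive unimportant weights toward zero.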

Keywords: Entire learning trajectory; Lifelong learning; Shared and Private Latent Representations; Synaptic Intelligence; Task-invariant; Task-specific.

MeSH terms

  • Education, Continuing
  • Intelligence
  • Learning*
  • Machine Learning*