Associating Latent Representations With Cognitive Maps via Hyperspherical Space for Neural Population Spikes

IEEE Trans Neural Syst Rehabil Eng. 2022;30:2886-2895. doi: 10.1109/TNSRE.2022.3212997. Epub 2022 Oct 20.

Abstract

Recently, there has been growing interest in applying advances in representation learning to obtain more identifiable and interpretable latent representations of spike trains, which helps analyze neural population activity and understand neural mechanisms. Most existing deep generative models adopt carefully designed constraints to capture meaningful latent representations. For neural data involving navigation in cognitive space, and based on insights from studies of cognitive maps, we argue that good representations should reflect this directional nature. Due to manifold mismatch, models that use a Euclidean latent space learn a distorted geometric structure that is difficult to interpret. In the present work, we explore capturing this directional nature in a simpler yet more efficient way by introducing hyperspherical neural latent variable models (SNLVM). SNLVM is an improved deep latent variable model that jointly models neural activity and behavioral variables with a hyperspherical latent space, bridging cognitive maps and latent variable models. We conduct experiments on modeling a static unidirectional task. The results show that while SNLVM achieves competitive performance, a hyperspherical prior naturally provides more informative and significantly better latent structures that can be interpreted as spatial cognitive maps.
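The abstract does not specify the SNLVM architecture. As a minimal sketch of the core idea of a hyperspherical latent space, the PyTorch toy model below encodes binned spike counts, projects the latent onto the unit sphere via L2 normalization, and decodes Poisson firing rates. All names (SphericalLatentModel, poisson_nll), layer sizes, and the training setup are illustrative assumptions, not the authors' implementation, which likely uses a proper hyperspherical prior (e.g., von Mises-Fisher) rather than a plain projection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphericalLatentModel(nn.Module):
    """Toy autoencoder with a hyperspherical latent space.

    Hypothetical sketch: binned spike counts -> latent constrained to
    the unit sphere S^(d-1) -> per-neuron log firing rates. The actual
    SNLVM architecture is not described in the abstract.
    """

    def __init__(self, n_neurons: int, latent_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_neurons, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_neurons),
        )

    def forward(self, spikes: torch.Tensor):
        # L2-normalize so the latent lives on the unit hypersphere,
        # making the learned representation explicitly directional.
        z = F.normalize(self.encoder(spikes), dim=-1)
        log_rate = self.decoder(z)  # log firing rate per neuron
        return z, log_rate

def poisson_nll(log_rate: torch.Tensor, spikes: torch.Tensor) -> torch.Tensor:
    # Poisson negative log-likelihood for spike-count observations.
    return F.poisson_nll_loss(log_rate, spikes, log_input=True)

# Usage: one gradient step on random spike-count data
# (a placeholder for real population recordings).
model = SphericalLatentModel(n_neurons=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
spikes = torch.poisson(torch.rand(128, 50) * 5.0)
z, log_rate = model(spikes)
loss = poisson_nll(log_rate, spikes)
loss.backward()
opt.step()
```

The normalization step is what distinguishes this sketch from a Euclidean autoencoder: latent states can only differ in direction, not magnitude, so a circular or unidirectional navigation variable maps onto the sphere without the manifold mismatch the abstract describes.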

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Cognition
  • Humans
  • Learning*
  • Models, Theoretical*