Learning to represent signals spike by spike

PLoS Comput Biol. 2020 Mar 16;16(3):e1007692. doi: 10.1371/journal.pcbi.1007692. eCollection 2020 Mar.

Abstract

Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.
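The coordinated spike coding scheme the abstract summarizes can be sketched in a few lines: each neuron's voltage tracks the projection of the signal-reconstruction error onto that neuron's decoder weights, and a neuron fires exactly when its spike would reduce the error by more than the firing cost. The decoder `D`, decay rate, signal, and network size below are illustrative assumptions, not values from the paper; the optimal recurrent weights (−DᵀD) and thresholds (‖Dᵢ‖²/2) that the paper's learning rules converge to are here written down directly rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

N, J = 20, 2                       # neurons, signal dimensions (illustrative)
dt, lam = 1e-3, 10.0               # time step and readout decay rate (assumed)
D = rng.standard_normal((J, N))
D /= 10 * np.linalg.norm(D, axis=0)    # random decoder directions, small gain

# Optimal thresholds from the theory: T_i = ||D_i||^2 / 2.
# (The optimal recurrent weights -D^T D are implicit below, since the
# voltage is computed from the shared readout error.)
T = 0.5 * np.sum(D**2, axis=0)

steps = 5000
t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # 2-D target

r = np.zeros(N)                    # filtered (leaky) spike trains
xhat = np.zeros((J, steps))        # network readout over time
n_spikes = 0
for k in range(steps):
    V = D.T @ (x[:, k] - D @ r)    # voltage = projected coding error
    i = np.argmax(V - T)
    if V[i] > T[i]:                # fire only if the spike reduces the error
        r[i] += 1.0                #   by more than the firing cost
        n_spikes += 1
    r *= 1 - lam * dt              # leaky decay of the readout
    xhat[:, k] = D @ r

err = np.mean((x - xhat) ** 2)
print(f"{n_spikes} spikes, mse = {err:.4f}")
```

Because each neuron spikes only when its voltage (the projected error) exceeds half its decoder norm squared, every spike strictly decreases the reconstruction error, and the population tracks the signal with few, precisely timed spikes.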

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Action Potentials / physiology*
  • Computer Simulation*
  • Learning / physiology*
  • Models, Neurological*
  • Nerve Net / physiology
  • Neurons / physiology*

Grants and funding

This work was funded by the James McDonnell Foundation Award, EU grants BACS FP6-IST-027140, BIND MECT-CT-20095–024831, and ERC FP7-PREDSPIKE to SD; by the Emmy-Noether grant of the Deutsche Forschungsgemeinschaft (Germany) and a Chaire d’excellence of the Agence Nationale de la Recherche (France) to CKM; and by an FCT scholarship (PD/BD/105944/2014 Ref.a CRM:0022114) to PV. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.