Neuronal-Plasticity and Reward-Propagation Improved Recurrent Spiking Neural Networks

Front Neurosci. 2021 Mar 12;15:654786. doi: 10.3389/fnins.2021.654786. eCollection 2021.

Abstract

Various dynamics and plasticity principles found in natural neural networks have been successfully applied to spiking neural networks (SNNs), which offer more biologically plausible, efficient, and robust computation than their deep neural network (DNN) counterparts. Here, we propose a Neuronal-plasticity and Reward-propagation improved Recurrent SNN (NRR-SNN). A history-dependent adaptive threshold with two channels is highlighted as a key form of neuronal plasticity that enriches neuronal dynamics, and global labels, rather than errors, are used as the reward signal for parallel gradient propagation. In addition, a recurrent loop with appropriate sparseness is designed for robust computation. The model achieves higher accuracy and more robust computation on two sequential datasets (the TIDigits and TIMIT datasets), demonstrating, to some extent, the power of the proposed NRR-SNN and its biologically plausible improvements.
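The history-dependent adaptive threshold mentioned above can be illustrated with a minimal sketch of a leaky integrate-and-fire neuron whose firing threshold rises after each spike and decays back over time (an ALIF-style neuron). This is not the paper's exact two-channel formulation; the function name, parameter values, and single adaptation channel here are illustrative assumptions.

```python
import numpy as np

def alif_step(v, theta, i_in, tau_m=20.0, tau_a=200.0,
              b0=1.0, beta=1.8, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron with an
    adaptive threshold. All constants are hypothetical, not from the paper.

    v     : membrane potentials, shape (n,)
    theta : threshold adaptation variables, shape (n,)
    i_in  : input currents at this step, shape (n,)
    """
    # Leaky membrane integration toward the input current
    v = v + dt / tau_m * (-v + i_in)
    # Effective threshold grows with the adaptation variable theta
    spike = (v >= b0 + beta * theta).astype(float)
    # Reset membrane on spike
    v = v * (1.0 - spike)
    # Adaptation decays slowly and jumps on each spike, so recent
    # firing history raises the threshold (history-dependent plasticity)
    theta = theta + dt / tau_a * (-theta) + spike
    return v, theta, spike
```

Driving such a neuron with a constant input produces an initially fast, then slowing spike train, since each spike raises the effective threshold before it decays back toward the baseline `b0`.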

Keywords: neuronal plasticity; reward propagation; sparse connections; spiking neural network; synaptic plasticity.