Rethinking Pretraining as a Bridge From ANNs to SNNs

IEEE Trans Neural Netw Learn Syst. 2022 Nov 14:PP. doi: 10.1109/TNNLS.2022.3217796. Online ahead of print.

Abstract

Spiking neural networks (SNNs) are brain-inspired models characterized by rich neuronal dynamics, diverse coding schemes, and low power consumption. Obtaining high-accuracy models has long been a central challenge in the SNN field. Currently, there are two mainstream approaches: converting a well-trained artificial neural network (ANN) into its SNN counterpart, or training an SNN directly. However, converted SNNs suffer from long inference times, while direct SNN training is generally costly and inefficient. In this work, a new SNN training paradigm is proposed that combines the strengths of both approaches by pairing ANN pretraining with a backpropagation (BP)-based deep SNN training mechanism. We believe the proposed paradigm offers a more efficient pipeline for training SNNs. The pipeline comprises pipe-S for static-data transfer tasks and pipe-D for dynamic-data transfer tasks. State-of-the-art (SOTA) results are obtained on the large-scale event-driven dataset ES-ImageNet. Regarding training acceleration, we achieve the same (or higher) best accuracy as comparable leaky integrate-and-fire (LIF) SNNs using 1/8 of the training time on ImageNet-1K and 1/2 of the training time on ES-ImageNet, and we also provide a time-accuracy benchmark for the new dataset ES-UCF101. These experimental results reveal the functional similarity between the parameters of ANNs and SNNs and demonstrate various potential applications of this SNN training pipeline.
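To make the paradigm concrete, the following is a minimal, hypothetical PyTorch sketch of the core idea described in the abstract: initialize an SNN layer from a pretrained ANN layer, then fine-tune it with surrogate-gradient BP through time. The rectangular surrogate window, decay constant, hard reset, and rate-coded readout here are illustrative assumptions, not the paper's exact pipe-S/pipe-D implementations.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate gradient in backward."""
    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradient only near the firing threshold (window is an assumption).
        grad = grad_output * ((v - ctx.threshold).abs() < 0.5).float()
        return grad, None

class LIFLayer(nn.Module):
    """LIF dynamics wrapped around a linear weight that can be copied from an ANN."""
    def __init__(self, in_features, out_features, decay=0.5, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):  # x_seq: (T, batch, in_features)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x in x_seq:
            v = self.decay * v + self.fc(x)            # leaky integration
            s = SurrogateSpike.apply(v, self.threshold)
            v = v * (1.0 - s)                          # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)

# Hypothetical transfer step: copy pretrained ANN weights into the matching
# LIF layer, then fine-tune with BP through time via the surrogate gradient.
ann_layer = nn.Linear(784, 10)             # stand-in for a pretrained ANN layer
snn_layer = LIFLayer(784, 10)
snn_layer.fc.load_state_dict(ann_layer.state_dict())

T, batch = 4, 8
x_seq = torch.rand(T, batch, 784)          # rate-coded input over T time steps
out = snn_layer(x_seq).mean(0)             # average firing rate as the readout
loss = out.sum()                           # placeholder loss for illustration
loss.backward()                            # gradients flow through the surrogate
```

The design choice this sketch highlights is the one the abstract argues for: because ANN and SNN parameters play similar functional roles, the pretrained weights provide a strong initialization, so the surrogate-gradient fine-tuning stage can converge in a fraction of the time required to train the SNN from scratch.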