Spiking Deep Residual Networks

IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.

Abstract

Spiking neural networks (SNNs) have received significant attention for their biological plausibility. In theory, SNNs have at least the same computational power as traditional artificial neural networks (ANNs), and they hold the potential to achieve energy-efficient machine intelligence while matching ANN performance. However, training a very deep SNN remains a major challenge. In this brief, we propose an efficient approach to building deep SNNs. The residual network (ResNet) is a state-of-the-art and fundamental architecture among convolutional neural networks (CNNs); we convert a trained ResNet into a network of spiking neurons, named spiking ResNet (S-ResNet). To this end, we propose a residual conversion model that appropriately scales the continuous-valued activations of the ANN to match the firing rates of the SNN, together with a compensation mechanism that reduces the error introduced by discretization. Experimental results show that the proposed method achieves state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet 2012 with low latency. To our knowledge, this is the first work to build an asynchronous SNN deeper than 100 layers while retaining performance comparable to its original ANN.
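The brief itself includes no code, but the conversion idea it describes can be illustrated concretely. The Python sketch below is a minimal, hypothetical example of rate-based ANN-to-SNN conversion: an integrate-and-fire neuron with reset-by-subtraction, a generic compensation for discretization error, standing in for (not reproducing) the authors' residual conversion model. The names IFNeuron and activation_to_rate are illustrative assumptions.

```python
# Hypothetical illustration of rate-based ANN-to-SNN conversion.
# Not the paper's exact residual conversion model: a generic
# integrate-and-fire neuron with reset-by-subtraction, one common
# way to compensate for discretization error.

class IFNeuron:
    """Integrate-and-fire neuron that keeps its residual potential.

    Resetting by subtraction (rather than zeroing the membrane) means
    charge left over after a spike still counts toward future spikes,
    so the firing rate converges to the scaled input activation as the
    number of simulation steps grows.
    """

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
        self.v = 0.0  # membrane potential

    def step(self, input_current: float) -> int:
        self.v += input_current
        if self.v >= self.threshold:
            self.v -= self.threshold  # subtract, do not zero
            return 1
        return 0


def activation_to_rate(activation: float, max_activation: float,
                       n_steps: int = 256) -> float:
    """Scale an ANN activation into [0, 1] so the layer's peak
    activation maps to the maximum firing rate, then estimate the
    resulting spike rate over n_steps (hypothetical helper)."""
    scaled = activation / max_activation
    neuron = IFNeuron()
    spikes = sum(neuron.step(scaled) for _ in range(n_steps))
    return spikes / n_steps


if __name__ == "__main__":
    # The estimated rate tracks the scaled activation: ~0.37 here.
    print(activation_to_rate(activation=3.7, max_activation=10.0))
```

Reset-by-subtraction is used in this sketch because zeroing the potential would discard sub-threshold charge and make the firing rate systematically underestimate the activation, which is the kind of discretization error the abstract's compensation mechanism is aimed at.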