A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks

Neural Netw. 2023 Aug:165:164-174. doi: 10.1016/j.neunet.2023.05.038. Epub 2023 May 24.

Abstract

Spiking Neural Networks (SNNs) are recognized as the third generation of neural networks. Conventionally, an SNN can be converted from a pre-trained Artificial Neural Network (ANN) with less computation and memory than training from scratch, but such converted SNNs are vulnerable to adversarial attacks. Numerical experiments demonstrate that an SNN trained directly by optimizing the loss function is more adversarially robust, yet a theoretical analysis of the mechanism behind this robustness has been lacking. In this paper, we provide a theoretical explanation by analyzing the expected risk function. By modeling the stochastic process introduced by the Poisson encoder, we prove that the expected risk contains a positive semidefinite regularizer. Perhaps surprisingly, this regularizer drives the gradients of the output with respect to the input closer to zero, resulting in inherent robustness against adversarial attacks. Extensive experiments on the CIFAR10 and CIFAR100 datasets support this view. For example, we find that the sum of squared input gradients of converted SNNs is 13∼160 times that of directly trained SNNs, and the smaller this sum, the smaller the degradation in accuracy under adversarial attack.
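To make the mechanism concrete, the following is an illustrative first-order sketch (not necessarily the paper's exact derivation) of how zero-mean, isotropic input noise with variance σ², such as the stochasticity introduced by a Poisson encoder, gives rise to a positive semidefinite gradient penalty in the expected squared loss:

```latex
% Illustrative sketch, assuming a first-order Taylor expansion of the
% squared loss under zero-mean input noise \varepsilon with
% \mathbb{E}[\varepsilon] = 0 and \operatorname{Cov}(\varepsilon) = \sigma^2 I.
\[
\mathbb{E}_{\varepsilon}\!\left[\big(f(x+\varepsilon) - y\big)^2\right]
  \approx \big(f(x) - y\big)^2
  + \sigma^2 \left\lVert \nabla_x f(x) \right\rVert^2 .
\]
% The cross term 2\,(f(x)-y)\,\nabla_x f(x)^{\top}\varepsilon vanishes in
% expectation, leaving a nonnegative penalty on the input gradients;
% minimizing the expected risk therefore pushes \nabla_x f(x) toward zero.
```

The robustness proxy reported in the abstract, the sum of squares of the input gradients, is also straightforward to measure. Below is a minimal PyTorch sketch using a hypothetical toy classifier in place of the paper's deep SNNs; the architecture, the timestep count T, and the choice of differentiating the cross-entropy loss (rather than the raw outputs) are all assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy classifier standing in for a converted or directly
# trained SNN on CIFAR-sized inputs (the paper's models are deep SNNs).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

def sum_squared_input_gradients(model, x, y):
    """Sum of squares of the gradients w.r.t. the input -- the robustness
    proxy the abstract reports (smaller sum -> smaller accuracy drop)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.pow(2).sum().item()

x = torch.rand(8, 3, 32, 32)   # batch of CIFAR-like images in [0, 1]
y = torch.randint(0, 10, (8,))

# Poisson (rate) encoding: at each of T timesteps a pixel spikes with
# probability equal to its normalized intensity -- the stochastic process
# whose effect the analysis models.
T = 16  # assumed number of simulation timesteps
spike_train = torch.bernoulli(x.expand(T, *x.shape))

print(sum_squared_input_gradients(model, x, y))
```

On this view, comparing the printed quantity between a converted and a directly trained network is exactly the kind of measurement behind the 13∼160× gap the abstract reports.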

Keywords: ANN-to-SNN conversion; Adversarial robustness; Gradient; Regularization; Spiking neural networks.

MeSH terms

  • Neural Networks, Computer*