Generalization of stochastic-resonance-based threshold networks with Tikhonov regularization

Phys Rev E. 2022 Jul;106(1):L012101. doi: 10.1103/PhysRevE.106.L012101.

Abstract

Injecting artificial noise into a feedforward threshold neural network makes it trainable by gradient-based methods and enlarges both the parameter space and the range of synaptic weights. This configuration constitutes a stochastic-resonance-based threshold neural network, in which the noise level can adaptively converge to a nonzero optimal value while seeking a local minimum of the loss criterion. We prove theoretically that the injected noise plays the role of a generalized Tikhonov regularizer for training the designed threshold network. Experiments on regression and classification problems demonstrate that noise injection improves the generalization of the stochastic-resonance-based threshold network. The feasibility of injecting noise into threshold neural networks opens up the potential of adaptive stochastic resonance for machine learning.
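To illustrate the mechanism the abstract describes, the following is a minimal sketch, not the authors' implementation; the module name NoisyThreshold, the initial noise level, and the softplus parameterization of the noise level are illustrative assumptions. Averaging the hard Heaviside activation over zero-mean Gaussian noise of level sigma gives the smooth response Phi(x/sigma), the Gaussian cumulative distribution function, which is differentiable in both the input and sigma. Gradient descent can therefore update the synaptic weights through the threshold and adapt the noise level itself toward a nonzero optimum.

    import math
    import torch
    import torch.nn as nn

    class NoisyThreshold(nn.Module):
        # Heaviside threshold smoothed by additive Gaussian noise:
        # E_z[Heaviside(x + sigma * z)] = Phi(x / sigma) for z ~ N(0, 1),
        # so the noise-averaged response is differentiable in x and sigma.
        def __init__(self, init_sigma=0.5):  # init_sigma is an assumed value
            super().__init__()
            # Unconstrained parameter; softplus below keeps the effective
            # noise level positive while it is trained by backpropagation.
            self.raw_sigma = nn.Parameter(torch.tensor(float(init_sigma)))

        def forward(self, x):
            sigma = nn.functional.softplus(self.raw_sigma)
            # Closed form of the noise-averaged threshold: Phi(x / sigma).
            return 0.5 * (1.0 + torch.erf(x / (sigma * math.sqrt(2.0))))

    # Usage: gradients flow through the smoothed threshold and into sigma.
    act = NoisyThreshold()
    x = torch.linspace(-2.0, 2.0, 5, requires_grad=True)
    act(x).sum().backward()
    print(x.grad)              # nonzero, despite the hard threshold
    print(act.raw_sigma.grad)  # the noise level itself receives a gradient

With sigma fixed at zero the activation reverts to a discontinuous step with zero gradient almost everywhere, which is why a nonzero noise level is what makes gradient-based training possible in this setting.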