Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme

IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):7910-7920. doi: 10.1109/TNNLS.2022.3147031. Epub 2023 Oct 5.

Abstract

Improving the generalization performance of deep neural networks (DNNs) trained by minibatch stochastic gradient descent (SGD) has attracted considerable attention from deep learning practitioners. The standard simple random sampling (SRS) scheme used in minibatch SGD treats all training samples equally in gradient estimation. In this article, we study a new data selection method based on an intrinsic property of the training set that helps DNNs achieve better generalization performance. Our theoretical analysis suggests that this new sampling scheme, called the nontypicality sampling scheme, improves the generalization performance of DNNs by biasing the solution toward wider minima, under certain assumptions. We confirm our findings experimentally and show that other variants of minibatch SGD can also benefit from the new sampling scheme. Finally, we discuss an extension of the nontypicality sampling scheme that holds promise for enhancing both the generalization performance and the convergence speed of minibatch SGD.
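The abstract does not specify how sample typicality is measured or how the biased minibatches are drawn. As a minimal illustrative sketch (not the authors' exact method), one might score each training sample's typicality by its distance to its class centroid in input space and oversample the least typical samples when forming minibatches, leaving the SGD update itself unchanged. The typicality proxy and all names below are assumptions for illustration only.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler


def nontypicality_weights(inputs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Assumed proxy: samples farther from their class centroid are treated as
    less typical and receive a larger sampling weight. The paper's actual
    non-typicality criterion may differ."""
    flat = inputs.flatten(start_dim=1)
    weights = torch.empty(len(labels))
    for c in labels.unique():
        mask = labels == c
        centroid = flat[mask].mean(dim=0)
        weights[mask] = (flat[mask] - centroid).norm(dim=1)
    return weights / weights.sum()


# Toy data standing in for a real training set.
x = torch.randn(1000, 3, 32, 32)
y = torch.randint(0, 10, (1000,))
dataset = TensorDataset(x, y)

# Replace simple random sampling with a distribution biased toward non-typical samples.
sampler = WeightedRandomSampler(
    weights=nontypicality_weights(x, y),
    num_samples=len(dataset),
    replacement=True,
)
loader = DataLoader(dataset, batch_size=128, sampler=sampler)

# Minibatch SGD proceeds as usual; only the minibatch sampling distribution changes.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()
for xb, yb in loader:
    optimizer.zero_grad()
    loss_fn(model(xb), yb).backward()
    optimizer.step()
```

Because the scheme only changes how minibatches are drawn, the same sampler can in principle be paired with other SGD variants (e.g., momentum or adaptive-gradient optimizers), which is consistent with the abstract's claim that other variants of minibatch SGD can also benefit.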