How to handle noisy labels for robust learning from uncertainty

Neural Netw. 2021 Nov:143:209-217. doi: 10.1016/j.neunet.2021.06.012. Epub 2021 Jun 12.

Abstract

Most deep neural networks (DNNs) are trained on large amounts of data containing noisy labels when deployed in practice. Because DNNs have high capacity and can fit arbitrary noisy labels, training them robustly under label noise is known to be difficult. Noisy labels degrade DNN performance through the memorization effect caused by over-fitting. Earlier state-of-the-art methods used the small-loss trick to address the robust training problem with noisy labels. In this paper, the relationship between uncertainty and clean labels is analyzed. We present a novel training method, called "Uncertainty-Aware Co-Training (UACT)", that uses not only the small-loss trick but also labels that are likely to be clean, selected on the basis of uncertainty. Our robust learning technique (UACT) prevents DNNs from over-fitting even to extremely noisy labels. By making better use of the uncertainty estimated by the network itself, we achieve good generalization performance. We compare the proposed method to current state-of-the-art algorithms on noisy versions of MNIST, CIFAR-10, CIFAR-100, T-ImageNet and News to demonstrate its effectiveness.
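The small-loss trick the abstract builds on can be illustrated in a minimal sketch: samples with the smallest per-sample loss are treated as likely clean, and only a fraction of the batch determined by the estimated noise rate is kept for the update. This is an illustrative reconstruction, not the authors' implementation; the function name `select_small_loss`, the `noise_rate` parameter, and the example losses are assumptions for demonstration.

```python
import numpy as np

def select_small_loss(losses, noise_rate):
    """Small-loss trick: keep the fraction (1 - noise_rate) of samples
    with the smallest per-sample loss, treating them as likely clean."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    # argsort ascending: the first n_keep indices have the smallest losses
    return np.argsort(losses)[:n_keep]

# Hypothetical per-sample losses for a mini-batch of 8 examples.
losses = np.array([0.10, 2.30, 0.20, 1.90, 0.05, 0.30, 2.80, 0.15])
clean_idx = select_small_loss(losses, noise_rate=0.5)
print(sorted(clean_idx.tolist()))  # -> [0, 2, 4, 7]
```

In co-training variants such as the one described here, two networks typically exchange their selected small-loss samples, and uncertainty estimates can further filter which of the remaining samples are re-admitted as probably clean.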

Keywords: Deep network; Noisy labels; Robust learning; Uncertain aware joint training.

MeSH terms

  • Algorithms*
  • Neural Networks, Computer*
  • Uncertainty