Noisy Label Learning With Provable Consistency for a Wider Family of Losses

IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13536-13552. doi: 10.1109/TPAMI.2023.3296156. Epub 2023 Oct 3.

Abstract

Deep models have achieved state-of-the-art performance on a broad range of visual recognition tasks. Nevertheless, the generalization ability of deep models is seriously degraded by noisy labels. Although deep learning packages offer many loss functions, it is not transparent to users which of these losses remain consistent under label noise. This paper addresses the problem of how to use the abundant loss functions designed for the traditional classification problem in the presence of label noise. We present a dynamic label learning (DLL) algorithm for noisy label learning and prove that, with our algorithm, any surrogate loss function can be used for classification with noisy labels, with a consistency guarantee that the label noise does not ultimately hinder the search for the optimal classifier on the noise-free data. In addition, we provide an in-depth theoretical analysis of our algorithm to verify its correctness and explain its strong robustness. Finally, experimental results on synthetic and real datasets confirm the efficiency of our algorithm and the correctness of our theoretical analysis, and show that it significantly outperforms, or is comparable to, current state-of-the-art counterparts.
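
As a concrete illustration of this kind of consistency guarantee, the sketch below implements the classic unbiased (backward-corrected) loss estimator of Natarajan et al. (2013) for binary class-conditional label noise, in which any base surrogate loss evaluated on noisy labels equals the clean loss in expectation. This is a standard construction shown for intuition only, not the paper's DLL algorithm; the helper unbiased_loss, the flip rates rho_pos/rho_neg, and the logistic base surrogate are all assumptions made for this example.

    import torch
    import torch.nn.functional as F

    def unbiased_loss(loss_fn, logits, noisy_labels, rho_pos, rho_neg):
        # Backward-corrected surrogate loss for class-conditional label noise
        # (Natarajan et al., 2013). Illustration only; not the DLL algorithm.
        # Labels live in {-1, +1}; rho_pos = P(flip | y = +1), rho_neg = P(flip | y = -1).
        y = noisy_labels.float()
        # Flip rate of the observed class and of the opposite class, per example.
        rho_y = torch.where(y > 0, torch.full_like(y, rho_pos), torch.full_like(y, rho_neg))
        rho_not = torch.where(y > 0, torch.full_like(y, rho_neg), torch.full_like(y, rho_pos))
        # Unbiased estimator: E[corrected | clean label] = loss_fn on the clean label.
        corrected = ((1.0 - rho_not) * loss_fn(logits, y)
                     - rho_y * loss_fn(logits, -y)) / (1.0 - rho_pos - rho_neg)
        return corrected.mean()

    # Any surrogate can be plugged in; here, the logistic loss log(1 + exp(-y * t)).
    logistic = lambda t, y: F.softplus(-y * t)

    logits = torch.randn(8, requires_grad=True)
    noisy = torch.tensor([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0])
    loss = unbiased_loss(logistic, logits, noisy, rho_pos=0.2, rho_neg=0.1)
    loss.backward()  # gradient of a loss that, in expectation over the noise, equals the clean loss

Because the corrected loss is an unbiased estimate of the clean loss, its empirical minimizer converges to the clean-risk minimizer; this is the same flavor of guarantee the abstract claims for DLL with arbitrary surrogate losses.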