Learning With Proper Partial Labels

Neural Comput. 2022 Dec 14;35(1):58-81. doi: 10.1162/neco_a_01554.

Abstract

Partial-label learning is a kind of weakly supervised learning with inexact labels, where for each training example we are given a set of candidate labels instead of a single true label. Recently, various approaches to partial-label learning have been proposed under different generation models of the candidate label sets. However, these methods require relatively strong distributional assumptions on the generation models, and when the assumptions do not hold, their performance is not theoretically guaranteed. In this letter, we propose the notion of properness for partial labels. We show that this proper partial-label learning framework requires a weaker distributional assumption and includes many previous partial-label learning settings as special cases. We then derive a unified unbiased estimator of the classification risk. We prove that our estimator is risk consistent, and we also establish an estimation error bound. Finally, we validate the effectiveness of our algorithm through experiments.
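
For intuition, the sketch below spells out, in standard notation, the setting the abstract describes; the symbols (x_i, S_i), \ell, R, and \widehat{R} are introduced here for illustration only and are not the paper's own definitions, whose properness condition and estimator appear in the full text.

% Training data: each example carries a candidate label set S_i that contains the
% unobserved true label y_i, with K classes in total.
\mathcal{D} = \{(x_i, S_i)\}_{i=1}^{n}, \qquad y_i \in S_i \subseteq \{1, \dots, K\}

% Target quantity: the ordinary classification risk of a classifier f under a loss \ell.
R(f) = \mathbb{E}_{(x, y)}\bigl[\ell(f(x), y)\bigr]

% The letter constructs, under its properness condition on the candidate-set
% generation process, an estimator computed only from the observed pairs (x_i, S_i)
% that is unbiased for this risk,
\mathbb{E}\bigl[\widehat{R}(f)\bigr] = R(f),
% and establishes risk consistency together with an estimation error bound.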

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*