A Unifying Probabilistic Framework for Partially Labeled Data Learning

IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8036-8048. doi: 10.1109/TPAMI.2022.3228755. Epub 2023 Jun 5.

Abstract

Partially labeled data learning (PLDL), which includes partial label learning (PLL) and partial multi-label learning (PML), is widely used in modern data science. Researchers have typically constructed distinct, task-specific models for the PLL and PML classification scenarios. The main challenge in training classifiers for both is handling the ambiguity caused by noisy false-positive labels in the candidate label set. The state-of-the-art strategy for both scenarios is to perform disambiguation by identifying the ground-truth label(s) directly from the candidate label set, and existing approaches fall into two categories: 'the identifying method' and 'the embedding method'. However, both kinds of methods rely on hand-designed heuristic modeling under considerations such as feature/label correlations, with no theoretical interpretation. Instead of adopting heuristic or task-specific modeling, we propose a novel unifying framework, A Unifying Probabilistic Framework for Partially Labeled Data Learning (UPF-PLDL), which is derived from a clear probabilistic formulation and brings existing research on PLL and PML under a single theoretical interpretation grounded in information theory. Furthermore, the proposed UPF-PLDL unifies 'the identifying method' and 'the embedding method' into one integrated framework that naturally incorporates feature and label correlations. Comprehensive experiments on synthetic and real-world datasets for both PLL and PML scenarios clearly demonstrate the superiority of the derived framework.
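As background for the setting the abstract describes, a minimal sketch of partial label learning follows: each instance comes with a candidate label set that contains the hidden ground-truth label plus noisy false positives, and an identification-style method iteratively concentrates confidence on one candidate. The function name, toy data, and nearest-centroid update below are illustrative assumptions only; they are not the UPF-PLDL method.

```python
import numpy as np

def pll_disambiguate(X, candidates, n_classes, n_iters=10):
    """Toy identification-style disambiguation for partial label learning.

    X: (n, d) feature matrix; candidates: per-instance sets of candidate
    labels, each containing the ground truth plus possible false positives.
    Starts from uniform confidence over each candidate set, then alternates:
    (1) fit class centroids from the current confidences, and
    (2) move each instance's confidence to its nearest candidate centroid.
    Returns the identified label per instance.
    """
    n, _ = X.shape
    # Uniform confidence over candidate labels (a common PLL initialization).
    W = np.zeros((n, n_classes))
    for i, cs in enumerate(candidates):
        W[i, list(cs)] = 1.0 / len(cs)
    for _ in range(n_iters):
        # Weighted class centroids from current label confidences.
        centroids = (W.T @ X) / np.maximum(W.sum(axis=0)[:, None], 1e-12)
        # Reassign all confidence to the nearest candidate label's centroid.
        W_new = np.zeros_like(W)
        for i, cs in enumerate(candidates):
            cs = list(cs)
            dists = [np.linalg.norm(X[i] - centroids[c]) for c in cs]
            W_new[i, cs[int(np.argmin(dists))]] = 1.0
        if np.allclose(W, W_new):  # converged: identifications stable
            break
        W = W_new
    return W.argmax(axis=1)

# Two well-separated clusters; some instances carry a distractor label.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
candidates = [{0, 1}, {0}, {0, 1}, {1}]
print(pll_disambiguate(X, candidates, n_classes=2))  # → [0 0 1 1]
```

With well-separated clusters, the ambiguous instances (those with candidate set {0, 1}) are pulled to the centroid of their true class, which is the basic intuition behind the identifying-method family the abstract refers to.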