Multilabel Ranking With Inconsistent Rankers

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5211-5224. doi: 10.1109/TPAMI.2021.3070709. Epub 2022 Aug 4.

Abstract

While most existing multilabel ranking methods assume that a single objective label ranking is available for each instance in the training set, this paper deals with the more common case where each instance is associated only with subjective, inconsistent rankings from multiple rankers. Two ranking methods are proposed, from the perspectives of instances and of rankers, respectively. The first method, Instance-oriented Preference Distribution Learning (IPDL), learns a latent preference distribution for each instance: it generates a common preference distribution that is most compatible with all the personal rankings, and then learns a mapping from instances to preference distributions. The second method, Ranker-oriented Preference Distribution Learning (RPDL), leverages the interpersonal inconsistency among rankers to learn a unified model from the personal preference distribution models of all rankers. The two methods are applied to a natural scene image dataset and the 3D facial expression dataset BU_3DFE. Experimental results show that IPDL and RPDL effectively incorporate the information given by the inconsistent rankers and perform remarkably better than the state-of-the-art multilabel ranking algorithms they are compared with.
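To make the common-preference-distribution step concrete, below is a minimal, hypothetical Python sketch, not the paper's actual IPDL formulation: it fits a Plackett-Luce score vector to the inconsistent rankings of a single instance by gradient ascent on the joint log-likelihood, then normalizes the scores into a preference distribution. The function name, learning rate, and iteration count are illustrative assumptions.

import numpy as np

def fit_preference_distribution(rankings, n_labels, lr=0.1, n_iters=500):
    """Fit a common preference distribution to inconsistent rankings.

    rankings: list of label-index sequences, each ordered best to worst.
    Illustrative stand-in for IPDL's common-distribution step, using a
    Plackett-Luce model rather than the paper's own objective.
    """
    s = np.zeros(n_labels)                  # latent utility score per label
    for _ in range(n_iters):
        grad = np.zeros(n_labels)
        for pi in rankings:
            pi = np.asarray(pi)
            for k in range(len(pi) - 1):
                suffix = pi[k:]             # labels still unranked at step k
                p = np.exp(s[suffix] - s[suffix].max())
                p /= p.sum()                # model's choice probabilities
                grad[pi[k]] += 1.0          # observed choice at this step
                grad[suffix] -= p           # expected choice under the model
        s += lr * grad / len(rankings)      # averaged gradient ascent step
    d = np.exp(s - s.max())
    return d / d.sum()                      # normalized preference distribution

# Example: three rankers give inconsistent rankings over four labels.
rankings = [[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 2, 3]]
print(fit_preference_distribution(rankings, n_labels=4).round(3))

The resulting distribution concentrates mass on labels most rankers place early, which is the sense in which the common distribution is "most compatible" with all personal rankings; a subsequent regression from instance features to such distributions would complete the instance-to-distribution mapping the abstract describes.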

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Learning*