Evaluating Classification Model Against Bayes Error Rate

IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9639-9653. doi: 10.1109/TPAMI.2023.3240194. Epub 2023 Jun 30.

Abstract

For a classification task, we usually select an appropriate classifier via model selection. How can we evaluate whether the selected classifier is optimal? One can answer this question via the Bayes error rate (BER). Unfortunately, estimating the BER is a fundamental conundrum. Most existing BER estimators focus on giving upper and lower bounds on the BER. However, it is hard to judge whether the selected classifier is optimal based on these bounds. In this article, we aim to learn the exact BER instead of bounds on it. The core of our method is to transform the problem of calculating the BER into a noise-recognition problem. Specifically, we define a type of noise called Bayes noise and prove that the proportion of Bayes noisy samples in a data set is statistically consistent with the BER of the data set. To recognize the Bayes noisy samples, we present a two-part method: first selecting reliable samples based on percolation theory, and then employing a label propagation algorithm to recognize the Bayes noisy samples based on the selected reliable ones. The superiority of the proposed method over existing BER estimators is verified on extensive synthetic, benchmark, and image data sets.
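The key identity behind the abstract — that the fraction of samples whose observed label disagrees with the Bayes-optimal label equals the BER — can be illustrated on a toy synthetic problem where the BER is known in closed form. The sketch below is not the paper's method (which recognizes Bayes noisy samples without access to the true posterior, via percolation-based reliable-sample selection and label propagation); it only demonstrates the consistency claim using an oracle Bayes classifier on two unit-variance Gaussians:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Two equiprobable classes: x | y=0 ~ N(-mu, 1), x | y=1 ~ N(+mu, 1)
mu = 1.0
n = 200_000
y = rng.integers(0, 2, n)
x = rng.normal(loc=np.where(y == 1, mu, -mu), scale=1.0)

# Oracle Bayes-optimal classifier: by symmetry, threshold at x = 0
bayes_pred = (x > 0).astype(int)

# "Bayes noisy" samples: observed label disagrees with the Bayes-optimal label.
# Their proportion is a consistent estimate of the BER.
noisy_fraction = float(np.mean(y != bayes_pred))

# Closed-form BER for this setup: Phi(-mu), Phi = standard normal CDF
true_ber = 0.5 * (1 + erf(-mu / sqrt(2)))

print(f"noisy fraction: {noisy_fraction:.4f}, true BER: {true_ber:.4f}")
```

With `mu = 1.0` both numbers come out near 0.159; the paper's contribution is recovering this proportion when the posterior (and hence the oracle classifier) is unknown.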

MeSH terms

  • Algorithms*
  • Bayes Theorem