Towards More Reliable Confidence Estimation

IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13152-13169. doi: 10.1109/TPAMI.2023.3291676. Epub 2023 Oct 3.

Abstract

Confidence estimation, which aims to assess the trustworthiness of a model's predictions during deployment, has recently received much research attention due to its importance for deploying deep models safely. Previous works have outlined two characteristics that a reliable confidence estimation model should possess: the ability to perform well under label imbalance and the ability to handle various out-of-distribution inputs. In this work, we propose a meta-learning framework that simultaneously improves both characteristics of a confidence estimation model. Specifically, we first construct virtual training and testing sets with intentionally designed distribution differences between them. Our framework then trains the confidence estimation model on these sets through a virtual training and testing scheme, leading it to learn knowledge that generalizes to diverse distributions. In addition, we equip our framework with a modified meta-optimization rule that drives the confidence estimator toward flat meta minima. We demonstrate the effectiveness of our framework through extensive experiments on monocular depth estimation, image classification, and semantic segmentation.
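The abstract does not give implementation details, but the virtual training and testing scheme it describes follows the general shape of a MAML-style meta-learning loop. Below is a minimal PyTorch sketch of such a loop, not the authors' method: `ConfidenceHead`, `make_virtual_split`, the inner learning rate, and the binary cross-entropy correctness loss are all illustrative assumptions, and the paper's actual construction of the distribution shift and its modified flat-minima meta-optimization rule are not reproduced here.

```python
# A minimal sketch (not the authors' code) of a virtual train/test
# meta-learning scheme of the kind described in the abstract.
import torch
import torch.nn as nn
from torch.func import functional_call

class ConfidenceHead(nn.Module):
    """Toy confidence estimator: maps features to a confidence score in (0, 1)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

def confidence_loss(conf, correct):
    # Binary cross-entropy between predicted confidence and prediction correctness.
    return nn.functional.binary_cross_entropy(conf, correct)

def make_virtual_split(feats, correct):
    # Assumed stand-in for the paper's intentionally designed distribution
    # difference: here we simply split one batch into virtual train/test halves.
    idx = torch.randperm(feats.size(0))
    half = feats.size(0) // 2
    tr, te = idx[:half], idx[half:]
    return (feats[tr], correct[tr]), (feats[te], correct[te])

model = ConfidenceHead()
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
inner_lr = 0.1  # illustrative choice

for step in range(100):
    feats = torch.randn(64, 16)                   # stand-in features
    correct = torch.randint(0, 2, (64,)).float()  # 1 if the base prediction was right
    (xtr, ytr), (xte, yte) = make_virtual_split(feats, correct)

    # Virtual training: one gradient step on the virtual training set.
    params = dict(model.named_parameters())
    tr_loss = confidence_loss(functional_call(model, params, (xtr,)), ytr)
    grads = torch.autograd.grad(tr_loss, list(params.values()), create_graph=True)
    fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Virtual testing: evaluate the adapted parameters on the shifted test set,
    # so the meta-gradient rewards updates that generalize across the shift.
    te_loss = confidence_loss(functional_call(model, fast, (xte,)), yte)

    meta_opt.zero_grad()
    (tr_loss + te_loss).backward()
    meta_opt.step()
```

The key design point this sketch illustrates is that the outer (meta) gradient flows through the inner update, so the estimator is optimized not just to fit the virtual training distribution but to transfer to the differently distributed virtual test set.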