Learnable Distribution Calibration for Few-Shot Class-Incremental Learning

IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12699-12706. doi: 10.1109/TPAMI.2023.3273291. Epub 2023 Sep 5.

Abstract

Few-shot class-incremental learning (FSCIL) faces the challenges of memorizing old class distributions and estimating new class distributions given few training samples. In this study, we propose a learnable distribution calibration (LDC) approach to systematically solve these two challenges within a unified framework. LDC is built upon a parameterized calibration unit (PCU), which initializes biased distributions for all classes based on classifier vectors (memory-free) and a single covariance matrix. The covariance matrix is shared by all classes, so the memory cost remains fixed. During base training, PCU is endowed with the ability to calibrate biased distributions by recurrently updating sampled features under the supervision of real distributions. During incremental learning, PCU recovers distributions for old classes to avoid 'forgetting' and estimates distributions and augments samples for new classes to alleviate the 'over-fitting' caused by the biased distributions of few-shot samples. LDC is theoretically plausible, as it can be formulated as a variational inference procedure. It also improves FSCIL's flexibility, since the training procedure requires no class-similarity prior. Experiments on the CUB200, CIFAR100, and mini-ImageNet datasets show that LDC outperforms the state of the art by 4.64%, 1.98%, and 3.97%, respectively. LDC's effectiveness is also validated in few-shot learning scenarios.
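
To make the mechanism concrete, below is a minimal sketch of the distribution-calibration idea described in the abstract. It assumes a diagonal shared covariance and a GRU-based recurrent refiner; the class and method names (PCU, sample, calibrate), the residual update rule, and the number of refinement steps are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a parameterized calibration unit (PCU).
# Assumptions: diagonal shared covariance, GRUCell refiner, residual updates.
import torch
import torch.nn as nn


class PCU(nn.Module):
    """Initializes a biased Gaussian per class from its classifier vector
    (mean, memory-free) and a single covariance shared by all classes,
    then recurrently refines features sampled from that Gaussian."""

    def __init__(self, feat_dim: int, num_steps: int = 3):
        super().__init__()
        # One diagonal log-variance shared by all classes, so memory cost
        # stays fixed as new classes are added.
        self.shared_log_var = nn.Parameter(torch.zeros(feat_dim))
        # A small recurrent refiner that updates sampled features step by step.
        self.refiner = nn.GRUCell(feat_dim, feat_dim)
        self.num_steps = num_steps

    def sample(self, class_mean: torch.Tensor, n: int) -> torch.Tensor:
        """Draw n features from the biased class distribution N(mean, Sigma)."""
        std = torch.exp(0.5 * self.shared_log_var)
        eps = torch.randn(n, class_mean.numel())
        return class_mean.unsqueeze(0) + eps * std

    def calibrate(self, feats: torch.Tensor) -> torch.Tensor:
        """Recurrently update sampled features; during base training this
        output would be supervised against features from the real
        distribution (loss not shown here)."""
        h = torch.zeros_like(feats)
        for _ in range(self.num_steps):
            h = self.refiner(feats, h)
            feats = feats + h  # residual refinement at each step
        return feats


# Usage sketch: regenerate old-class features from a classifier vector
# (avoiding 'forgetting' without storing exemplars) or augment a few-shot
# new class with calibrated samples.
pcu = PCU(feat_dim=512)
classifier_vector = torch.randn(512)  # stands in for a learned class weight
augmented = pcu.calibrate(pcu.sample(classifier_vector, n=64))
```

Sharing one covariance across all classes is what keeps the memory footprint constant as sessions accumulate; per-class statistics would grow linearly with the number of classes.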