Uncertainty Estimation With Neural Processes for Meta-Continual Learning

IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):6887-6897. doi: 10.1109/TNNLS.2022.3215633. Epub 2023 Oct 5.

Abstract

The ability to estimate uncertainties in evolving data streams has become equally, if not more, crucial than building a static predictor. For instance, during the pandemic, a model should account for sources of uncertainty such as governmental policies, meteorological features, and vaccination schedules. Neural process families (NPFs) have recently shed light on predicting such uncertainties by bridging Gaussian processes (GPs) and neural networks (NNs). Their ability to output both mean predictions and the associated variances, i.e., uncertainties, makes them suitable for prediction with insufficient data, as in meta-learning or few-shot learning. However, existing models have not addressed continual learning, which imposes a stricter constraint on data access. To this end, we introduce a new NPF member, meta-continual learning with neural processes (MCLNP), for uncertainty estimation. We enable two levels of uncertainty estimation: the local uncertainty at specific points and the global uncertainty p(z) that represents the evolution of the function in dynamic environments. To facilitate continual learning, we hypothesize that previous knowledge can be applied to the current task and hence adopt a coreset as a memory buffer to alleviate catastrophic forgetting. The relationships between the degree of global uncertainty and both intratask diversity and model complexity are discussed. We estimate prediction uncertainties under multiple types of evolution, including abrupt, gradual, and recurrent shifts. The applications encompass meta-continual learning on 1-D and 2-D datasets and a novel spatial-temporal COVID dataset. The results show that our method outperforms the baselines on likelihood and can rebound quickly even for heavily evolved data streams.
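The two-level uncertainty structure and the coreset buffer mentioned in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: untrained random linear maps stand in for the encoder/decoder networks, and all weight names, dimensions, and the `update_coreset` helper are assumptions. A latent-variable neural process aggregates context points into a representation `r`, maps it to a distribution q(z) (the global uncertainty), and decodes a sampled `z` with the target inputs into a per-point mean and variance (the local uncertainty); the coreset retains a bounded sample of past context points for replay.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x_ctx, y_ctx, W_e):
    """Encode each (x, y) context pair and aggregate by the mean (permutation-invariant)."""
    pairs = np.concatenate([x_ctx, y_ctx], axis=1)        # (n_ctx, 2)
    r = np.tanh(pairs @ W_e)                              # (n_ctx, d)
    return r.mean(axis=0)                                 # (d,)

def latent(r, W_mu, W_sig):
    """Map the aggregated representation to q(z) = N(mu, sigma^2): the *global* uncertainty."""
    mu = r @ W_mu
    sigma = 0.1 + 0.9 / (1.0 + np.exp(-(r @ W_sig)))      # sigmoid-scaled, bounded in (0.1, 1.0)
    return mu, sigma

def decode(x_tgt, z, W_d):
    """Predict a mean and a variance per target point: the *local* uncertainty."""
    h = np.concatenate([x_tgt, np.tile(z, (len(x_tgt), 1))], axis=1)
    out = h @ W_d                                         # (n_tgt, 2)
    return out[:, :1], np.exp(out[:, 1:])                 # mean, variance > 0

def update_coreset(x_core, y_core, x_new, y_new, k, rng):
    """Merge new context points into the coreset, keeping at most k by random subsampling."""
    x_all = np.concatenate([x_core, x_new])
    y_all = np.concatenate([y_core, y_new])
    if len(x_all) <= k:
        return x_all, y_all
    idx = rng.choice(len(x_all), size=k, replace=False)
    return x_all[idx], y_all[idx]

# Toy dimensions and untrained weights (illustrative only).
d = 4
W_e = rng.normal(size=(2, d))
W_mu, W_sig = rng.normal(size=(d, d)), rng.normal(size=(d, d))
W_d = rng.normal(size=(1 + d, 2)) * 0.1

x_ctx, y_ctx = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))
x_tgt = np.linspace(-1, 1, 8).reshape(-1, 1)

r = encode(x_ctx, y_ctx, W_e)                             # aggregate the context set
mu_z, sig_z = latent(r, W_mu, W_sig)                      # global uncertainty q(z)
z = mu_z + sig_z * rng.normal(size=d)                     # sample one function realization
mean, var = decode(x_tgt, z, W_d)                         # local uncertainty per target point

# Continual-learning step: fold the current task's context into the coreset.
x_core, y_core = update_coreset(np.empty((0, 1)), np.empty((0, 1)),
                                x_ctx, y_ctx, k=3, rng=rng)
```

Sampling several `z` values and decoding each would show how the spread of q(z) translates into diverging function predictions, which is the sense in which p(z) captures function evolution across tasks.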