Prototype-Guided Memory Replay for Continual Learning

IEEE Trans Neural Netw Learn Syst. 2023 Mar 3:PP. doi: 10.1109/TNNLS.2023.3246049. Online ahead of print.

Abstract

Continual learning (CL) is a machine learning paradigm that accumulates knowledge while learning tasks sequentially. The main challenge in CL is catastrophic forgetting of previously seen tasks, which occurs due to shifts in the data distribution. To retain knowledge, existing CL models often save some past examples and revisit them while learning new tasks. As a result, the memory of saved samples grows dramatically as more tasks are seen. To address this issue, we introduce an efficient CL method that stores only a few samples while achieving good performance. Specifically, we propose a dynamic prototype-guided memory replay (PMR) module, in which synthetic prototypes serve as knowledge representations and guide the sample selection for memory replay. This module is integrated into an online meta-learning (OML) model for efficient knowledge transfer. We conduct extensive experiments on CL benchmark text classification datasets and examine the effect of training set order on the performance of CL models. The experimental results demonstrate the superiority of our approach in terms of accuracy and efficiency.
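The abstract does not give the exact selection criterion, but the core idea of prototype-guided memory selection can be sketched as follows: compute one prototype per class (e.g., the mean embedding) and keep the few samples whose embeddings lie closest to their class prototype. This is a minimal illustrative sketch, not the paper's actual PMR module; the mean-embedding prototype and nearest-to-prototype rule are assumptions, and `select_memory_samples` is a hypothetical helper name.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Compute one prototype (here, the mean embedding) per class."""
    return {c: embeddings[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def select_memory_samples(embeddings, labels, k):
    """Select k samples per class for the replay memory, keeping the
    samples whose embeddings are nearest to their class prototype
    (an assumed selection rule for illustration)."""
    protos = class_prototypes(embeddings, labels)
    keep = []
    for c, proto in protos.items():
        idx = np.where(labels == c)[0]
        dists = np.linalg.norm(embeddings[idx] - proto, axis=1)
        keep.extend(idx[np.argsort(dists)[:k]].tolist())
    return sorted(keep)

# Toy example: two classes of sentence embeddings in 2-D.
emb = np.array([[0., 0.], [0.2, 0.], [1., 0.],
                [5., 5.], [5.2, 5.], [9., 9.]])
lab = np.array([0, 0, 0, 1, 1, 1])
memory = select_memory_samples(emb, lab, k=1)  # one sample per class
```

A budget of `k` samples per class keeps memory size constant regardless of how many examples a task contains, which is the efficiency property the abstract emphasizes.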