Tf-GCZSL: Task-free generalized continual zero-shot learning

Neural Netw. 2022 Nov;155:487-497. doi: 10.1016/j.neunet.2022.08.034. Epub 2022 Sep 6.

Abstract

Learning continually from a stream of training data or tasks, with the ability to recognize unseen classes through a zero-shot learning framework, is gaining attention in the literature; this setting is referred to as continual zero-shot learning (CZSL). Existing CZSL methods require clear task-boundary information during training, which is not practically feasible. This paper proposes a task-free generalized CZSL (Tf-GCZSL) method with short-term/long-term memory to remove the requirement of task boundaries during training. A variational autoencoder (VAE) handles the fundamental ZSL task, while the short-term and long-term memories eliminate the need for task boundaries in the CZSL framework. Further, the proposed Tf-GCZSL method combines experience replay with dark knowledge distillation and regularization to mitigate catastrophic forgetting in the continual learning framework. Finally, Tf-GCZSL trains a fully connected classifier on synthetic features generated in the latent space of the VAE. The performance of the proposed Tf-GCZSL is evaluated in the existing task-agnostic prediction setting and the proposed task-free setting for generalized CZSL on five ZSL benchmark datasets. The results indicate that Tf-GCZSL improves prediction by at least 12%, 1%, 3%, 4%, and 3% over existing state-of-the-art and baseline methods on the CUB, aPY, AWA1, AWA2, and SUN datasets, respectively, in both settings (task-agnostic prediction and task-free learning). The source code is available at https://github.com/Chandan-IITI/Tf-GCZSL.
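The abstract names three main components: a VAE that models visual features conditioned on class attributes, short-term/long-term replay memories, and dark knowledge distillation against an earlier copy of the model. The PyTorch sketch below illustrates how such pieces typically fit together; the layer sizes, reservoir capacity, and temperature are illustrative assumptions, not the authors' implementation (the released code at the linked repository is authoritative).

    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CVAE(nn.Module):
        # Conditional VAE: encodes a visual feature x together with its class
        # attribute vector a, and decodes from (z, a). Synthetic features for a
        # classifier can then be drawn from the latent space given any attribute
        # vector, including those of unseen classes. Dimensions are assumptions.
        def __init__(self, x_dim=2048, a_dim=312, z_dim=64, h_dim=512):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim + a_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim + a_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x, a):
            h = self.enc(torch.cat([x, a], dim=1))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return self.dec(torch.cat([z, a], dim=1)), mu, logvar

    def vae_loss(x_hat, x, mu, logvar):
        # Standard VAE objective: reconstruction error plus KL divergence to N(0, I).
        recon = F.mse_loss(x_hat, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kld

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # "Dark knowledge" distillation: match the temperature-softened outputs
        # of a frozen copy of the model trained on earlier data (T is assumed).
        soft_teacher = F.softmax(teacher_logits / T, dim=1)
        log_student = F.log_softmax(student_logits / T, dim=1)
        return F.kl_div(log_student, soft_teacher, reduction="batchmean") * T * T

    class ReservoirMemory:
        # Task-free replay buffer: reservoir sampling needs no task boundaries,
        # standing in here for the paper's short-term/long-term memories.
        def __init__(self, capacity=1000):
            self.capacity, self.items, self.n_seen = capacity, [], 0

        def add(self, item):
            self.n_seen += 1
            if len(self.items) < self.capacity:
                self.items.append(item)
            else:
                j = random.randrange(self.n_seen)
                if j < self.capacity:
                    self.items[j] = item

In such a setup, each incoming batch would be mixed with samples replayed from the memory, the VAE and distillation losses combined, and a fully connected classifier (e.g., nn.Linear(z_dim, num_classes)) fit on latent features synthesized from the attributes of both seen and unseen classes.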

Keywords: Continual learning; Continual zero-shot learning; Experience replay; VAE; Zero-shot learning.

MeSH terms

  • Learning*
  • Machine Learning*
  • Memory, Long-Term