PatchMix Augmentation to Identify Causal Features in Few-Shot Learning

IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7639-7653. doi: 10.1109/TPAMI.2022.3223784. Epub 2023 May 5.

Abstract

Few-shot learning (FSL) aims to transfer knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information. It is an important research problem with great practical value in real-world applications. Despite the extensive previous efforts on few-shot learning, we emphasize that most existing methods do not take into account the distributional shift caused by sample selection bias in the FSL scenario. Such selection bias can induce spurious correlations between the causal features, which are causally and semantically related to the class label, and other non-causal features. Critically, the former are invariant across distributional changes, highly related to the classes of interest, and thus generalize well to novel classes, while the latter are not stable under changes in the distribution. To resolve this problem, we propose a novel data augmentation strategy, dubbed PatchMix, that breaks this spurious dependency by replacing patch-level information and supervision of the query images with those of random gallery images drawn from classes different from the query's. We theoretically show that such an augmentation mechanism, unlike existing ones, is able to identify the causal features. To make these features discriminative enough for classification, we further propose a Correlation-guided Reconstruction (CGR) module and a Hardness-Aware module for instance discrimination and easier discrimination between similar classes. Moreover, the framework can be adapted to the unsupervised FSL scenario. The utility of our method is demonstrated by the state-of-the-art results consistently achieved on several benchmarks, including miniImageNet, tieredImageNet, CIFAR-FS, CUB, Cars, Places and Plantae, across single-domain, cross-domain and unsupervised FSL settings. By studying the intra-class variance of the learned features and visualizing them, we further show, both quantitatively and qualitatively, that these promising results stem from the method's effectiveness in learning causal features.
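To make the patch-level replacement idea concrete, the sketch below illustrates one plausible realization of the mechanism described in the abstract: a rectangular patch of each query image is overwritten with the corresponding region of a gallery image from a different class, and the supervision is mixed in proportion to the replaced area. The function name, the CutMix-style rectangular patch sampling, and the Beta-distributed mixing ratio are assumptions for illustration, not the paper's exact specification.

```python
import torch

def patchmix(query_imgs, query_labels, gallery_imgs, gallery_labels,
             num_classes, alpha=1.0):
    """Illustrative patch-level mixing (assumed scheme, not the paper's exact
    recipe): paste a random rectangular patch from a gallery image of a
    different class into each query image and mix the one-hot labels in
    proportion to the replaced area."""
    B, _, H, W = query_imgs.shape
    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # mixing ratio

    # Sample a rectangular patch whose area is roughly (1 - lam) of the image.
    cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = torch.randint(0, H, (1,)).item(), torch.randint(0, W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)

    mixed = query_imgs.clone()
    one_hot = torch.eye(num_classes, device=query_imgs.device)
    soft_labels = one_hot[query_labels].clone()

    for i in range(B):
        # Pick a gallery image whose class differs from the query's class.
        candidates = (gallery_labels != query_labels[i]).nonzero(as_tuple=True)[0]
        j = candidates[torch.randint(len(candidates), (1,))].item()
        mixed[i, :, y1:y2, x1:x2] = gallery_imgs[j, :, y1:y2, x1:x2]

        # Mix supervision in proportion to the actually replaced area.
        area = (y2 - y1) * (x2 - x1) / (H * W)
        soft_labels[i] = ((1 - area) * one_hot[query_labels[i]]
                          + area * one_hot[gallery_labels[j]])

    return mixed, soft_labels
```

Because the pasted patch comes from a different class, the label signal attached to the query's background and context is broken, which is the intuition behind how such mixing can suppress spurious, non-causal correlations.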