Feature extraction framework based on contrastive learning with adaptive positive and negative samples

Neural Netw. 2022 Dec:156:244-257. doi: 10.1016/j.neunet.2022.09.029. Epub 2022 Oct 3.

Abstract

Feature extraction is an efficient approach to alleviating the curse of dimensionality in high-dimensional data. As a popular self-supervised learning paradigm, contrastive learning has recently garnered considerable attention. In this study, we propose a unified feature extraction framework based on contrastive learning with adaptive positive and negative samples (CL-FEFA) that is suitable for unsupervised, supervised, and semi-supervised feature extraction. CL-FEFA adaptively constructs positive and negative samples from the results of feature extraction, which makes them more appropriate and accurate. Discriminative features are then extracted based on these adaptive positive and negative samples, making intra-class embedded samples more compact and inter-class embedded samples more dispersed. Because positive and negative samples are constructed dynamically from the latent structure of the subspace samples, the framework is more robust to noisy data. Furthermore, it is proven that CL-FEFA maximizes the mutual information of positive samples, which captures non-linear statistical dependencies between similar samples in the latent structure space and can therefore act as a measure of true dependence. This also provides theoretical support for its advantages in feature extraction. Numerical experiments demonstrate that the proposed framework has strong advantages over both traditional feature extraction methods and contrastive learning methods.
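The abstract does not give CL-FEFA's exact objective, but the two ideas it describes — an InfoNCE-style loss that maximizes mutual information between positive pairs, and positives chosen adaptively from the current embedding rather than fixed in advance — can be illustrated with a minimal NumPy sketch. The nearest-neighbour positive selection here is a simplified stand-in for the paper's structure-based construction; all function names are for illustration only.

```python
import numpy as np

def adaptive_positives(Z):
    """Adaptively pick each sample's positive as its nearest neighbour
    (by cosine similarity) in the current embedding space Z of shape (n, d).
    A simplified stand-in for the structure-based construction in CL-FEFA."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T
    np.fill_diagonal(sim, -np.inf)          # a sample cannot be its own positive
    return sim.argmax(axis=1)

def infonce_loss(Z, pos_idx, tau=0.5):
    """InfoNCE contrastive loss: for each anchor i, treat Z[pos_idx[i]] as the
    positive and all other samples as negatives. Minimizing this loss is a
    lower bound on maximizing mutual information between positive pairs."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T / tau                   # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)          # exclude self-pairs from the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(Z)), pos_idx])
```

In a full pipeline these two steps would alternate: extract features, re-select positives and negatives from the new embedding, then update the feature extractor by minimizing the loss, so the pairs adapt as the representation improves.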

Keywords: Contrastive learning; Dimension reduction; Feature extraction; Mutual information.

MeSH terms

  • Noise*