Micro-Supervised Disturbance Learning: A Perspective of Representation Probability Distribution

IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7542-7558. doi: 10.1109/TPAMI.2022.3225461. Epub 2023 May 5.

Abstract

Existing representation learning methods based on Euclidean distance are shown to be unstable under a broad set of conditions. Furthermore, the scarcity and high cost of labels prompt us to explore more expressive representation learning methods that depend on as few labels as possible. To address these issues, we first introduce a small-perturbation idea into representation learning models based on the representation probability distribution. Positive small-perturbation information (SPI), which depends on only two labels per cluster, is used to stimulate the representation probability distribution, and two variant models are then proposed to fine-tune the expected representation distribution of the Restricted Boltzmann Machine (RBM): the Micro-supervised Disturbance Gaussian-binary RBM (Micro-DGRBM) and the Micro-supervised Disturbance RBM (Micro-DRBM). During Contrastive Divergence (CD) learning, the Kullback-Leibler (KL) divergence of the SPI is minimized within the same cluster, encouraging the representation probability distributions to become more similar; conversely, the KL divergence of the SPI is maximized across different clusters, forcing the representation probability distributions to become more dissimilar. To explore the representation learning capability under the continuous stimulation of the SPI, we present a deep Micro-supervised Disturbance Learning (Micro-DL) framework based on the Micro-DGRBM and Micro-DRBM models and compare it with a similar deep structure that receives no external stimulation. Experimental results demonstrate that the proposed deep Micro-DL architecture outperforms the baseline method, the most closely related shallow models, and deep clustering frameworks.
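The abstract gives no explicit formulas, so the following is only a minimal NumPy sketch of the core idea under stated assumptions: in a Gaussian-binary RBM the hidden representation given a visible vector is a factorized Bernoulli distribution, and the SPI term is a KL divergence between two such distributions, signed so that it is minimized for a same-cluster pair and maximized for a different-cluster pair. The names (`hidden_probs`, `spi_disturbance`, the weight `lam`) are hypothetical illustrations, not the paper's notation, and the unit-variance visible units are an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_probs(v, W, b):
    """P(h = 1 | v) for a Gaussian-binary RBM, assuming unit-variance visibles."""
    return sigmoid(v @ W + b)

def bernoulli_kl(p, q, eps=1e-8):
    """KL divergence between two factorized Bernoulli hidden distributions."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

def spi_disturbance(v_a, v_b, same_cluster, W, b, lam=0.1):
    """Signed SPI term for a labelled pair: the positive sign pulls same-cluster
    representations together (KL minimized); the negative sign pushes
    different-cluster representations apart (KL maximized)."""
    p = hidden_probs(v_a, W, b)
    q = hidden_probs(v_b, W, b)
    return lam * (1.0 if same_cluster else -1.0) * bernoulli_kl(p, q)

# Toy setup: 5 visible units, 3 hidden units, one labelled pair per cluster.
n_vis, n_hid = 5, 3
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b = np.zeros(n_hid)
v1, v2 = rng.normal(size=n_vis), rng.normal(size=n_vis)

# In the actual models, the gradient of this term would be folded into the
# CD-1 weight update at each training step; here we only evaluate it.
print(spi_disturbance(v1, v2, same_cluster=True, W=W, b=b))
```

Under these assumptions, the SPI term acts as a micro-supervised regularizer on top of the usual unsupervised CD objective, which is why only two labels per cluster are needed to stimulate the representation distribution.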