Pick-and-Place Transform Learning for Fast Multi-View Clustering

IEEE Trans Image Process. 2024;33:1272-1284. doi: 10.1109/TIP.2024.3357257. Epub 2024 Feb 19.

Abstract

To handle large-scale data, anchor-based multi-view clustering methods have grown in popularity owing to their linear complexity in the number of samples. However, existing approaches overlook two aspects. 1) They aim to learn a shared affinity matrix using only the local information from each single view while ignoring the global information across all views, which may weaken their ability to capture complementary information. 2) They do not consider removing feature redundancy, which may impair their ability to depict the real sample relationships. To this end, we propose PPTL, a novel fast multi-view clustering method via pick-and-place transform learning, which can quickly capture informative global features that characterize the sample relationships. Specifically, PPTL first concatenates all the views along the feature direction to produce a global matrix. Since this global matrix is redundant, we design a pick-and-place transform with l2,p-norm regularization that discards the uninformative features and thereby constructs a compact global representation matrix. By performing anchor-based subspace clustering on this compact representation, PPTL learns a consensus skinny affinity matrix with a discriminative clustering structure. Extensive experiments on small- to large-scale datasets demonstrate that our method is not only faster but also outperforms state-of-the-art methods on the majority of datasets.
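The pipeline described in the abstract (view concatenation, feature pruning under an l2,p-norm criterion, then anchor-based subspace clustering) can be illustrated with a minimal NumPy sketch. This is not the authors' PPTL algorithm: in the paper the pick-and-place transform is learned jointly with the affinity, whereas here a simple column-norm ranking stands in for it, and random anchors with a ridge-regression fit stand in for the anchor-based subspace step. All function names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def concat_views(views):
    """Stack per-view feature matrices (each n x d_v) along the feature axis."""
    return np.hstack(views)

def select_features(X, k):
    """Toy stand-in for the learned pick-and-place transform: rank features
    by column l2 norm (an l2,p-style sparsity surrogate) and keep the top k."""
    scores = np.linalg.norm(X, axis=0)
    keep = np.argsort(scores)[::-1][:k]
    return X[:, keep]

def anchor_affinity(X, m, lam=1e-2):
    """Build a skinny n x m affinity by ridge-regressing each sample onto
    m randomly sampled anchors (a crude proxy for anchor-based subspace
    clustering; the paper learns this matrix with clustering constraints)."""
    n = X.shape[0]
    A = X[rng.choice(n, size=m, replace=False)]              # m x d anchors
    # Closed form of min_Z ||X - Z A||_F^2 + lam ||Z||_F^2.
    Z = X @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(m))   # n x m
    return Z

# Two synthetic views of the same 200 samples.
views = [rng.normal(size=(200, 40)), rng.normal(size=(200, 60))]
X = concat_views(views)               # 200 x 100 global matrix
X_compact = select_features(X, k=50)  # compact global representation
Z = anchor_affinity(X_compact, m=20)  # skinny affinity, 200 x 20
print(Z.shape)
```

Because Z has only m columns, downstream steps such as a spectral decomposition of Z scale linearly in the number of samples, which is the source of the speedups the abstract claims for anchor-based methods.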