Sequential multi-view subspace clustering

Neural Netw. 2022 Nov:155:475-486. doi: 10.1016/j.neunet.2022.09.007. Epub 2022 Sep 12.

Abstract

Self-representation-based subspace learning has proven effective in many applications, but most existing methods ignore the differences between views. As a result, the learned self-representation matrix cannot well characterize the clustering structure. Moreover, some methods involve an undesired weight vector in the tensor nuclear norm, which reduces the flexibility of the algorithm in practical applications. To address these problems, we present a tensorized multi-view subspace clustering method. Specifically, our method employs matrix factorization to decompose the self-representation matrix into an orthogonal projection matrix and an affinity matrix. We also impose ℓ1,2-norm regularization on the affinity representation to characterize the cluster structure. Moreover, the proposed method uses a weighted tensor Schatten p-norm to exploit the higher-order structure and complementary information embedded in multi-view data, allocating an ideal weight to each view automatically without additional weight or penalty parameters. We apply an adaptive loss function to the model to maintain robustness to outliers and efficiently learn the data distribution. Extensive experiments on different datasets show that our method outperforms other state-of-the-art multi-view subspace clustering methods.
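The weighted tensor Schatten p-norm mentioned above is commonly defined through the t-SVD: take an FFT along the third (view-stacking) mode of the representation tensor, compute the SVD of each frontal slice in the Fourier domain, and sum the p-th powers of the singular values. Below is a minimal NumPy sketch of that quantity (raised to the p-th power, with the usual 1/n3 normalization). The per-slice `weights` argument is an illustrative assumption; the paper's method derives its view weights automatically, which is not reproduced here.

```python
import numpy as np

def weighted_tensor_schatten_p_norm(T, p=0.5, weights=None):
    """Sketch of a weighted tensor Schatten p-norm (to the p-th power)
    via the t-SVD.  `weights` holds one illustrative weight per frontal
    slice (slice = view); it stands in for the automatically allocated
    weights described in the abstract."""
    n3 = T.shape[2]
    if weights is None:
        weights = np.ones(n3)                # uniform weights by default
    Tf = np.fft.fft(T, axis=2)               # Fourier domain along mode 3
    total = 0.0
    for k in range(n3):
        # singular values of the k-th frontal slice in the Fourier domain
        sigma = np.linalg.svd(Tf[:, :, k], compute_uv=False)
        total += weights[k] * np.sum(sigma ** p)
    return total / n3                        # common 1/n3 normalization

# For p = 1 and unit weights this reduces to the (averaged) tensor
# nuclear norm, so it is positively homogeneous of degree 1.
Z = np.random.default_rng(0).standard_normal((5, 5, 3))
val = weighted_tensor_schatten_p_norm(Z, p=1.0)
```

Smaller values of p (0 < p < 1) penalize large singular values less aggressively than the nuclear norm, giving a tighter surrogate for tensor rank, which is the usual motivation for preferring the Schatten p-norm here.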

Keywords: Adaptive loss; Matrix factorization; Multi-view subspace clustering; Weighted tensor Schatten p-norm.

MeSH terms

  • Algorithms*
  • Cluster Analysis
  • Learning*