Subspace Clustering via Good Neighbors

IEEE Trans Pattern Anal Mach Intell. 2020 Jun;42(6):1537-1544. doi: 10.1109/TPAMI.2019.2913863. Epub 2019 Apr 30.

Abstract

Finding the informative subspaces of high-dimensional datasets is at the core of numerous applications in computer vision, where spectral-based subspace clustering is arguably the most widely studied approach due to its strong empirical performance. Such algorithms compute an affinity matrix by constructing a self-representation for each sample, using the other samples as a dictionary. Sparsity and connectivity of the self-representation play important roles in effective subspace clustering. However, optimizing both factors simultaneously is difficult because of their conflicting nature, and most existing methods are designed to address only one of them. In this paper, we propose a post-processing technique that optimizes both sparsity and connectivity by finding good neighbors. Good neighbors induce key connections among samples within a subspace: they not only have large affinity coefficients but are also strongly connected to each other. We reassign the coefficients of the good neighbors and eliminate the other entries to generate a new coefficient matrix. We show that a few good neighbors can effectively recover the subspace, and that the proposed post-processing step of finding good neighbors is complementary to most existing subspace clustering algorithms. Experiments on five benchmark datasets show that the proposed algorithm performs favorably against state-of-the-art methods at negligible additional computation cost.
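The abstract describes the post-processing step only at a high level. The following Python sketch illustrates one plausible interpretation under stated assumptions: "good neighbors" are approximated as mutual top-k entries of the self-representation matrix, and the reassignment is simplified to keeping their absolute coefficients. The function names, the mutual-top-k rule, and the parameter k are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal, hypothetical sketch of "good neighbor" post-processing on a
# self-representation matrix C (n x n), e.g. one produced by sparse subspace
# clustering. The mutual-top-k selection rule below is an assumption made for
# illustration; the paper's actual criterion may differ.

import numpy as np
from sklearn.cluster import SpectralClustering


def good_neighbor_postprocess(C, k=5):
    """Keep, for each sample, only neighbors that have large coefficients
    and are connected back to it; zero out all other entries."""
    n = C.shape[0]
    A = np.abs(C)
    np.fill_diagonal(A, 0.0)  # a sample is not its own neighbor

    # Top-k neighbors of each sample by coefficient magnitude.
    topk = np.argsort(-A, axis=1)[:, :k]
    mask = np.zeros_like(A, dtype=bool)
    rows = np.repeat(np.arange(n), k)
    mask[rows, topk.ravel()] = True

    # Retain only mutual (reciprocal) links as a proxy for "good neighbors",
    # then symmetrize so the result is a valid affinity matrix.
    good = mask & mask.T
    C_new = np.where(good, A, 0.0)
    return 0.5 * (C_new + C_new.T)


def cluster_with_good_neighbors(C, n_clusters, k=5):
    """Spectral clustering on the pruned affinity matrix."""
    W = good_neighbor_postprocess(C, k=k)
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            assign_labels="discretize", random_state=0)
    return sc.fit_predict(W)
```

Because the step operates only on an already-computed coefficient matrix, it can be appended to most self-representation-based subspace clustering pipelines, which is consistent with the complementarity claim in the abstract.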