Joint Embedding Learning and Low-Rank Approximation: A Framework for Incomplete Multiview Learning

IEEE Trans Cybern. 2021 Mar;51(3):1690-1703. doi: 10.1109/TCYB.2019.2953564. Epub 2021 Feb 17.

Abstract

In real-world applications, not all instances in multiview data are represented in every view. Incomplete multiview learning (IML) has emerged to deal with such incomplete data. In this article, we propose the joint embedding learning and low-rank approximation (JELLA) framework for IML. The JELLA framework approximates the incomplete data with a set of low-rank matrices and learns a full, common embedding through linear transformations. Several existing IML methods can be unified as special cases of the framework. More interestingly, with the guidance of the framework, some linear transformation-based complete multiview methods can be adapted to IML directly. Thus, the JELLA framework improves the efficiency of processing incomplete multiview data and bridges the gap between complete multiview learning and IML. The framework can also guide the development of new algorithms. As an illustration, we propose, within the framework, the IML with block-diagonal representation (IML-BDR) method. Assuming that the sampled examples have an approximately linear subspace structure, IML-BDR uses a block-diagonal structure prior to learn the full embedding, which leads to more accurate clustering. A convergent alternating iterative algorithm with the successive over-relaxation (SOR) technique is devised for optimization. Experimental results on various datasets demonstrate the effectiveness of IML-BDR.
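
To make the general idea concrete, below is a minimal, illustrative sketch of a JELLA-style alternating scheme: each incomplete view is approximated by a low-rank matrix, and a common embedding is learned through view-specific linear transforms. It is not the authors' algorithm; the function name jella_sketch, the ridge parameter lam, the fixed truncation rank, and the update order are assumptions made only for illustration.

    import numpy as np

    def jella_sketch(views, masks, dim=5, rank=5, n_iter=50, lam=1.0, seed=0):
        """Toy alternating scheme in the spirit of the JELLA framework (illustrative only).

        views : list of (d_v, n) arrays; columns of missing instances are filled with zeros
        masks : list of boolean (n,) arrays; True where the instance is observed in that view
        Returns the learned common embedding Y of shape (dim, n).
        """
        rng = np.random.default_rng(seed)
        n = views[0].shape[1]
        Y = rng.standard_normal((dim, n))             # common embedding over all instances
        Zs = [X.copy() for X in views]                # low-rank completions of each view
        Ws = [rng.standard_normal((X.shape[0], dim)) for X in views]

        for _ in range(n_iter):
            for v, (X, m) in enumerate(zip(views, masks)):
                # 1) Low-rank approximation: keep observed columns, fill missing ones
                #    from the current reconstruction, then truncate the SVD.
                Z = Ws[v] @ Y
                Z[:, m] = X[:, m]
                U, s, Vt = np.linalg.svd(Z, full_matrices=False)
                Zs[v] = (U[:, :rank] * s[:rank]) @ Vt[:rank]

                # 2) View-specific linear transform: ridge least squares for Z^v ~ W^v Y.
                Ws[v] = Zs[v] @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(dim))

            # 3) Common embedding: least squares over all views.
            A = sum(W.T @ W for W in Ws) + lam * np.eye(dim)
            B = sum(W.T @ Z for W, Z in zip(Ws, Zs))
            Y = np.linalg.solve(A, B)

        return Y

    if __name__ == "__main__":
        # Synthetic two-view data with roughly 30% of instances missing per view.
        rng = np.random.default_rng(1)
        n = 100
        ground = rng.standard_normal((5, n))
        views, masks = [], []
        for d in (20, 30):
            X = rng.standard_normal((d, 5)) @ ground + 0.05 * rng.standard_normal((d, n))
            m = rng.random(n) > 0.3
            X[:, ~m] = 0.0
            views.append(X)
            masks.append(m)
        Y = jella_sketch(views, masks, dim=5, rank=5)
        print("learned embedding shape:", Y.shape)

The sketch omits the block-diagonal prior and the SOR acceleration used by IML-BDR; it only shows how low-rank view completion and a shared linear-transform embedding can be alternated under one objective.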