Multiview Orthonormalized Partial Least Squares: Regularizations and Deep Extensions

IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4371-4385. doi: 10.1109/TNNLS.2021.3116784. Epub 2023 Aug 4.

Abstract

In this article, we establish a family of subspace-based learning methods for multiview learning using least squares as the fundamental basis. Specifically, we propose a novel unified multiview learning framework called multiview orthonormalized partial least squares (MvOPLS) to learn a classifier over a common latent space shared by all views. Regularization is then leveraged to strengthen the proposed framework by providing three types of regularizers on its basic ingredients: model parameters, decision values, and latent projected points. With a set of regularizers derived from various priors, we not only recast most existing multiview learning methods into the proposed framework with properly chosen regularizers but also propose two novel models. To further improve performance, we propose learning nonlinear transformations parameterized by deep networks. Extensive experiments are conducted on multiview datasets for both feature extraction and cross-modal retrieval. The results show that subspace-based learning of a common latent space is effective, that its nonlinear extension further boosts performance, and, most importantly, that one of the two proposed methods with the nonlinear extension outperforms all compared methods.
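To make the framework concrete, the sketch below illustrates one simplified reading of the abstract: per-view orthonormalized-PLS projections into a shared latent space, followed by a regularized least-squares classifier on that space. It is not the authors' exact algorithm; the function names (fit_mvopls, predict), the averaging of per-view latent points, and the ridge parameter lam are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact method): for each view v we solve
# an OPLS-style generalized eigenproblem
#   max_W  tr(W^T X_v^T Y Y^T X_v W)   s.t.   W^T (X_v^T X_v + lam I) W = I,
# average the per-view latent points to form a common latent space, and fit a
# least-squares (ridge) classifier on it.
import numpy as np
from scipy.linalg import eigh

def fit_mvopls(views, Y, n_components=10, lam=1e-2):
    """views: list of (n, d_v) arrays; Y: (n, c) one-hot label matrix."""
    Ws = []
    for X in views:
        A = X.T @ Y @ Y.T @ X                   # label-aligned scatter for view v
        B = X.T @ X + lam * np.eye(X.shape[1])  # regularized within-view scatter
        evals, evecs = eigh(A, B)               # generalized symmetric eigenproblem
        Ws.append(evecs[:, ::-1][:, :n_components])  # top eigenvectors (B-orthonormal)
    # Shared latent points: average of per-view projections (one simple choice).
    Z = np.mean([X @ W for X, W in zip(views, Ws)], axis=0)
    # Ridge-regularized least-squares classifier over the common latent space.
    C = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)
    return Ws, C

def predict(views, Ws, C):
    Z = np.mean([X @ W for X, W in zip(views, Ws)], axis=0)
    return np.argmax(Z @ C, axis=1)

# Toy usage with two random views and three classes.
rng = np.random.default_rng(0)
n, classes = 200, 3
labels = rng.integers(0, classes, n)
Y = np.eye(classes)[labels]
views = [rng.standard_normal((n, 50)), rng.standard_normal((n, 30))]
Ws, C = fit_mvopls(views, Y, n_components=5)
print(predict(views, Ws, C)[:10])
```

The deep extension described in the abstract would replace each linear projection X_v W_v with a view-specific neural network, keeping the same least-squares objective over the shared latent space.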