Learnable manifold alignment (LeMA): A semi-supervised cross-modality learning framework for land cover and land use classification

ISPRS J Photogramm Remote Sens. 2019 Jan;147:193-205. doi: 10.1016/j.isprsjprs.2018.10.006.

Abstract

In this paper, we aim at tackling a general but interesting cross-modality feature learning question in the remote sensing community: can a limited amount of highly discriminative (e.g., hyperspectral) training data improve the performance of a classification task that uses a large amount of poorly discriminative (e.g., multispectral) data? Traditional semi-supervised manifold alignment methods do not perform sufficiently well on such problems, since hyperspectral data are far more expensive and time-consuming to collect at scale than multispectral data. To this end, we propose a novel semi-supervised cross-modality learning framework, called learnable manifold alignment (LeMA). LeMA learns a joint graph structure directly from the data instead of using a fixed graph defined by a Gaussian kernel function. With the learned graph, we can further capture the data distribution by graph-based label propagation, which enables finding a more accurate decision boundary. Additionally, an optimization strategy based on the alternating direction method of multipliers (ADMM) is designed to solve the proposed model. Extensive experiments on two hyperspectral-multispectral datasets demonstrate the superiority and effectiveness of the proposed method in comparison with several state-of-the-art methods.
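For orientation, the sketch below illustrates the fixed-graph baseline the abstract contrasts with, not the authors' LeMA model: a Gaussian-kernel affinity graph followed by graph-based label propagation (in the style of Zhou et al.). All function names and parameters (sigma, alpha, n_iter) are illustrative assumptions introduced here.

    # Minimal sketch, assuming dense data and small n; not the authors' code.
    import numpy as np

    def gaussian_graph(X, sigma=1.0):
        """Affinity matrix W with W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-sq_dists / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)  # no self-loops
        return W

    def label_propagation(W, y, labeled_mask, alpha=0.99, n_iter=100):
        """Propagate labels over the normalized graph S = D^-1/2 W D^-1/2."""
        d = W.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
        S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        n_classes = int(y[labeled_mask].max()) + 1
        Y = np.zeros((len(y), n_classes))
        Y[labeled_mask, y[labeled_mask]] = 1.0  # clamp the known labels
        F = Y.copy()
        for _ in range(n_iter):
            F = alpha * S @ F + (1.0 - alpha) * Y  # diffuse, then re-clamp
        return F.argmax(axis=1)

LeMA's departure from this baseline, per the abstract, is that the graph W itself becomes a variable learned jointly with the alignment (solved via ADMM) rather than being fixed by the Gaussian kernel above.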

Keywords: Cross-modality; Graph learning; Hyperspectral; Manifold alignment; Multispectral; Remote sensing; Semi-supervised learning.