MHF-Net: An Interpretable Deep Network for Multispectral and Hyperspectral Image Fusion

IEEE Trans Pattern Anal Mach Intell. 2022 Mar;44(3):1457-1473. doi: 10.1109/TPAMI.2020.3015691. Epub 2022 Feb 3.

Abstract

Multispectral and hyperspectral image fusion (MS/HS fusion) aims to fuse a high-resolution multispectral (HrMS) image and a low-resolution hyperspectral (LrHS) image into a high-resolution hyperspectral (HrHS) image, and has become one of the most commonly addressed problems in hyperspectral image processing. In this paper, we design a network architecture specifically for the MS/HS fusion task, called MHF-net, which is not only clearly interpretable but also reasonably embeds the well-studied linear mappings that link the HrHS image to the HrMS and LrHS images. In particular, we first construct an MS/HS fusion model that merges the generative (observation) models of the two low-resolution images with a low-rankness prior on the HrHS image into a concise formulation, and we then build the proposed network by unfolding the proximal gradient algorithm that solves this model. Owing to this careful design of the model and algorithm, all the fundamental modules of MHF-net have clear physical meanings and are thus easily interpretable. This not only makes it easy to intuitively observe and analyze what happens inside the network, but also leads to good generalization capability. Based on the MHF-net architecture, we further design two deep learning regimes for two common cases in practice: consistent MHF-net and blind MHF-net. The former suits the case where the spectral and spatial responses of the training and testing data are consistent, as assumed in most previous supervised MS/HS fusion research. The latter ensures good generalization when the spectral and spatial responses of the training and testing data are mismatched, and even across different sensors, which is generally considered a challenging issue for supervised MS/HS fusion methods. Experimental results on simulated and real data substantiate the superiority of our method, both visually and quantitatively, compared with state-of-the-art methods along this line of research.
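
For reference, the "well-studied linear mappings" mentioned above refer to the standard observation model used throughout the MS/HS fusion literature. The sketch below states it in conventional notation; the symbol names (X, Y, Z, R, C, A, B) are assumed conventions and may differ from the paper's own:

    % Observation model (X: HrHS, Y: HrMS, Z: LrHS; pixels as rows):
    %   Y = X R,   R in R^{S x s}: spectral response (s << S bands)
    %   Z = C X,   C in R^{hw x HW}: spatial blurring + downsampling
    % Low-rankness (rank(X) = r <= S) lets X be written on r spectral
    % bases, s of which are fixed by Y, leaving only r - s unknowns:
    \[
      X \approx Y A + \bar{Y} B, \qquad
      \min_{\bar{Y}} \ \tfrac{1}{2}\,\big\| C\,(Y A + \bar{Y} B) - Z \big\|_F^2
      + \lambda\, f\big(Y A + \bar{Y} B\big).
    \]
    % Proximal gradient iteration, unfolded into network stages:
    \[
      \bar{Y}^{(k+1)} = \mathrm{prox}_{\eta\lambda f}\!\Big(
        \bar{Y}^{(k)} - \eta\, C^{\top}\big(C (Y A + \bar{Y}^{(k)} B) - Z\big) B^{\top}
      \Big).
    \]

Each unfolded iteration then becomes one network stage, with the proximal operator replaced by a learnable module. Below is a minimal PyTorch sketch of one such stage; ProxStage, mix, and the use of bilinear upsampling as the adjoint of C are illustrative assumptions, not the released MHF-net code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def mix(img, M):
        # Per-pixel spectral mixing: (B, C_in, H, W) x (C_in, C_out)
        # -> (B, C_out, H, W); plays the role of right-multiplying by A or B.
        return torch.einsum('bchw,cd->bdhw', img, M)

    class ProxStage(nn.Module):
        """One unfolded proximal-gradient stage (hypothetical sketch).

        Takes a gradient step on 0.5 * ||C (Y A + Ybar B) - Z||_F^2 with
        respect to Ybar, then applies a small CNN standing in for the
        proximal operator of the regularizer f.
        """

        def __init__(self, n_unknown, step=0.1):
            super().__init__()
            self.step = nn.Parameter(torch.tensor(step))  # learnable step size eta
            self.prox = nn.Sequential(                    # learned prox of f
                nn.Conv2d(n_unknown, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_unknown, 3, padding=1),
            )

        def forward(self, ybar, y, z, A, B, downsample):
            x = mix(y, A) + mix(ybar, B)        # current HrHS estimate X
            r = downsample(x) - z               # LrHS residual  C X - Z
            # Adjoint C^T approximated by bilinear upsampling (assumption).
            grad = mix(F.interpolate(r, size=x.shape[-2:], mode='bilinear',
                                     align_corners=False), B.t())
            ybar = ybar - self.step * grad      # gradient step on data term
            return ybar + self.prox(ybar)       # residual proximal step

A full network in this style would stack several such stages and share the physical operators across them; for a 4x resolution ratio, downsample could be as simple as lambda t: F.avg_pool2d(t, 4).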