HSGAN: Hyperspectral Reconstruction From RGB Images With Generative Adversarial Network

IEEE Trans Neural Netw Learn Syst. 2023 Aug 10:PP. doi: 10.1109/TNNLS.2023.3300099. Online ahead of print.

Abstract

Hyperspectral (HS) reconstruction from RGB images denotes the recovery of whole-scene HS information, a task that has recently attracted much attention. State-of-the-art approaches often adopt convolutional neural networks to learn the mapping from RGB images to HS images. However, their reconstruction accuracy is often inconsistent across different scenes, and it further degrades when the input RGB images are corrupted by real-world noise rather than clean. To improve HS reconstruction accuracy and robustness across different scenes and input conditions, we present an effective HSGAN framework with a two-stage adversarial training strategy. The generator is a four-level top-down architecture that extracts and combines features on multiple scales. To generalize well to real-world noisy images, we further propose a spatial-spectral attention block (SSAB) that learns both spatial-wise and channel-wise relations. We conduct HS reconstruction experiments on both clean and real-world noisy RGB images across five well-known HS datasets. The results demonstrate that HSGAN outperforms existing methods. Our code is available at https://github.com/zhaoyuzhi/HSGAN.
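To make the idea of joint spatial-wise and channel-wise reweighting concrete, the following is a minimal NumPy sketch of an attention block in the spirit of the SSAB. It is an illustration only, not the authors' implementation: the pooling choices, the tiny MLP (`w1`, `w2`), and the gating order are assumptions, since the abstract does not specify the block's internals.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_spectral_attention(feat, w1, w2):
    """Hypothetical SSAB-style block: reweight a feature map of shape
    (C, H, W) with a channel (spectral) gate followed by a spatial gate.
    The internal design is an assumption for illustration."""
    # Channel attention: global average pool -> tiny MLP -> sigmoid gate
    pooled = feat.mean(axis=(1, 2))              # (C,)
    ch_att = sigmoid(w2 @ np.tanh(w1 @ pooled))  # (C,)
    feat = feat * ch_att[:, None, None]
    # Spatial attention: pool across channels -> per-pixel sigmoid gate
    sp_att = sigmoid(feat.mean(axis=0))          # (H, W)
    return feat * sp_att[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))  # bottleneck weights (assumed)
w2 = rng.standard_normal((C, C // 2))
y = spatial_spectral_attention(x, w1, w2)
print(y.shape)  # (8, 4, 4) -- attention preserves the feature-map shape
```

In a trained network such gates would be learned end-to-end inside the generator; here the weights are random simply to show that the block leaves the feature-map shape unchanged while modulating its values.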