Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN

IEEE Trans Cybern. 2020 Jan;50(1):100-111. doi: 10.1109/TCYB.2018.2864670. Epub 2018 Sep 18.

Abstract

Multisensor fusion is of great importance in Earth observation related applications. For instance, hyperspectral images (HSIs) provide rich spectral information while light detection and ranging (LiDAR) data provide elevation information, and combining HSI and LiDAR data can yield better classification performance. In this paper, an unsupervised feature extraction framework, named patch-to-patch convolutional neural network (PToP CNN), is proposed for collaborative classification of hyperspectral and LiDAR data. More specifically, a three-tower PToP mapping is first developed to seek an accurate representation from HSI to LiDAR data, aiming to merge multiscale features between the two different sources. Then, by integrating the hidden layers of the designed PToP CNN, the extracted features are expected to possess deeply fused characteristics. Accordingly, features from different hidden layers are concatenated into a stacked vector and fed into three fully connected layers. To verify the effectiveness of the proposed classification framework, experiments are conducted on two benchmark remote sensing data sets. The experimental results demonstrate that the proposed method provides superior performance compared with several state-of-the-art classifiers, such as the two-branch CNN and the context CNN.
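To make the described pipeline concrete, the following is a minimal PyTorch sketch of a patch-to-patch style fusion network: several convolutional towers map an HSI patch toward a LiDAR patch, their hidden-layer features are concatenated into one stacked vector, and three fully connected layers perform classification. All layer widths, kernel sizes, patch sizes, and the number of classes are illustrative assumptions for demonstration; they do not reproduce the authors' exact PToP CNN configuration, and the three towers here differ only in width rather than in receptive-field scale.

# Illustrative sketch only; hyperparameters are assumptions, not the paper's settings.
import torch
import torch.nn as nn


class PToPTower(nn.Module):
    """One convolutional tower mapping an HSI patch toward a LiDAR patch."""

    def __init__(self, in_bands, width):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_bands, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Single-channel head so the tower can be trained (unsupervised)
        # to reconstruct the corresponding LiDAR patch.
        self.to_lidar = nn.Conv2d(width, 1, kernel_size=1)

    def forward(self, hsi_patch):
        hidden = self.encode(hsi_patch)       # hidden features used for fusion
        lidar_pred = self.to_lidar(hidden)    # HSI-to-LiDAR reconstruction
        return hidden, lidar_pred


class PToPClassifier(nn.Module):
    """Concatenates hidden features from three towers and classifies them."""

    def __init__(self, in_bands=144, widths=(16, 32, 64), patch=7, classes=15):
        super().__init__()
        self.towers = nn.ModuleList(PToPTower(in_bands, w) for w in widths)
        feat_dim = sum(widths) * patch * patch
        self.classify = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 128), nn.ReLU(inplace=True),
            nn.Linear(128, classes),
        )

    def forward(self, hsi_patch):
        hiddens, lidar_preds = zip(*(t(hsi_patch) for t in self.towers))
        # Stack hidden-layer features from all towers into one vector.
        stacked = torch.cat([h.flatten(1) for h in hiddens], dim=1)
        return self.classify(stacked), lidar_preds


# Usage: a batch of four 7x7 HSI patches with 144 spectral bands (assumed values).
model = PToPClassifier()
logits, lidar_preds = model(torch.randn(4, 144, 7, 7))
print(logits.shape)  # torch.Size([4, 15])

In this sketch the LiDAR reconstruction heads would drive the unsupervised PToP training, while the classifier consumes only the concatenated hidden features, mirroring the two-stage use of hidden layers described in the abstract.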