Graph-in-Graph Convolutional Network for Hyperspectral Image Classification

IEEE Trans Neural Netw Learn Syst. 2022 Jun 20:PP. doi: 10.1109/TNNLS.2022.3182715. Online ahead of print.

Abstract

With the development of hyperspectral sensors, accessible hyperspectral images (HSIs) are increasing, and pixel-oriented classification has attracted much attention. Recently, graph convolutional networks (GCNs) have been proposed to process graph-structured data in non-Euclidean domains and have been employed in HSI classification. However, most GCN-based methods struggle to fully exploit the information of ground objects because of feature aggregation. To address this issue, in this article, we propose a graph-in-graph (GiG) model and a related GiG convolutional network (GiGCN) for HSI classification from a superpixel viewpoint. The GiG representation covers information inside and outside superpixels, corresponding, respectively, to the local and global characteristics of ground objects. Concretely, after segmenting the HSI into disjoint superpixels, each superpixel is converted into an internal graph. Meanwhile, an external graph is constructed according to the spatial adjacency relationships among superpixels. Significantly, each node in the external graph embeds a corresponding internal graph, forming the so-called GiG structure. Then, GiGCN, composed of internal graph convolution and external graph convolution (EGC), is designed to extract hierarchical features and integrate them across multiple scales, improving the discriminability of GiGCN. Ensemble learning is incorporated to further boost the robustness of GiGCN. It is worth noting that we are the first to propose the GiG framework from a superpixel perspective and the GiGCN scheme for HSI classification. Experimental results on four benchmark datasets demonstrate that our proposed method is effective and feasible for HSI classification with limited labeled samples. For study replication, the code developed for this study is available at https://github.com/ShuGuoJ/GiGCN.git.
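As a rough illustration of the GiG idea described above, the following minimal Python sketch builds an internal graph for each superpixel (a Gaussian-weighted k-nearest-neighbour adjacency over pixel spectra) and an external graph over spatially adjacent superpixels, then applies one plain symmetric-normalised graph-convolution step at each level. The function names, the kNN and Gaussian-weight choices, the mean-pooling of internal nodes, and the single-layer convolutions are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

import numpy as np
from skimage.segmentation import slic

def gaussian_knn_adjacency(X, k=10, sigma=1.0):
    # Dense adjacency from pairwise spectral distances (illustrative choice).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)                       # no self-loops here; added in graph_conv
    keep = np.argsort(-A, axis=1)[:, :k]           # k strongest neighbours per node
    mask = np.zeros_like(A, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return np.where(mask | mask.T, A, 0.0)         # symmetrise

def graph_conv(A, H, W):
    # One graph-convolution step: ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def gig_features(hsi, n_segments=200, k=10, hidden=32, out=16, seed=0):
    # Hypothetical GiG pipeline: internal convolution per superpixel, then external convolution.
    H_img, W_img, B = hsi.shape
    rng = np.random.default_rng(seed)
    labels = slic(hsi, n_segments=n_segments, compactness=0.1,
                  channel_axis=-1, start_label=0)
    n_sp = labels.max() + 1
    W_in = rng.standard_normal((B, hidden)) * 0.1  # random weights stand in for learned ones
    W_ex = rng.standard_normal((hidden, out)) * 0.1

    # Internal graphs: one per superpixel, pooled to a single node embedding.
    sp_emb = np.zeros((n_sp, hidden))
    for s in range(n_sp):
        X = hsi[labels == s].reshape(-1, B)        # pixels inside superpixel s
        A_in = gaussian_knn_adjacency(X, k=max(1, min(k, len(X) - 1)))
        sp_emb[s] = graph_conv(A_in, X, W_in).mean(0)

    # External graph: superpixels sharing a boundary are connected.
    A_ex = np.zeros((n_sp, n_sp))
    right = labels[:, :-1] != labels[:, 1:]
    down = labels[:-1, :] != labels[1:, :]
    for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
        A_ex[a, b] = A_ex[b, a] = 1.0
    for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
        A_ex[a, b] = A_ex[b, a] = 1.0

    return graph_conv(A_ex, sp_emb, W_ex), labels  # per-superpixel features + segmentation map

In this sketch, a classifier head (and the ensemble over multiple segmentation scales mentioned in the abstract) would sit on top of the returned per-superpixel features, with each pixel inheriting the prediction of its superpixel.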