Contrastive Learning-Based Dual Dynamic GCN for SAR Image Scene Classification

IEEE Trans Neural Netw Learn Syst. 2022 May 20:PP. doi: 10.1109/TNNLS.2022.3174873. Online ahead of print.

Abstract

Synthetic aperture radar (SAR) image scene classification is a typical label-limited task, so it is valuable to explore networks that can exploit labeled and unlabeled samples simultaneously. The graph convolutional network (GCN) is a powerful semisupervised learning paradigm that helps capture the topological relationships among scenes in SAR images. However, existing GCNs perform unsatisfactorily when applied directly to SAR image scene classification with limited labels, largely because few methods exist to characterize the nodes and edges of SAR images. To tackle these issues, we propose a contrastive learning-based dual dynamic GCN (DDGCN) for SAR image scene classification. Specifically, we design a novel contrastive loss to capture the structures of views and scenes, and we develop a clustering-based contrastive self-supervised learning model that maps SAR images from pixel space to a high-level embedding space, which facilitates the subsequent node representation and message passing in GCNs. Afterward, we construct DDGCN as a dual-network framework with multiple features and parameter sharing. One branch is a dynamic GCN that preserves the local consistency and nonlocal dependency within the same scene with the help of a node attention module and a dynamic correlation matrix learning algorithm. The other is a multiscale, multidirectional fully connected network (FCN) that enlarges the discrepancies between different scenes. Finally, the features obtained by the two branches are fused for classification. A series of experiments on synthetic and real SAR images demonstrates that the proposed method achieves consistently better classification performance than existing methods.
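As an illustrative sketch only (not the authors' implementation), the dual-branch idea can be mocked up in PyTorch as follows. All module names, dimensions, and the cosine-similarity adjacency below are assumptions; the sketch omits the node attention module, the contrastive pretraining stage, and the multiscale/multidirectional FCN structure, and simply fuses one dynamic-GCN branch with one FCN branch for classification.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicGCNLayer(nn.Module):
        """Graph convolution whose adjacency is re-estimated from the current
        node embeddings (a simplified stand-in for the paper's dynamic
        correlation matrix learning)."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x):
            # x: (num_nodes, in_dim). Pairwise cosine similarity as a
            # correlation estimate; softmax row-normalizes the adjacency.
            sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)
            adj = F.softmax(sim, dim=-1)          # (N, N) dynamic adjacency
            return F.relu(self.linear(adj @ x))   # message passing + transform

    class DualBranch(nn.Module):
        """Hypothetical dual network: a dynamic-GCN branch for intra-scene
        consistency and an FCN branch for inter-scene separation; the two
        branch features are concatenated and classified."""
        def __init__(self, feat_dim=128, hidden=64, num_classes=10):
            super().__init__()
            self.gcn = DynamicGCNLayer(feat_dim, hidden)
            self.fcn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
            self.classifier = nn.Linear(2 * hidden, num_classes)

        def forward(self, x):
            fused = torch.cat([self.gcn(x), self.fcn(x)], dim=-1)
            return self.classifier(fused)

    if __name__ == "__main__":
        embeddings = torch.randn(32, 128)   # 32 scene nodes in embedding space
        logits = DualBranch()(embeddings)
        print(logits.shape)                 # torch.Size([32, 10])

In this toy setup the adjacency is recomputed at every forward pass from the node features themselves, which is the sense in which the GCN branch is "dynamic"; the paper's actual correlation matrix learning algorithm and attention weighting are not reproduced here.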