StateNet: Deep State Learning for Robust Feature Matching of Remote Sensing Images

IEEE Trans Neural Netw Learn Syst. 2023 Jul;34(7):3284-3298. doi: 10.1109/TNNLS.2021.3120768. Epub 2023 Jul 6.

Abstract

Seeking good correspondences between two images is a fundamental and challenging problem in the remote sensing (RS) community, and it is a critical prerequisite for a wide range of feature-based visual tasks. In this article, we propose a flexible and general deep state learning network for both rigid and nonrigid feature matching, which provides a mechanism to transform the state of matches into latent canonical forms, thereby weakening the randomness of matching patterns. Unlike current conventional strategies (i.e., imposing a global geometric constraint or designing additional handcrafted descriptors), the proposed StateNet alternates between two steps: 1) recalibrating matchwise feature responses in the spatial domain and 2) leveraging the spatially local correlation across two sets of feature points to update the transformation. For this purpose, our network contains two novel operations: an adaptive dual-aggregation convolution (ADAConv) and a point rendering layer (PRL). Both operations are differentiable, so our network can be inserted into existing classification architectures to reduce the cost of establishing reliable correspondences. To demonstrate the robustness and universality of our approach, we conduct extensive feature-matching experiments on a variety of real image pairs. The experiments show that StateNet significantly outperforms state-of-the-art alternatives.
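The alternation the abstract describes — reweighting matches, then refitting a transformation from the reweighted matches — can be illustrated with a minimal classical sketch. The code below is *not* the paper's learned network (ADAConv and PRL are trainable layers); it is an iteratively reweighted least-squares analogue of the same two-step loop, with an affine model standing in for the learned transformation update. All function names here are illustrative.

```python
import numpy as np

def weighted_affine(src, dst, w):
    """Weighted least-squares fit of an affine map dst ~ [src, 1] @ M.

    src, dst: (N, 2) matched point coordinates; w: (N,) per-match weights.
    Returns M of shape (3, 2) holding the 2x2 linear part and translation.
    """
    X = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords, (N, 3)
    W = np.sqrt(w)[:, None]                       # row weights for least squares
    M, *_ = np.linalg.lstsq(W * X, W * dst, rcond=None)
    return M

def alternate_match(src, dst, iters=5, sigma=1.0):
    """Alternate two steps, loosely mirroring the abstract's loop:
    1) recalibrate per-match weights from spatial residuals;
    2) update the transformation from the reweighted matches.
    Returns the final soft inlier weights and the fitted transform.
    """
    w = np.ones(len(src))
    for _ in range(iters):
        M = weighted_affine(src, dst, w)                     # step 2: transform update
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M  # map src through M
        r = np.linalg.norm(pred - dst, axis=1)               # per-match residual
        w = np.exp(-(r / sigma) ** 2)                        # step 1: soft reweighting
    return w, M
```

Given putative matches contaminated by gross outliers, a few iterations drive the outliers' weights toward zero while consistent matches keep weight near one, which is the "state change toward a canonical form" intuition in classical terms.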

MeSH terms

  • Algorithms*
  • Image Processing, Computer-Assisted / methods
  • Neural Networks, Computer*
  • Remote Sensing Technology