Feature Interaction Learning Network for Cross-Spectral Image Patch Matching

IEEE Trans Image Process. 2023;32:5564-5579. doi: 10.1109/TIP.2023.3313488. Epub 2023 Oct 10.

Abstract

Recently, feature relation learning has attracted extensive attention in cross-spectral image patch matching. However, most feature relation learning methods extract only shallow feature relations, and in doing so either lose useful discriminative features or introduce interfering features. Although the recent multi-branch feature difference learning network extracts useful discriminative features more thoroughly, its multi-branch structure carries a large number of parameters. Therefore, we propose a novel two-branch feature interaction learning network (FIL-Net). Specifically, we introduce a feature interaction learning idea for cross-spectral image patch matching and construct a new feature interaction learning module, which effectively mines the common and private features between cross-spectral image patches and extracts richer, deeper feature relations that are both invariant and discriminative. We also revisit feature extraction for the cross-spectral image patch matching task and construct a new two-branch residual feature extraction network with stronger feature extraction capability. In addition, we propose a multi-loss strongly constrained optimization strategy that promotes well-conditioned network optimization and efficient extraction of invariant and discriminative features. Furthermore, we construct a public VIS-LWIR patch dataset and a public SEN1-2 patch dataset, together with the corresponding experimental benchmarks, which both support future research and address the scarcity of existing cross-spectral image patch matching datasets. Extensive experiments show that the proposed FIL-Net achieves state-of-the-art performance in three different cross-spectral image patch matching scenarios.
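As a concrete starting point, the sketch below shows one plausible way to realize the two-branch setup described above in PyTorch: a shared (weight-tied) residual encoder processes both patches, a toy interaction block derives "common" and "private" responses from the two feature maps, and a small head scores the match. All module names, layer sizes, and the specific interaction operations (elementwise product for common features, absolute difference for private features) are illustrative assumptions; they do not reproduce the paper's actual FIL-Net architecture or its multi-loss optimization strategy.

```python
# Illustrative sketch only -- NOT the paper's FIL-Net. The residual encoder,
# layer sizes, and the common/private interaction operations are assumptions
# chosen for clarity.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Basic residual block used by the shared two-branch encoder."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))


class ToyFeatureInteraction(nn.Module):
    """Hypothetical interaction block: fuse 'common' and 'private' responses
    of the two spectral branches."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, fa, fb):
        common = fa * fb               # responses shared by both spectra
        private = torch.abs(fa - fb)   # responses unique to either spectrum
        return self.fuse(torch.cat([common, private], dim=1))


class ToyMatchNet(nn.Module):
    """Two-branch matcher: shared encoder -> interaction -> match score."""
    def __init__(self, in_channels: int = 1, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            ResidualBlock(channels),
            ResidualBlock(channels),
        )
        self.interact = ToyFeatureInteraction(channels)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),    # logit: match / non-match
        )

    def forward(self, patch_a, patch_b):
        fa = self.encoder(patch_a)     # e.g. visible patch
        fb = self.encoder(patch_b)     # e.g. long-wave infrared patch
        return self.head(self.interact(fa, fb))


if __name__ == "__main__":
    net = ToyMatchNet()
    a = torch.randn(4, 1, 64, 64)      # batch of VIS patches
    b = torch.randn(4, 1, 64, 64)      # batch of LWIR patches
    print(net(a, b).shape)             # torch.Size([4, 1])
```

In this toy design the product term keeps responses that co-occur in both spectra while the absolute difference keeps spectrum-specific responses, loosely mirroring the abstract's notion of common versus private features; the paper's actual interaction mechanism and loss formulation should be taken from the article itself.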