A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism

Sensors (Basel). 2022 May 11;22(10):3651. doi: 10.3390/s22103651.

Abstract

Pixel-level image fusion is an effective way to fully exploit the rich texture information of visible images and the salient target characteristics of infrared images. With the rapid development of deep learning in recent years, deep-learning-based image fusion algorithms have also achieved great success. However, owing to the lack of sufficient, reliable paired data and the absence of an ideal fusion result to serve as supervision, it is difficult to design a precise network training scheme. Moreover, handcrafted fusion strategies struggle to make full use of the available information, which easily causes redundancy and omission. To solve these problems, this paper proposes a multi-stage visible and infrared image fusion network based on an attention mechanism (MSFAM). Our method stabilizes training through a multi-stage scheme and enhances features with a learnable attention fusion block. To further improve performance, we design a Semantic Constraint module and a Push-Pull loss function for the fusion task. In qualitative comparisons with several recent methods, our model produces more visually pleasing and natural fusion results with stronger applicability. In quantitative experiments, MSFAM achieves the best results on three of the six metrics frequently used in fusion tasks, whereas the other methods score well on only one or a few metrics. In addition, a common high-level semantic task, object detection, is used to show that our results benefit downstream tasks more than single-modality images and the fusion results of existing methods. All of these experiments demonstrate the superiority and effectiveness of our algorithm.
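The abstract names two learnable components: an attention fusion block that weights visible and infrared features, and a Push-Pull loss. The sketch below is a minimal, hypothetical PyTorch reading of both ideas; the layer sizes, the convex-combination fusion rule, and the exact margin-based loss form are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionBlock(nn.Module):
    """Fuse visible and infrared feature maps with a learned spatial weight map."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel weight from the concatenated source features.
        self.weight_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # weight in [0, 1] assigned to the visible branch
        )

    def forward(self, feat_vis: torch.Tensor, feat_ir: torch.Tensor) -> torch.Tensor:
        w = self.weight_net(torch.cat([feat_vis, feat_ir], dim=1))
        # Convex combination: high w keeps visible texture, low w keeps IR targets.
        return w * feat_vis + (1.0 - w) * feat_ir

def push_pull_loss(fused: torch.Tensor, feat_vis: torch.Tensor,
                   feat_ir: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """One plausible "Push-Pull" objective (assumed form, may differ from the paper):
    pull the fused features toward both sources, push the two sources apart
    up to a margin so the fusion weights stay discriminative."""
    pull = F.mse_loss(fused, feat_vis) + F.mse_loss(fused, feat_ir)
    push = F.relu(margin - F.mse_loss(feat_vis, feat_ir))
    return pull + push
```

In use, one block of this kind could replace a handcrafted rule such as per-pixel averaging or max-selection at each fusion point of the encoder, letting the network learn where visible texture should dominate and where infrared saliency should, which is the motivation the abstract gives for avoiding manual fusion strategies.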

Keywords: attention mechanism; deep learning; image fusion.

MeSH terms

  • Algorithms*
  • Image Processing, Computer-Assisted* / methods

Grants and funding

This research received no external funding.