A brain-inspired approach for SAR-to-optical image translation based on diffusion models

Front Neurosci. 2024 Jan 30;18:1352841. doi: 10.3389/fnins.2024.1352841. eCollection 2024.

Abstract

Synthetic Aperture Radar (SAR) plays a crucial role in all-weather, day-and-night Earth observation owing to its distinctive imaging mechanism. However, SAR images are not as intuitive to interpret as optical images. Therefore, to make SAR images consistent with human cognitive habits and to assist inexperienced users in interpreting them, a generative model is needed to translate SAR images into optical ones. In this work, inspired by how the human brain processes information when painting, a novel conditional image-to-image translation framework based on the diffusion model is proposed for SAR-to-optical image translation. First, given the limited performance of existing CNN-based feature extraction modules, the model draws on self-attention and long-skip connection mechanisms to enhance feature extraction, aligning more closely with the memory paradigm observed in human brain neurons. Second, to address the scarcity of SAR-optical image pairs, a data augmentation scheme that does not leak the augmented mode into the generated mode is designed to improve data efficiency. The proposed SAR-to-optical image translation method is thoroughly evaluated on the SAR2Opt dataset. Experimental results demonstrate its capacity to synthesize high-fidelity optical images without introducing blurriness.
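The two mechanisms the abstract names, self-attention and long-skip connections, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which embeds these operations inside a diffusion U-Net); the function names, dimensions, and single-head formulation here are illustrative assumptions.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    x:   (n, d) array of n feature tokens with d channels each
    w_*: (d, d) projection matrices for queries, keys, and values
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows sum to 1
    return weights @ v                          # attention-weighted values

def attention_block_with_long_skip(x, w_q, w_k, w_v):
    """Attention block with a long-skip (residual) connection: the input
    features are added back to the attention output, so early features
    bypass the block unchanged, loosely analogous to memory recall."""
    return x + self_attention(x, w_q, w_k, w_v)
```

The additive skip keeps a direct path from input to output, which stabilizes training and preserves low-level detail; the attention term lets every token attend to every other, capturing long-range dependencies that plain convolutions miss.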

Keywords: SAR-to-optical image translation; brain-inspired approach; cognitive processes; diffusion model; synthetic aperture radar.

Grants and funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Natural Science Foundation of China (62101041) and the CAST Foundation (YGB-1-2023-0215).