Multimodal medical image fusion algorithm based on pulse coupled neural networks and nonsubsampled contourlet transform

Med Biol Eng Comput. 2023 Jan;61(1):155-177. doi: 10.1007/s11517-022-02697-8. Epub 2022 Nov 7.

Abstract

Combining two medical images from different modalities makes the resulting image more useful in the healthcare field. Medical image fusion combines two or more images acquired by different sensors to produce a single output image that presents more effective and useful information than either input alone. This paper proposes a multi-modal medical image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and pulse coupled neural networks (PCNN). The input images are decomposed by the NSCT into low- and high-frequency subbands, a PCNN-based fusion rule integrates both the low- and high-frequency subbands, and the inverse NSCT reconstructs the fused image. The resulting fused images help doctors with disease diagnosis and patient treatment. The proposed algorithm is tested on six groups of multi-modal medical images comprising 100 pairs of input images and is compared with eight fusion methods. Its performance is evaluated using the following fusion metrics: peak signal-to-noise ratio (PSNR), mutual information (MI), entropy (EN), weighted edge information (Q^{AB/F}), nonlinear correlation information entropy (Q_{NCIE}), standard deviation (SD), and average gradient (AG). Experimental results show that the proposed algorithm performs better than the compared medical image fusion methods and achieves promising results.
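Since the abstract only sketches the pipeline (NSCT decomposition, PCNN-based fusion of the subbands, inverse NSCT reconstruction) and the evaluation metrics, the Python sketch below is a minimal illustration rather than the paper's implementation: a single-level Gaussian low-pass split stands in for the multi-scale NSCT, a simplified PCNN firing count selects which coefficient is kept, and two helper functions show the PSNR and AG metrics. The function names (pcnn_firing_map, fuse_subbands, fuse_images) and all parameter values are assumptions made for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def pcnn_firing_map(stimulus, iterations=30, beta=0.1,
                    alpha_f=0.1, v_f=0.5, alpha_theta=0.2, v_theta=20.0):
    # Accumulate firing counts of a simplified PCNN; the stimulus is the
    # subband coefficient magnitude and the parameter values are illustrative,
    # not the settings used in the paper.
    F = np.zeros_like(stimulus)        # feeding input
    L = np.zeros_like(stimulus)        # linking input
    theta = np.ones_like(stimulus)     # dynamic threshold
    Y = np.zeros_like(stimulus)        # pulse output
    fire_count = np.zeros_like(stimulus)
    for _ in range(iterations):
        W = uniform_filter(Y, size=3)  # 3x3 neighbourhood of previous pulses
        F = np.exp(-alpha_f) * F + v_f * W + stimulus
        L = W
        U = F * (1.0 + beta * L)       # internal activity
        Y = (U > theta).astype(float)
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_count += Y
    return fire_count

def fuse_subbands(a, b):
    # Keep, per pixel, the coefficient whose PCNN neuron fires more often.
    fa = pcnn_firing_map(np.abs(a))
    fb = pcnn_firing_map(np.abs(b))
    return np.where(fa >= fb, a, b)

def fuse_images(img1, img2, sigma=2.0):
    # Toy two-band decomposition: a Gaussian low-pass split stands in for
    # the multi-scale, multi-directional NSCT used in the paper.
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    low1, low2 = gaussian_filter(img1, sigma), gaussian_filter(img2, sigma)
    high1, high2 = img1 - low1, img2 - low2
    fused_low = fuse_subbands(low1, low2)
    fused_high = fuse_subbands(high1, high2)
    return fused_low + fused_high      # reconstruction (inverse of the split)

def psnr(reference, fused, peak=255.0):
    # Peak signal-to-noise ratio between a reference and the fused image.
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def average_gradient(img):
    # Average gradient (AG): mean magnitude of local intensity changes.
    gx, gy = np.gradient(np.asarray(img, dtype=float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

In the actual method, the NSCT would supply several directional high-frequency subbands per decomposition level, and the PCNN-based selection would be applied to each subband before the inverse NSCT reconstructs the fused image.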

Keywords: Computed tomography; Magnetic resonance image; Medical image fusion; Nonsubsampled contourlet transform; Pulse coupled neural networks.

MeSH terms

  • Algorithms*
  • Benchmarking
  • Entropy
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Neural Networks, Computer*
  • Signal-To-Noise Ratio