KDE-GAN: A multimodal medical image-fusion model based on knowledge distillation and explainable AI modules

Comput Biol Med. 2022 Dec;151(Pt A):106273. doi: 10.1016/j.compbiomed.2022.106273. Epub 2022 Nov 3.

Abstract

Background: As medical images contain sensitive patient information, finding a publicly accessible dataset with patient permission is challenging. Furthermore, few large-scale datasets suitable for training image-fusion models are available. To address this issue, we propose a medical image-fusion model based on knowledge distillation (KD) and an explainable AI module-based generative adversarial network with dual discriminators (KDE-GAN).

Method: KD reduces the size of the dataset required for training by distilling a complex image-fusion model into a simpler model that retains the complex model's feature-extraction capability. The images generated by the explainable AI module show whether the discriminator can distinguish real images from fake ones. When the discriminator correctly judges an image based on its key features, training can be stopped early, reducing overfitting and the amount of data required for training.
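The distillation step described above can be sketched as a weighted loss that pushes a small student network to reproduce the teacher's intermediate features while still solving the fusion task. This is a minimal illustration only; the specific loss form, the feature-matching criterion, and the weighting factor `alpha` are assumptions, not the paper's exact objective.

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats, student_out, target, alpha=0.5):
    """Hypothetical KD objective: a feature-matching term (student mimics
    the teacher's feature maps) plus a task term (student output matches
    the fusion target), combined with an assumed weight alpha."""
    feat_term = np.mean((student_feats - teacher_feats) ** 2)
    task_term = np.mean((student_out - target) ** 2)
    return alpha * feat_term + (1.0 - alpha) * task_term

# Toy usage: a student that already matches its teacher incurs zero loss.
t_feats = np.ones((4, 4))
s_feats = np.ones((4, 4))
out = np.zeros((2, 2))
tgt = np.zeros((2, 2))
print(distillation_loss(s_feats, t_feats, out, tgt))  # 0.0
```

Because the feature term is driven by the teacher rather than by labels, the student can be trained on far less data than the teacher required, which is the dataset-reduction effect the method relies on.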

Results: By training using only small-scale datasets, the trained KDE-GAN can generate clear fused images. KDE-GAN fusion results were evaluated quantitatively using five metrics: spatial frequency, structural similarity, edge information transfer factor, normalized mutual information, and nonlinear correlation information entropy.
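Of the five metrics listed above, spatial frequency (SF) is the most self-contained: it measures overall activity in a fused image from its row-wise and column-wise gradients, with higher values indicating sharper detail. The sketch below follows the standard SF definition; it is an illustrative implementation, not the paper's evaluation code.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a grayscale image: the root of the summed
    squares of row frequency (horizontal differences) and column
    frequency (vertical differences)."""
    img = np.asarray(img, dtype=np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

# A flat image has no detail (SF = 0); a checkerboard has high SF.
flat = np.full((8, 8), 0.5)
checker = np.indices((8, 8)).sum(axis=0) % 2
print(spatial_frequency(flat))     # 0.0
print(spatial_frequency(checker))  # sqrt(2) ≈ 1.414
```

The remaining metrics (structural similarity, edge information transfer factor, normalized mutual information, nonlinear correlation information entropy) each compare the fused image against the source images and are typically taken from reference implementations.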

Conclusion: Experimental results show that the fused images generated by KDE-GAN are superior to those produced by state-of-the-art methods, both subjectively and objectively.

Keywords: Explainable AI module; Generative adversarial network with dual discriminators; Knowledge distillation; Multimodal medical-image fusion.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Benchmarking*
  • Entropy
  • Humans
  • Image Processing, Computer-Assisted