Discriminative fusion of moments-aligned latent representation of multimodality medical data

Phys Med Biol. 2023 Dec 26;69(1). doi: 10.1088/1361-6560/ad1271.

Abstract

Fusion of multimodal medical data provides multifaceted, disease-relevant information for diagnostic or prognostic prediction modeling. Traditional fusion strategies such as feature concatenation often fail to uncover the hidden complementary and discriminative patterns in high-dimensional multimodal data. To this end, we proposed a method that integrates multimodal medical data by matching their moments in a shared latent space, where the hidden information shared across modalities is gradually learned through optimization under multiple feature-collinearity and correlation constraints. We first obtained the hidden representation of each modality by learning mappings between the original domain and the shared latent space. Within this shared space, we applied several relational regularizations, including data-attribute preservation, feature collinearity, and feature-task correlation, to encourage learning of the associations inherent in multimodal data. The fused multimodal latent features were finally fed to a logistic regression classifier for diagnostic prediction. Extensive evaluations on three independent clinical datasets demonstrated the effectiveness of the proposed method in fusing multimodal data for medical prediction modeling.
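The pipeline described above (per-modality encoders into a shared latent space, a moment-matching alignment objective with correlation-based regularization, then logistic regression on the fused latent features) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear encoders, the loss weights, and the simplified moment-matching and feature-task-correlation losses are assumptions chosen for demonstration.

```python
# Minimal sketch of moment-aligned latent fusion for two modalities.
# All names (moment_matching_loss, task_correlation_loss, loss weights)
# are illustrative, not the paper's exact formulation.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

def moment_matching_loss(za, zb):
    """Align first and second moments (mean and covariance) of the two
    modalities' latent representations."""
    mean_gap = (za.mean(0) - zb.mean(0)).pow(2).sum()
    ca = (za - za.mean(0)).T @ (za - za.mean(0)) / (za.shape[0] - 1)
    cb = (zb - zb.mean(0)).T @ (zb - zb.mean(0)) / (zb.shape[0] - 1)
    cov_gap = (ca - cb).pow(2).sum()
    return mean_gap + cov_gap

def task_correlation_loss(z, y):
    """Reward latent features that correlate with the label; a simple
    stand-in for the feature-task correlation constraint."""
    zc = z - z.mean(0)
    yc = (y - y.mean()).unsqueeze(1)
    corr = (zc * yc).mean(0) / (zc.std(0) * yc.std() + 1e-8)
    return -corr.abs().mean()

# Toy data: 100 subjects, two modalities of different dimensionality.
xa, xb = torch.randn(100, 50), torch.randn(100, 30)
y = torch.randint(0, 2, (100,)).float()

# Linear mappings from each original domain into a shared latent space.
enc_a, enc_b = nn.Linear(50, 16), nn.Linear(30, 16)
opt = torch.optim.Adam(
    list(enc_a.parameters()) + list(enc_b.parameters()), lr=1e-2)

for _ in range(200):
    za, zb = enc_a(xa), enc_b(xb)
    loss = moment_matching_loss(za, zb) + 0.1 * (
        task_correlation_loss(za, y) + task_correlation_loss(zb, y))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fuse the latent features and fit the downstream classifier.
z = torch.cat([enc_a(xa), enc_b(xb)], dim=1).detach().numpy()
clf = LogisticRegression(max_iter=1000).fit(z, y.numpy())
```

The paper's full objective also includes data-attribute preservation and feature-collinearity regularizations; the sketch keeps only two terms to stay compact.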

Keywords: diagnosis and prognosis modeling; feature collinearity; latent representation learning; moment matching; multimodal medical data fusion.

MeSH terms

  • Machine Learning*
  • Medical Informatics*