Multi-Modality Fusion & Inductive Knowledge Transfer Underlying Non-Sparse Multi-Kernel Learning and Distribution Adaption

IEEE/ACM Trans Comput Biol Bioinform. 2023 Jul-Aug;20(4):2387-2397. doi: 10.1109/TCBB.2022.3142748. Epub 2023 Aug 9.

Abstract

With the development of sensors, more and more multimodal data are being accumulated, especially in the biomedical and bioinformatics fields, making multimodal data analysis increasingly important. In this study, we combine multi-kernel learning and transfer learning to propose a feature-level multi-modality fusion model for settings with insufficient training samples. Specifically, we first extend kernel ridge regression to its multi-kernel version under the lp-norm constraint to explore complementary patterns contained in multimodal data. We then use marginal probability distribution adaptation to minimize the distribution difference between the source domain and the target domain, addressing the problem of insufficient training samples. Based on the epilepsy EEG data provided by the University of Bonn, we construct 12 multi-modality & transfer scenarios to evaluate our model. Experimental results show that, compared with the baselines, our model performs better in most scenarios.
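To make the two ingredients named in the abstract concrete, the sketch below illustrates (i) lp-norm multi-kernel ridge regression via alternating optimization, with one precomputed kernel per modality, and (ii) an empirical MMD term that can serve as a marginal distribution adaptation penalty between source and target samples. This is a minimal, generic sketch assuming standard formulations (Kloft-style lp-norm weight updates and the usual trace-form MMD); all function names, parameters, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mkl_ridge_fit(K_list, y, p=2.0, lam=1.0, n_iter=20):
    """Alternating optimization for lp-norm multi-kernel ridge regression (illustrative sketch).

    K_list : list of (n, n) kernel matrices, one per modality.
    y      : (n,) training targets.
    p      : norm parameter (p >= 1); p > 1 gives non-sparse kernel weights.
    lam    : ridge regularization strength.
    Returns the dual coefficients alpha and the kernel weights theta.
    """
    M = len(K_list)
    n = len(y)
    theta = np.full(M, M ** (-1.0 / p))        # uniform weights with ||theta||_p = 1
    alpha = np.zeros(n)
    for _ in range(n_iter):
        # Step 1: kernel ridge regression with the current combined kernel.
        K = sum(t * Km for t, Km in zip(theta, K_list))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        # Step 2: closed-form lp-norm weight update, ||w_m||^2 = theta_m^2 * alpha' K_m alpha.
        w_sq = np.array([t ** 2 * alpha @ Km @ alpha for t, Km in zip(theta, K_list)])
        w_sq = np.maximum(w_sq, 1e-12)         # guard against degenerate modalities
        theta = w_sq ** (1.0 / (p + 1))
        theta /= np.sum(theta ** p) ** (1.0 / p)   # project back onto ||theta||_p = 1
    return alpha, theta

def mmd_penalty(K, n_src, n_tgt):
    """Empirical MMD^2 between source and target marginals, written as trace(K M).

    K : (n_src + n_tgt) x (n_src + n_tgt) kernel matrix over the pooled data,
        with source samples ordered first.
    """
    e = np.concatenate([np.full(n_src, 1.0 / n_src), np.full(n_tgt, -1.0 / n_tgt)])
    M_mat = np.outer(e, e)                     # MMD coefficient matrix
    return float(np.sum(K * M_mat))            # equals trace(K @ M_mat)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X1 = rng.normal(size=(60, 8))              # synthetic modality 1
    X2 = rng.normal(size=(60, 5))              # synthetic modality 2
    y = rng.choice([-1.0, 1.0], size=60)
    rbf = lambda X, g: np.exp(-g * ((X[:, None] - X[None]) ** 2).sum(-1))
    alpha, theta = mkl_ridge_fit([rbf(X1, 0.1), rbf(X2, 0.1)], y, p=2.0, lam=1.0)
    print("kernel weights:", theta)
```

In a transfer setting such as the one described above, the MMD term would be added to the ridge objective (weighted by a trade-off parameter) so that the learned kernel combination and regressor also shrink the gap between source-domain and target-domain marginals.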