MOMA: a multi-task attention learning algorithm for multi-omics data interpretation and classification

Bioinformatics. 2022 Apr 12;38(8):2287-2296. doi: 10.1093/bioinformatics/btac080.

Abstract

Motivation: Accurate diagnostic classification and biological interpretation are important in biology and medicine, which are data-rich sciences. Thus, integrating different data types is necessary to achieve high predictive accuracy for clinical phenotypes, and more comprehensive analyses are required to predict the prognosis of complex diseases.

Results: Here, we propose a novel multi-task attention learning algorithm for multi-omics data, termed MOMA, which captures important biological processes for high diagnostic performance and interpretability. MOMA vectorizes features and modules using a geometric approach and focuses on important modules in multi-omics data via an attention mechanism. Experiments using public data on Alzheimer's disease and cancer with various classification tasks demonstrated the superior performance of this approach. The utility of MOMA was further verified by comparing models with the attention mechanism turned on or off and by biological analysis.
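To make the core idea concrete, the sketch below illustrates module-level attention over a single omics view: features are embedded into a set of module vectors, an attention layer scores the modules, and the attention-weighted summary feeds a classifier. This is a minimal illustration of the general mechanism described above, not the authors' implementation (which is available at the GitHub link below); the class name, dimensions, and single-view setup are assumptions for the example.

```python
# Hypothetical sketch of module-level attention for one omics view
# (illustrative only; see the MOMA repository for the actual method).
import torch
import torch.nn as nn


class ModuleAttentionClassifier(nn.Module):
    def __init__(self, n_features, n_modules=32, module_dim=64, n_classes=2):
        super().__init__()
        # Embed raw omics features into n_modules module vectors.
        self.to_modules = nn.Linear(n_features, n_modules * module_dim)
        self.n_modules, self.module_dim = n_modules, module_dim
        # One attention score per module vector.
        self.attn = nn.Linear(module_dim, 1)
        self.classifier = nn.Linear(module_dim, n_classes)

    def forward(self, x):
        # x: (batch, n_features) for a single omics type.
        m = self.to_modules(x).view(-1, self.n_modules, self.module_dim)
        weights = torch.softmax(self.attn(m).squeeze(-1), dim=1)   # (batch, n_modules)
        summary = (weights.unsqueeze(-1) * m).sum(dim=1)           # attention-weighted sum
        return self.classifier(summary), weights                   # logits + module weights


# Example usage: in a multi-omics setting, each view (e.g. expression, methylation)
# could get such a branch, with the learned module weights inspected for interpretation.
model = ModuleAttentionClassifier(n_features=1000)
logits, module_weights = model(torch.randn(4, 1000))
```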

Availability and implementation: The source code is available at https://github.com/dmcb-gist/MOMA.

Supplementary information: Supplementary materials are available at Bioinformatics online.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Humans
  • Multiomics*
  • Neoplasms* / diagnosis
  • Neoplasms* / genetics
  • Phenotype
  • Software