A transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics

Nat Biomed Eng. 2023 Jun;7(6):743-755. doi: 10.1038/s41551-023-01045-x. Epub 2023 Jun 12.

Abstract

During the diagnostic process, clinicians leverage multimodal information, such as the chief complaint, medical images and laboratory test results. Deep-learning models for aiding diagnosis have yet to meet this requirement of leveraging multimodal information. Here we report a transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner. Rather than learning modality-specific features, the model leverages embedding layers to convert images and unstructured and structured text into visual tokens and text tokens, and uses bidirectional blocks with intramodal and intermodal attention to learn holistic representations of radiographs, the unstructured chief complaint and clinical history, and structured clinical information such as laboratory test results and patient demographic information. The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary disease (by 12% and 9%, respectively) and in the prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and 7%, respectively). Unified multimodal transformer-based models may help streamline the triaging of patients and facilitate the clinical decision-making process.
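The mechanism the abstract describes — embedding layers that map image patches, text tokens and structured values into one shared token space, followed by attention over the whole sequence so that intramodal and intermodal interactions are learned jointly — can be sketched minimally in NumPy. Everything below is illustrative: the embedding dimension, patch size, single-head attention, mean pooling, and all function names are assumptions for exposition, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension (assumed; the paper's sizes differ)

def embed_image_patches(image, patch=8):
    # Split a (H, W) radiograph into flat patches and project each to d dims.
    H, W = image.shape
    patches = (image.reshape(H // patch, patch, W // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, patch * patch))
    W_img = rng.standard_normal((patch * patch, d)) * 0.02
    return patches @ W_img  # visual tokens

def embed_text(token_ids, vocab=100):
    # Lookup-table embedding for (sub)word ids of the chief complaint
    # and clinical history.
    table = rng.standard_normal((vocab, d)) * 0.02
    return table[token_ids]  # text tokens

def embed_structured(values):
    # One token per structured field (lab value, demographic feature):
    # scale a learned per-field vector by the normalized value.
    W_num = rng.standard_normal((len(values), d)) * 0.02
    return values[:, None] * W_num

def self_attention(X):
    # Single-head attention over ALL tokens at once: intramodal and
    # intermodal interactions both fall out of the same softmax(QK^T)V.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ V

image = rng.standard_normal((32, 32))   # toy stand-in for a radiograph
text_ids = np.array([5, 17, 42])        # toy chief-complaint token ids
labs = np.array([0.8, -1.2, 0.3])       # toy normalized lab values

# Unified sequence: 16 visual + 3 text + 3 structured tokens.
tokens = np.concatenate([embed_image_patches(image),
                         embed_text(text_ids),
                         embed_structured(labs)])
out = self_attention(tokens)
pooled = out.mean(axis=0)  # holistic representation for a diagnosis head
print(tokens.shape, pooled.shape)
```

The design point the sketch makes concrete is that once every modality is reduced to tokens of a common width, no modality-specific fusion module is needed: one attention stack sees the concatenated sequence, which is what the abstract means by processing multimodal input "in a unified manner".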

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • COVID-19 Testing
  • COVID-19* / diagnosis
  • Electric Power Supplies
  • Humans