Deep learning radiopathomics based on preoperative US images and biopsy whole slide images can distinguish between luminal and non-luminal tumors in early-stage breast cancers

EBioMedicine. 2023 Aug:94:104706. doi: 10.1016/j.ebiom.2023.104706. Epub 2023 Jul 19.

Abstract

Background: For patients with early-stage breast cancers, neoadjuvant treatment is recommended for non-luminal tumors rather than luminal tumors. Preoperative differentiation between luminal and non-luminal cancers at early stages would therefore facilitate treatment decision making. However, the molecular immunohistochemical subtypes determined from biopsy specimens are not always consistent with the final results from surgical specimens, owing to high intra-tumoral heterogeneity. We therefore aimed to develop and validate a deep learning radiopathomics (DLRP) model to preoperatively distinguish between luminal and non-luminal breast cancers at early stages based on preoperative ultrasound (US) images and hematoxylin and eosin (H&E)-stained biopsy slides.

Methods: This multicentre study included three cohorts from a prospective study conducted by our team and registered with the Chinese Clinical Trial Registry (ChiCTR1900027497). Between January 2019 and August 2021, 1809 US images and 603 H&E-stained whole slide images (WSIs) were obtained from 603 patients with early-stage breast cancers. A ResNet18 model pre-trained on ImageNet and a multi-instance learning-based attention model were used to extract features from the US images and WSIs, respectively. A US-guided co-attention (UCA) module was designed to fuse the US and WSI features. The DLRP model was constructed from these three feature sets (deep learning US features, deep learning WSI features, and UCA-fused features) using a training cohort (1467 US images and 489 WSIs from 489 patients). The DLRP model's diagnostic performance was validated in an internal validation cohort (342 US images and 114 WSIs from 114 patients) and an external test cohort (270 US images and 90 WSIs from 90 patients). We also compared the diagnostic efficacy of the DLRP model with that of a deep learning radiomics model and a deep learning pathomics model in the external test cohort.
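The sketch below (PyTorch) illustrates the three feature streams named in the Methods: a ResNet18 encoder for US images, attention-based multi-instance pooling over WSI patch embeddings, and a co-attention-style fusion block standing in for the UCA module. The specific layer shapes, the form of the attention-MIL pooling, the fusion equation, and the classifier head are assumptions for illustration only; the abstract does not specify the authors' exact architectures.

```python
# Minimal sketch of the DLRP feature streams described above.
# AttentionMIL and UCAFusion are illustrative approximations, not the paper's exact modules.
import torch
import torch.nn as nn
from torchvision import models


class AttentionMIL(nn.Module):
    """Attention-based multi-instance pooling over WSI patch embeddings (assumed form)."""
    def __init__(self, in_dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, patches):                        # patches: (n_patches, in_dim)
        w = torch.softmax(self.attn(patches), dim=0)   # per-patch attention weights
        return (w * patches).sum(dim=0)                # slide-level feature, (in_dim,)


class UCAFusion(nn.Module):
    """US-guided co-attention: the US feature re-weights the WSI feature (assumed form)."""
    def __init__(self, dim=512):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # query from US feature
        self.k = nn.Linear(dim, dim)   # key from WSI feature
        self.v = nn.Linear(dim, dim)   # value from WSI feature

    def forward(self, us_feat, wsi_feat):
        gate = torch.sigmoid(self.q(us_feat) * self.k(wsi_feat) / us_feat.shape[-1] ** 0.5)
        return gate * self.v(wsi_feat)                 # fused feature, (dim,)


class DLRP(nn.Module):
    """Binary classifier (luminal vs non-luminal) over US, WSI and UCA-fused features."""
    def __init__(self, dim=512, n_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.us_encoder = nn.Sequential(*list(backbone.children())[:-1])  # 512-d US feature
        self.mil = AttentionMIL(in_dim=dim)
        self.uca = UCAFusion(dim=dim)
        self.head = nn.Linear(dim * 3, n_classes)

    def forward(self, us_image, wsi_patch_feats):
        # us_image: (1, 3, H, W); wsi_patch_feats: (n_patches, 512) pre-extracted patch embeddings
        us_feat = self.us_encoder(us_image).flatten(1).squeeze(0)   # (512,)
        wsi_feat = self.mil(wsi_patch_feats)                        # (512,)
        fused = self.uca(us_feat, wsi_feat)                         # (512,)
        return self.head(torch.cat([us_feat, wsi_feat, fused], dim=-1))
```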

Findings: The DLRP model yielded high performance, with area under the curve (AUC) values of 0.929 (95% CI 0.865-0.968) in the internal validation cohort and 0.900 (95% CI 0.819-0.953) in the external test cohort. In the external test cohort, the DLRP model also outperformed the deep learning radiomics model based on US images alone (AUC 0.815 [0.719-0.889], p = 0.027) and the deep learning pathomics model based on WSIs alone (AUC 0.802 [0.704-0.878], p = 0.013).

Interpretation: The DLRP model can effectively distinguish between luminal and non-luminal breast cancers at early stages before surgery, based on pretherapeutic US images and H&E-stained biopsy WSIs, providing a tool to facilitate treatment decision making in early-stage breast cancer.

Funding: Natural Science Foundation of Guangdong Province (No. 2023A1515011564), and National Natural Science Foundation of China (No. 91959127; No. 81971631).

Keywords: Breast cancer; Deep learning; Ultrasound; Whole slide imaging.

Publication types

  • Multicenter Study

MeSH terms

  • Biopsy
  • Breast Neoplasms* / diagnostic imaging
  • Deep Learning*
  • Female
  • Humans
  • Prospective Studies
  • Ultrasonography
