A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets

Med Phys. 2017 Oct;44(10):5162-5171. doi: 10.1002/mp.12453. Epub 2017 Aug 12.

Abstract

Background: Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often limited by small datasets, long computation times, and the need for extensive image preprocessing.

Aims: We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pretrained convolutional neural networks (CNNs) and by using pre-existing handcrafted CADx features.

Materials & methods: We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. The methodology was tested on datasets from three clinical imaging modalities: dynamic contrast-enhanced MRI (DCE-MRI; 690 cases), full-field digital mammography (FFDM; 245 cases), and ultrasound (1125 cases).
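
A minimal Python sketch of this extraction-and-fusion idea is given below. The abstract does not specify the backbone, the pooled layers, or the fusion rule; the VGG19 backbone, global average pooling at the max-pool layers, and soft fusion by averaging two SVM scores used here are illustrative assumptions, not the paper's confirmed pipeline.

```python
# Sketch: pool low- to mid-level CNN features, then fuse with handcrafted
# radiomic features. Backbone, layer choice, and fusion rule are assumptions.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pretrained VGG19 feature extractor (assumed backbone), inference mode.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def cnn_features(image: torch.Tensor, layers=(4, 9, 18, 27, 36)) -> np.ndarray:
    """Globally average-pool feature maps at selected (max-pool) layers of
    VGG19 and concatenate them into one L2-normalized feature vector.
    `image` is a preprocessed (3, H, W) tensor."""
    feats, x = [], image.unsqueeze(0)               # -> (1, 3, H, W)
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:                         # pool H x W away -> (C,)
                feats.append(x.mean(dim=(2, 3)).squeeze(0))
    v = torch.cat(feats).numpy()
    return v / (np.linalg.norm(v) + 1e-12)

# Two classifiers, fit beforehand on training data: one on pooled CNN
# features, one on conventional handcrafted radiomic features.
cnn_clf = SVC(kernel="linear", probability=True)
rad_clf = SVC(kernel="linear", probability=True)

def fused_score(cnn_vec: np.ndarray, radiomic_vec: np.ndarray) -> float:
    """Soft fusion: average the two posterior estimates of malignancy
    (one simple fusion rule; not necessarily the one used in the paper)."""
    p_cnn = cnn_clf.predict_proba(cnn_vec.reshape(1, -1))[0, 1]
    p_rad = rad_clf.predict_proba(radiomic_vec.reshape(1, -1))[0, 1]
    return 0.5 * (p_cnn + p_rad)
```

Because the backbone is frozen and only the SVMs are trained, this kind of pipeline avoids full network training, which is consistent with the efficiency and small-dataset motivations stated above.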

Results: In ROC analysis of the task of distinguishing between malignant and benign lesions, our fusion-based method demonstrated statistically significant improvements in AUC over previous breast cancer CADx methods on all three imaging modalities (DCE-MRI: AUC = 0.89 [SE = 0.01]; FFDM: AUC = 0.86 [SE = 0.01]; ultrasound: AUC = 0.90 [SE = 0.01]).
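
For reference, one common way to obtain an AUC with a standard error like those reported above is nonparametric ROC analysis with bootstrap resampling; the sketch below assumes that estimator, although the abstract does not state which variance estimator (e.g., DeLong or a proper binormal model) was actually used.

```python
# Sketch: AUC with a bootstrap standard error, one common way to arrive at
# values like "AUC = 0.89 (SE = 0.01)". The paper's exact estimator is not
# stated in the abstract; this bootstrap approach is an assumption.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_se(y_true, y_score, n_boot=2000, seed=0):
    """Return (AUC, bootstrap standard error) for binary labels and scores."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    n, boots = len(y_true), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:         # AUC needs both classes present
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    se = float(np.std(boots, ddof=1)) if boots else float("nan")
    return auc, se
```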

Discussion/conclusion: We propose a novel breast CADx methodology that characterizes breast lesions more effectively than existing methods. Furthermore, the proposed methodology is computationally efficient and circumvents the need for extensive image preprocessing.

Keywords: breast cancer; deep learning; feature extraction.

MeSH terms

  • Breast Neoplasms / diagnostic imaging*
  • Diagnosis, Computer-Assisted / methods*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Neural Networks, Computer*
  • ROC Curve
  • Retrospective Studies