A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound

Comput Biol Med. 2020 Nov;126:104036. doi: 10.1016/j.compbiomed.2020.104036. Epub 2020 Oct 8.

Abstract

The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture using CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. Through transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model for further use in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures on a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired with seven ultrasound machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s>0.90 and IoU>0.81. For U-Net, the segmentation performance is F1s=0.89 and IoU=0.80, whereas FCN-AlexNet attains the lowest results, with F1s=0.84 and IoU=0.73. In particular, DeepLabV3+ with ResNet18 obtains F1s=0.905 and IoU=0.827 and requires the least training time among the SegNet and DeepLabV3+ networks. Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, which is intended to enable fair comparison with other CNN-based segmentation approaches for BUS images.
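For clarity, the two reported metrics can be computed per image from the binary (normal vs. tumoral) masks. The following is a minimal sketch, assuming NumPy boolean masks; the function name and the toy masks are illustrative, not taken from the released models:

    import numpy as np

    def f1_and_iou(pred, truth):
        """F1-score (Dice) and IoU for binary segmentation masks.

        pred, truth: boolean arrays of the same shape, True = tumoral pixel.
        """
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()   # true positives
        fp = np.logical_and(pred, ~truth).sum()  # false positives
        fn = np.logical_and(~pred, truth).sum()  # false negatives
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
        iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
        return f1, iou

    # Toy example: a 2x2 true tumor and a prediction shifted by one pixel.
    truth = np.zeros((4, 4), dtype=bool)
    truth[1:3, 1:3] = True
    pred = np.zeros((4, 4), dtype=bool)
    pred[1:3, 2:4] = True
    f1, iou = f1_and_iou(pred, truth)
    print(f"F1 = {f1:.3f}, IoU = {iou:.3f}")  # F1 = 0.500, IoU = 0.333

For binary masks the two metrics are related by IoU = F1s/(2 - F1s); for example, F1s=0.905 yields IoU≈0.826, consistent with the paired values reported above.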

Keywords: Breast tumors; Breast ultrasound; Convolutional neural networks; Semantic segmentation; Transfer learning.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Breast Neoplasms* / diagnostic imaging
  • Female
  • Humans
  • Image Processing, Computer-Assisted
  • Neural Networks, Computer
  • Reproducibility of Results
  • Semantics*