Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks

Comput Methods Programs Biomed. 2018 Aug;162:221-231. doi: 10.1016/j.cmpb.2018.05.027. Epub 2018 May 19.

Abstract

Background and objective: Automatic segmentation of skin lesions in dermoscopy images is still a challenging task due to the large shape variations and indistinct boundaries of the lesions. Accurate segmentation of skin lesions is a key prerequisite step for any computer-aided diagnostic system to recognize skin melanoma.

Methods: In this paper, we propose a novel segmentation methodology via full resolution convolutional networks (FrCN). The proposed FrCN method directly learns the full resolution features of each individual pixel of the input data without the need for pre- or post-processing operations such as artifact removal, low contrast adjustment, or further enhancement of the segmented skin lesion boundaries. We evaluated the proposed method on two publicly available databases, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets, and compared its segmentation performance with that of recent deep learning segmentation approaches, namely the fully convolutional network (FCN), U-Net, and SegNet.
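
As a rough illustration (not the authors' exact FrCN architecture), a segmentation network can preserve the input's full spatial resolution by using only stride-1 convolutions with "same" padding and no pooling layers, so each pixel retains its own feature vector through to the output mask. The PyTorch sketch below assumes an RGB dermoscopy input and a single-channel lesion logit map; layer counts and filter sizes are illustrative only.

```python
import torch
import torch.nn as nn

class FullResolutionSegNet(nn.Module):
    """Illustrative full-resolution segmentation network (not the paper's FrCN):
    all convolutions use stride 1 and padding 1, and no pooling is applied,
    so the output mask has the same height and width as the input image."""

    def __init__(self, in_channels: int = 3, num_filters: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, num_filters, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_filters, num_filters, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_filters, num_filters, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution maps each pixel's features to a lesion/background logit.
        self.classifier = nn.Conv2d(num_filters, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))  # logits, same spatial size as x

# Example: a 3x512x512 dermoscopy image yields a 1x512x512 map of logits.
model = FullResolutionSegNet()
logits = model(torch.randn(1, 3, 512, 512))
print(logits.shape)  # torch.Size([1, 1, 512, 512])
```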

Results: Our results showed that the proposed FrCN method segmented the skin lesions with an average Jaccard index of 77.11% and an overall segmentation accuracy of 94.03% for the ISBI 2017 test dataset and 84.79% and 95.08%, respectively, for the PH2 dataset. In comparison to FCN, U-Net, and SegNet, the proposed FrCN outperformed them by 4.94%, 15.47%, and 7.48% for the Jaccard index and 1.31%, 3.89%, and 2.27% for the segmentation accuracy, respectively. Furthermore, the proposed FrCN achieved a segmentation accuracy of 95.62% for some representative clinical benign cases, 90.78% for the melanoma cases, and 91.29% for the seborrheic keratosis cases in the ISBI 2017 test dataset, exhibiting better performance than those of FCN, U-Net, and SegNet.
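
For reference, the Jaccard index (intersection over union) and pixel-wise segmentation accuracy reported above can be computed from binary masks as in the following NumPy sketch; variable names are illustrative and not taken from the paper.

```python
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary lesion masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection) / union if union > 0 else 1.0

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred.astype(bool) == target.astype(bool)).mean())

# Toy example with two 4x4 masks.
pred = np.array([[0, 1, 1, 0]] * 4)
gt   = np.array([[0, 1, 0, 0]] * 4)
print(jaccard_index(pred, gt))   # 0.5
print(pixel_accuracy(pred, gt))  # 0.75
```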

Conclusions: We conclude that using the full spatial resolution of the input image enables the network to learn more specific and prominent features, leading to improved segmentation performance.

Keywords: Deep learning; Dermoscopy; Full resolution convolutional network (FrCN); Melanoma; Skin lesion segmentation.

MeSH terms

  • Algorithms
  • Artifacts
  • Dermoscopy*
  • Diagnosis, Computer-Assisted
  • Humans
  • Image Processing, Computer-Assisted
  • Machine Learning
  • Melanoma / diagnostic imaging*
  • Melanoma, Cutaneous Malignant
  • Neural Networks, Computer
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Skin Diseases / diagnostic imaging*
  • Skin Neoplasms / diagnostic imaging*