BI-RADS-Based Classification of Mammographic Soft Tissue Opacities Using a Deep Convolutional Neural Network

Diagnostics (Basel). 2022 Jun 28;12(7):1564. doi: 10.3390/diagnostics12071564.

Abstract

The aim of this study was to investigate the potential of a machine learning algorithm to classify breast cancer solely by the presence of soft tissue opacities in mammograms, independent of other morphological features, using a deep convolutional neural network (dCNN). Soft tissue opacities were classified based on their radiological appearance using the ACR BI-RADS atlas. We included 1744 mammograms from 438 patients and created 7242 icons by manual labeling. The icons were sorted into three categories: "no opacities" (BI-RADS 1), "probably benign opacities" (BI-RADS 2/3) and "suspicious opacities" (BI-RADS 4/5). A dCNN was trained (70% of data), validated (20%) and finally tested (10%). A sliding-window approach was applied to create colored probability maps for visual interpretation. Diagnostic performance of the dCNN was compared to human readout by experienced radiologists on a "real-world" dataset. The accuracies of the models on the test dataset ranged between 73.8% and 89.8%. Compared to human readout, our dCNN achieved a higher specificity (100%, 95% CI: 85.4-100%; reader 1: 86.2%, 95% CI: 67.4-95.5%; reader 2: 79.3%, 95% CI: 59.7-91.3%), while its sensitivity (84.0%, 95% CI: 63.9-95.5%) was lower than that of the human readers (reader 1: 88.0%, 95% CI: 67.4-95.4%; reader 2: 88.0%, 95% CI: 67.7-96.8%). In conclusion, a dCNN can be used for the automatic detection as well as the standardized and observer-independent classification of soft tissue opacities in mammograms, independent of the presence of microcalcifications. Human decision making in accordance with the BI-RADS classification can be mimicked by artificial intelligence.
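The sliding-window probability-map approach mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, stride, three-class layout (BI-RADS 1, 2/3, 4/5) and the stand-in `classify_patch` function are assumptions, since the trained dCNN itself is not part of the abstract. A window is slid over the mammogram, the per-patch class probabilities are accumulated per pixel, and the averaged result forms the map that can be rendered as a colored overlay.

```python
import numpy as np

PATCH = 32     # assumed icon/patch edge length in pixels
STRIDE = 16    # assumed sliding-window stride
N_CLASSES = 3  # "no opacities", "probably benign", "suspicious"


def classify_patch(patch: np.ndarray) -> np.ndarray:
    """Stand-in for the trained dCNN: returns a probability vector
    over the three opacity classes (uniform here for illustration)."""
    return np.full(N_CLASSES, 1.0 / N_CLASSES)


def probability_map(image: np.ndarray) -> np.ndarray:
    """Slide a window across the image, accumulate class probabilities
    per pixel, and return an (H, W, N_CLASSES) averaged map suitable
    for rendering as a colored overlay."""
    h, w = image.shape
    probs = np.zeros((h, w, N_CLASSES))
    counts = np.zeros((h, w, 1))
    for y in range(0, h - PATCH + 1, STRIDE):
        for x in range(0, w - PATCH + 1, STRIDE):
            p = classify_patch(image[y:y + PATCH, x:x + PATCH])
            probs[y:y + PATCH, x:x + PATCH] += p
            counts[y:y + PATCH, x:x + PATCH] += 1
    # Avoid division by zero at pixels no window covered.
    return probs / np.maximum(counts, 1)


if __name__ == "__main__":
    mammogram = np.random.rand(128, 128)  # synthetic placeholder image
    pmap = probability_map(mammogram)
    print(pmap.shape)  # (128, 128, 3)
```

In practice the `classify_patch` call would invoke the trained network on each icon-sized window, and the per-class channels of the resulting map would be mapped to colors for visual review.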

Keywords: artificial intelligence; breast neoplasms; computer; machine learning; mammography; neural networks.