Interactive segmentation of medical images using deep learning

Phys Med Biol. 2024 Feb 5;69(4). doi: 10.1088/1361-6560/ad1cf8.

Abstract

Deep-learning-based medical image segmentation algorithms have achieved good results in recent years, but they require large amounts of labeled data. Pixel-level labeling of a medical image typically means marking tens or even hundreds of points along a target's edge, which is costly in time and labor. To reduce the labeling cost, we use a click-based interactive segmentation method to generate high-quality segmentation labels. However, current interactive segmentation algorithms fuse the user's click information with the image features only at the input of the backbone network (so-called early fusion). With early fusion alone, the sparse interaction signal is diluted as it passes through the network. Furthermore, these algorithms do not take the boundary problem into account, which degrades model performance. We therefore propose a combined early- and late-fusion strategy that prevents the interaction information from being diluted prematurely and makes better use of it. We also propose a decoupled head structure that extracts image boundary information and, together with a boundary loss function, establishes a boundary constraint term, so that the network pays more attention to boundaries and further improves performance. Finally, we conduct experiments on three medical datasets (CHAOS, VerSe and Uterine Myoma MRI) to verify the effectiveness of our network. The results show that our network improves substantially over the baseline, with NoC@80 (the number of interactive clicks needed to reach the 80% IoU threshold) improved by 0.1, 0.1, and 0.2 on the three datasets. In particular, we achieve a NoC@80 of 1.69 on CHAOS. Manual annotation takes about 25 min per case (Uterine Myoma MRI), whereas annotating a medical image with our method requires only 2 or 3 clicks, saving more than 50% of the annotation cost.
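For illustration only, the following is a minimal PyTorch-style sketch of the early/late fusion and decoupled-head idea summarized above. The module names, channel counts, and the two-channel positive/negative click-map encoding are assumptions for the sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyLateFusionSeg(nn.Module):
    """Sketch: fuse user-click maps with the image both before the backbone
    (early fusion) and again at the feature level (late fusion), then predict
    the mask and the boundary with a decoupled head."""

    def __init__(self, backbone: nn.Module, feat_ch: int = 256, num_classes: int = 1):
        super().__init__()
        # Early fusion: project image (3 ch) + positive/negative click maps (2 ch)
        # down to the 3-channel input the backbone expects.
        self.early_fuse = nn.Conv2d(3 + 2, 3, kernel_size=1)
        self.backbone = backbone  # any encoder returning a (B, feat_ch, h, w) feature map
        # Late fusion: re-inject downsampled click maps into the deep features so the
        # sparse interaction signal is not diluted by the backbone.
        self.late_fuse = nn.Conv2d(feat_ch + 2, feat_ch, kernel_size=1)
        # Decoupled head: separate branches for the segmentation mask and the boundary,
        # so a boundary loss can constrain the boundary branch directly.
        self.mask_head = nn.Conv2d(feat_ch, num_classes, kernel_size=1)
        self.boundary_head = nn.Conv2d(feat_ch, 1, kernel_size=1)

    def forward(self, image: torch.Tensor, clicks: torch.Tensor):
        # image: (B, 3, H, W); clicks: (B, 2, H, W) positive/negative click maps
        x = self.early_fuse(torch.cat([image, clicks], dim=1))
        feats = self.backbone(x)
        clicks_small = F.interpolate(clicks, size=feats.shape[-2:],
                                     mode="bilinear", align_corners=False)
        feats = self.late_fuse(torch.cat([feats, clicks_small], dim=1))
        return self.mask_head(feats), self.boundary_head(feats)
```

A training step would then combine a standard segmentation loss on the mask logits with a boundary loss (e.g. binary cross-entropy against edges extracted from the ground-truth mask) on the boundary logits; the exact loss weighting used in the paper is not specified here.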

Keywords: deep learning; interactive segmentation; medical images.

MeSH terms

  • Algorithms
  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Myoma*
  • Time