Attention-VGG16-UNet: a novel deep learning approach for automatic segmentation of the median nerve in ultrasound images

Quant Imaging Med Surg. 2022 Jun;12(6):3138-3150. doi: 10.21037/qims-21-1074.

Abstract

Background: Ultrasonography, an imaging technique that can display the anatomical cross-section of nerves and the surrounding tissue, is one of the most effective imaging methods for diagnosing nerve diseases. However, segmenting the median nerve in two-dimensional (2D) ultrasound images is challenging because the nerve is small and inconspicuous and the images suffer from low contrast and imaging noise. This study aimed to apply deep learning approaches to improve the accuracy of automatic segmentation of the median nerve in ultrasound images.

Methods: In this study, we proposed an improved network called VGG16-UNet, which consists of a contracting path and an expanding path. The contracting path is the VGG16 model with its three fully connected layers removed, and the expanding path resembles the upsampling path of U-Net. In addition, attention mechanisms and/or residual modules were added to U-Net and VGG16-UNet, yielding Attention-UNet (A-UNet), Summation-UNet (S-UNet), Attention-Summation-UNet (AS-UNet), Attention-VGG16-UNet (A-VGG16-UNet), Summation-VGG16-UNet (S-VGG16-UNet), and Attention-Summation-VGG16-UNet (AS-VGG16-UNet). Each model was trained on a dataset of 910 median nerve images from 19 participants and tested on 207 frames from a new image sequence. Model performance was evaluated with the Dice similarity coefficient (Dice), Jaccard similarity coefficient (Jaccard), Precision, and Recall. Based on the best segmentation results, we reconstructed a 3D median nerve image using the volume rendering method of the Visualization Toolkit (VTK) to assist in clinical nerve diagnosis.
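To make the architecture concrete, the following is a minimal PyTorch sketch of the A-VGG16-UNet idea: a VGG16 encoder with the fully connected layers dropped, a U-Net-style decoder, and additive attention gates on the skip connections. The layer splits, channel widths, gate design, and use of torchvision's vgg16 are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch (assumptions noted above); expects 3-channel inputs whose height
# and width are divisible by 16, so grayscale ultrasound frames would be replicated
# to 3 channels. Requires torchvision >= 0.13 for the weights=None keyword.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class AttentionGate(nn.Module):
    # Additive attention gate: the decoder feature g gates the encoder skip feature x.
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.wx = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, g, x):
        return x * self.psi(torch.relu(self.wg(g) + self.wx(x)))  # attention map in [0, 1]

class AttentionVGG16UNet(nn.Module):
    def __init__(self, out_channels=1):
        super().__init__()
        feats = vgg16(weights=None).features                 # VGG16 conv stages, FC layers removed
        self.enc1, self.enc2 = feats[:4], feats[4:9]          # 64- and 128-channel stages
        self.enc3, self.enc4 = feats[9:16], feats[16:23]      # 256- and 512-channel stages
        self.enc5 = feats[23:30]                              # 512-channel bottleneck
        self.up4 = nn.ConvTranspose2d(512, 512, 2, 2)
        self.att4, self.dec4 = AttentionGate(512, 512, 256), self._block(1024, 256)
        self.up3 = nn.ConvTranspose2d(256, 256, 2, 2)
        self.att3, self.dec3 = AttentionGate(256, 256, 128), self._block(512, 128)
        self.up2 = nn.ConvTranspose2d(128, 128, 2, 2)
        self.att2, self.dec2 = AttentionGate(128, 128, 64), self._block(256, 64)
        self.up1 = nn.ConvTranspose2d(64, 64, 2, 2)
        self.att1, self.dec1 = AttentionGate(64, 64, 32), self._block(128, 64)
        self.head = nn.Conv2d(64, out_channels, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        s1 = self.enc1(x); s2 = self.enc2(s1); s3 = self.enc3(s2)  # encoder skips
        s4 = self.enc4(s3); b = self.enc5(s4)                      # bottleneck at 1/16 resolution
        d4 = self.up4(b);  d4 = self.dec4(torch.cat([self.att4(d4, s4), d4], 1))
        d3 = self.up3(d4); d3 = self.dec3(torch.cat([self.att3(d3, s3), d3], 1))
        d2 = self.up2(d3); d2 = self.dec2(torch.cat([self.att2(d2, s2), d2], 1))
        d1 = self.up1(d2); d1 = self.dec1(torch.cat([self.att1(d1, s1), d1], 1))
        return torch.sigmoid(self.head(d1))                        # per-pixel nerve probability

# Example: AttentionVGG16UNet()(torch.randn(1, 3, 256, 256)) returns a (1, 1, 256, 256) probability map.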

Results: Paired t-tests showed significant differences (P<0.01) in the metric values of the different models. AS-UNet ranked first among the U-Net-based models. The VGG16-UNet and its variants performed better than the corresponding U-Net models. Furthermore, models with the attention mechanism outperformed those with the residual module, whether based on U-Net or on VGG16-UNet. The A-VGG16-UNet achieved the best performance (Dice =0.904±0.035, Jaccard =0.826±0.057, Precision =0.905±0.061, and Recall =0.909±0.061). Finally, we applied the trained A-VGG16-UNet to segment the median nerve in the image sequence, then reconstructed and visualized the 3D image of the median nerve.
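For reference, the four reported metrics are standard overlap measures between a predicted binary mask and the ground-truth mask. The sketch below computes them with NumPy, assuming the network output has already been thresholded to a binary mask; it is illustrative rather than the authors' evaluation code.

import numpy as np

def overlap_metrics(pred, gt, eps=1e-7):
    # pred, gt: binary masks of the same shape (e.g., thresholded network output vs. annotation).
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # correctly segmented nerve pixels
    fp = np.logical_and(pred, ~gt).sum()   # pixels segmented as nerve but not annotated
    fn = np.logical_and(~pred, gt).sum()   # annotated nerve pixels that were missed
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, jaccard, precision, recall

Per-frame values of this kind would then be averaged over the 207 test frames to obtain mean ± standard deviation figures such as those reported above.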

Conclusions: This study demonstrates that the attention mechanism and the residual module improve deep learning models for segmenting ultrasound images. The proposed VGG16-UNet-based models performed better than the U-Net-based models. Based on the segmentation results, a 3D median nerve image can be reconstructed to provide a visual reference for nerve diagnosis.
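To illustrate the reconstruction step, the following is a minimal VTK (Python) volume-rendering sketch that stacks per-frame binary masks into a 3D volume and displays it interactively. The class choices, transfer-function values, and the spacing parameter are assumptions for illustration, not the authors' exact pipeline.

# Minimal VTK volume-rendering sketch for a stack of binary segmentation masks.
import numpy as np
import vtk
from vtk.util import numpy_support

def render_volume(masks, spacing=(1.0, 1.0, 1.0)):
    # masks: numpy array of shape (n_slices, height, width) with values 0/1.
    vol = (masks * 255).astype(np.uint8)
    nz, ny, nx = vol.shape

    image = vtk.vtkImageData()
    image.SetDimensions(nx, ny, nz)
    image.SetSpacing(*spacing)                     # in-plane pixel size and inter-slice step
    scalars = numpy_support.numpy_to_vtk(vol.ravel(), deep=True,
                                         array_type=vtk.VTK_UNSIGNED_CHAR)
    image.GetPointData().SetScalars(scalars)

    mapper = vtk.vtkSmartVolumeMapper()
    mapper.SetInputData(image)

    color = vtk.vtkColorTransferFunction()
    color.AddRGBPoint(0, 0.0, 0.0, 0.0)
    color.AddRGBPoint(255, 1.0, 0.9, 0.6)
    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(0, 0.0)                       # background fully transparent
    opacity.AddPoint(255, 0.8)                     # nerve voxels mostly opaque

    prop = vtk.vtkVolumeProperty()
    prop.SetColor(color)
    prop.SetScalarOpacity(opacity)
    prop.ShadeOn()

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(prop)

    renderer = vtk.vtkRenderer()
    renderer.AddVolume(volume)
    renderer.SetBackground(0.1, 0.1, 0.1)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    window.SetSize(800, 600)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()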

Keywords: Deep learning; attention mechanism; automatic ultrasound image segmentation; median nerve; residual module.