Automatic Segmentation and Quantitative Assessment of Stroke Lesions on MR Images

Diagnostics (Basel). 2022 Aug 24;12(9):2055. doi: 10.3390/diagnostics12092055.

Abstract

Background: Lesion studies are crucial for establishing brain-behavior relationships, and accurately segmenting the lesion is the first step toward that goal. Manual lesion segmentation is the gold standard for chronic stroke, but it is labor-intensive, subject to bias, and limits sample size. Our objective was therefore to develop an automatic segmentation algorithm for chronic stroke lesions on T1-weighted MR images.

Methods: To train our model, we used the open-source ATLAS v2.0 dataset (Anatomical Tracings of Lesions After Stroke). We partitioned its 655 T1-weighted images with manual segmentation labels into five subsets and performed 5-fold cross-validation to avoid overfitting. A deep neural network (DNN) architecture was used for model training.

Results: To evaluate model performance, we used three metrics that capture distinct aspects of volumetric segmentation: shape, location, and size. The Dice similarity coefficient (DSC) measures the spatial overlap between manual and machine segmentations; the average DSC was 0.65 (95% bootstrapped CI: 0.61–0.67). The average symmetric surface distance (ASSD) measures contour distances between the two segmentations; the ASSD between manual and automatic segmentations was 12 mm. Finally, we compared total lesion volumes: the Pearson correlation coefficient (ρ) between manually and automatically segmented lesion volumes was 0.97 (p < 0.001).

Conclusions: We present the first automated segmentation model trained on a large multicentric dataset. This model will enable automated on-demand processing of MRI scans and quantitative assessment of chronic stroke lesions.
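The overlap and volume-agreement metrics reported above are straightforward to reproduce. The sketch below (not the authors' code) computes the Dice similarity coefficient on small synthetic binary masks and the Pearson correlation between manual and automatic lesion volumes using NumPy and SciPy; the mask shapes and volume values are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def dice_coefficient(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D masks standing in for manual vs. automatic segmentations.
manual = np.zeros((8, 8, 8), dtype=bool)
auto = np.zeros_like(manual)
manual[2:6, 2:6, 2:6] = True   # 64 voxels
auto[3:6, 2:6, 2:6] = True     # 48 voxels, fully inside the manual lesion

dsc = dice_coefficient(manual, auto)
print(round(dsc, 3))  # 2*48 / (64+48) = 0.857

# Volume agreement across a hypothetical cohort of
# (manual, automatic) lesion volumes, e.g. in mL.
manual_vols = [12.1, 45.0, 3.3, 78.6, 20.4]
auto_vols = [11.5, 47.2, 2.9, 75.0, 21.1]
r, p = pearsonr(manual_vols, auto_vols)
print(round(r, 3))
```

ASSD is omitted here because it requires extracting lesion surfaces and computing symmetric nearest-neighbor distances; libraries such as SimpleITK or MONAI provide ready-made implementations of that metric.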

Keywords: 3D-UNet; automatic segmentation; deep neural networks; lesion studies; precision medicine; stroke; vascular neurology.

Grants and funding

This research received no external funding.