Accurate detection and grading of pterygium through smartphone by a fusion training model

Br J Ophthalmol. 2024 Feb 21;108(3):336-342. doi: 10.1136/bjo-2022-322552.

Abstract

Background/aims: To improve the accuracy of smartphone-based pterygium screening and detection, we established a fusion training model by blending a large slit-lamp image dataset with a small proportion of smartphone images.

Method: Two datasets were used: a slit-lamp image dataset containing 20 987 images and a smartphone-based image dataset containing 1094 images. The RFRC model (Faster RCNN based on ResNet101) was used for detection, and the SRU-Net model (U-Net based on SE-ResNeXt50) for segmentation. An OpenCV-based algorithm measured the width, length and area of the pterygium on the cornea.
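The abstract states only that an OpenCV-based algorithm measures pterygium width, length and area on the cornea; it does not give the implementation. The following is a minimal Python sketch of one plausible way to derive such measurements from binary segmentation masks, assuming the function name measure_pterygium, the use of bounding boxes and a minimum enclosing circle, and the normalisation choices, none of which are specified in the paper.

```python
import cv2
import numpy as np


def measure_pterygium(pterygium_mask, cornea_mask):
    """Estimate pterygium extent relative to the cornea from binary masks.

    Both inputs are uint8 arrays (0 = background, 255 = foreground) of the
    same shape. The metric definitions here are illustrative assumptions,
    not the paper's exact algorithm.
    """
    # Largest connected contour of the predicted pterygium region
    contours, _ = cv2.findContours(pterygium_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pter = max(contours, key=cv2.contourArea)

    # Cornea geometry: a minimum enclosing circle serves as the size reference
    cornea_contours, _ = cv2.findContours(cornea_mask, cv2.RETR_EXTERNAL,
                                          cv2.CHAIN_APPROX_SIMPLE)
    cornea = max(cornea_contours, key=cv2.contourArea)
    (_, _), cornea_radius = cv2.minEnclosingCircle(cornea)

    # Bounding box of the pterygium: horizontal extent as invasion length,
    # vertical extent as width along the limbus (assumed axis convention)
    _, _, w, h = cv2.boundingRect(pter)
    area_ratio = cv2.contourArea(pter) / max(cv2.contourArea(cornea), 1.0)

    return {
        "length_px": w,                           # invasion toward corneal centre (pixels)
        "width_px": h,                            # extent along the limbus (pixels)
        "area_ratio": area_ratio,                 # pterygium area / cornea area
        "length_norm": w / (2 * cornea_radius),   # normalised by cornea diameter
    }
```

Normalising by the cornea diameter, as in the last field, is one way to make pixel measurements comparable across slit-lamp and smartphone images with different magnifications; the paper does not state which normalisation it uses.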

Results: The detection model (trained on slit-lamp images) achieved a mean accuracy of 95.24%. The fusion segmentation model (trained on smartphone and slit-lamp images) achieved a microaverage F1 score of 0.8981, sensitivity of 0.8709, specificity of 0.9668 and area under the curve (AUC) of 0.9295. In the same group of patients, the fusion model's performance on smartphone-based images (F1 score of 0.9313, sensitivity of 0.9360, specificity of 0.9613, AUC of 0.9426, accuracy of 92.38%) was close to that of the slit-lamp-trained model on slit-lamp images (F1 score of 0.9448, sensitivity of 0.9165, specificity of 0.9689, AUC of 0.9569, accuracy of 94.29%).
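The abstract reports micro-average F1, sensitivity, specificity and AUC but not how they were computed. The sketch below shows a generic way such multi-class metrics are typically derived with scikit-learn; the function name evaluate_grading, the one-vs-rest averaging of sensitivity and specificity, and the probability-based AUC are assumptions, not the paper's evaluation code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score


def evaluate_grading(y_true, y_pred, y_score):
    """Illustrative computation of micro-F1, sensitivity, specificity and AUC.

    y_true / y_pred hold integer class labels (e.g. pterygium grades);
    y_score is an (n_samples, n_classes) array of predicted probabilities.
    """
    micro_f1 = f1_score(y_true, y_pred, average="micro")

    # Sensitivity and specificity averaged one-vs-rest across classes
    sens, spec = [], []
    for c in np.unique(y_true):
        tn, fp, fn, tp = confusion_matrix(y_true == c, y_pred == c).ravel()
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))

    # One-vs-rest multi-class AUC from the predicted probabilities
    auc = roc_auc_score(y_true, y_score, multi_class="ovr")
    return micro_f1, float(np.mean(sens)), float(np.mean(spec)), auc
```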

Conclusion: Our fusion model achieved high pterygium detection and grading accuracy despite limited smartphone data; its performance is comparable to that of experienced ophthalmologists and holds across different smartphone brands.

Keywords: conjunctiva; imaging; ocular surface; telemedicine.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Conjunctiva / abnormalities*
  • Cornea
  • Humans
  • Pterygium* / diagnosis
  • Slit Lamp
  • Smartphone*

Supplementary concepts

  • Pterygium Of Conjunctiva And Cornea