DETECTION AND LOCALIZATION OF RETINAL BREAKS IN ULTRAWIDEFIELD FUNDUS PHOTOGRAPHY USING A YOLO v3 ARCHITECTURE-BASED DEEP LEARNING MODEL

Retina. 2022 Oct 1;42(10):1889-1896. doi: 10.1097/IAE.0000000000003550.

Abstract

Purpose: We aimed to develop a deep learning model for detecting and localizing retinal breaks in ultrawidefield fundus (UWF) images.

Methods: We retrospectively enrolled treatment-naive patients diagnosed with a retinal break or rhegmatogenous retinal detachment and for whom UWF images were available. The model was developed on a YOLO v3 architecture backbone using transfer learning. Model performance was evaluated by per-image classification and per-object detection.
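The abstract does not specify how per-object detections were aggregated into a per-image label, but a common rule is to call an image positive when any detected break exceeds a confidence threshold. The sketch below illustrates that hypothetical aggregation; the function name, threshold value, and data layout are illustrative assumptions, not the authors' method.

```python
# Hypothetical aggregation rule: per-object detections -> per-image label.
# Each image's detections are represented as a list of confidence scores
# in [0, 1] produced by the object detector.

def classify_image(confidences, threshold=0.5):
    """Label an image positive if any detection clears the threshold
    (assumed rule; the paper does not state its aggregation scheme)."""
    return any(c >= threshold for c in confidences)

# Toy batch: two images with candidate breaks, one with no detections.
batch = {
    "img_a": [0.91, 0.34],  # one confident detection -> positive
    "img_b": [0.12],        # all below threshold -> negative
    "img_c": [],            # no detections -> negative
}
labels = {name: classify_image(dets) for name, dets in batch.items()}
print(labels)
```

Under this rule, the per-image operating point is controlled entirely by the detection confidence threshold, which is why the abstract can report both a detection F1 and an image-level ROC curve from the same model.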

Results: Overall, 4,505 UWF images from 940 patients were used in the current study; of these, 306 UWF images from 84 patients formed the test set. For per-object detection, the average precision of the model, considering every retinal break, was 0.840. At the best threshold, the overall precision, recall, and F1 score were 0.6800, 0.9189, and 0.7816, respectively. For per-image classification, the model showed an area under the receiver operating characteristic curve of 0.957 on the test set, with an overall accuracy, sensitivity, and specificity of 0.9085, 0.8966, and 0.9158, respectively.
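The reported F1 score is internally consistent with the reported precision and recall, since F1 is their harmonic mean. A quick check using the values from the Results:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
# Values are taken directly from the Results above.
precision, recall = 0.6800, 0.9189
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7816, matching the reported F1 score
```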

Conclusion: The UWF image-based deep learning model evaluated in the current study performed well in detecting and localizing retinal breaks.

MeSH terms

  • Deep Learning*
  • Eye Diseases*
  • Fundus Oculi
  • Humans
  • Photography / methods
  • Retinal Perforations* / diagnosis
  • Retrospective Studies
  • Sensitivity and Specificity