Multi-model Deep Learning approach for segmentation of teeth and periapical lesions on Pantomographs

Oral Surg Oral Med Oral Pathol Oral Radiol. 2023 Nov 26:S2212-4403(23)00741-1. doi: 10.1016/j.oooo.2023.11.006. Online ahead of print.

Abstract

Introduction: The fields of medicine and dentistry are beginning to integrate Artificial Intelligence (AI) into diagnostics. This may reduce subjectivity and improve the accuracy of diagnoses and treatment planning. Current evidence on pathosis detection on Pantomographs (PGs) indicates only the presence or absence of disease in the entire radiographic image, with little evidence relating periapical pathosis to the causative tooth.

Objective: To develop a Deep Learning (DL) AI model for the segmentation of periapical pathosis and its relation to teeth on PGs.

Method: A total of 250 PGs were manually annotated by subject experts to establish the ground truth for training AI algorithms on the segmentation of periapical pathosis. Two approaches were used for lesion detection: Multi-models 1 and 2, using the U-net and Mask RCNN algorithms, respectively. The lesion segmentations generated on the testing data set were superimposed with the output of separately trained teeth segmentation and numbering algorithms to relate lesions to their causative teeth. Hence, both multi-model approaches related periapical pathosis to the causative teeth on PGs.
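The superimposition step described above can be sketched as a mask-overlap assignment: each segmented lesion is attributed to the numbered tooth whose mask it overlaps most. This is a minimal illustrative sketch, not the authors' implementation; the function name, mask encodings, and overlap criterion are assumptions.

```python
import numpy as np

def relate_lesions_to_teeth(lesion_mask, tooth_masks):
    """Assign each lesion instance to the tooth it overlaps most.

    lesion_mask: 2-D int array; 0 = background, k > 0 = lesion instance k
                 (e.g. the output of an instance-segmentation model).
    tooth_masks: dict mapping tooth number -> 2-D boolean mask, as produced
                 by separate teeth segmentation and numbering models.
    Returns a dict mapping lesion id -> (tooth number, overlap in pixels).
    """
    relations = {}
    for lesion_id in np.unique(lesion_mask):
        if lesion_id == 0:  # skip background
            continue
        lesion = lesion_mask == lesion_id
        best_tooth, best_overlap = None, 0
        for tooth_num, tooth in tooth_masks.items():
            overlap = int(np.logical_and(lesion, tooth).sum())
            if overlap > best_overlap:
                best_tooth, best_overlap = tooth_num, overlap
        relations[int(lesion_id)] = (best_tooth, best_overlap)
    return relations
```

In practice a periapical lesion sits at the root apex and may only partly overlap the tooth mask, so a real pipeline might instead use proximity to the root apex; maximum pixel overlap is used here purely for simplicity.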

Results: The performance metrics of lesion segmentation carried out by U-net were as follows: accuracy = 98.1%, precision = 84.5%, recall = 80.3%, F1 score = 82.2%, Dice index = 75.2%, and Intersection over Union (IoU) = 67.6%. Mask RCNN carried out lesion segmentation with an accuracy of 46.7%, precision of 80.6%, recall of 55%, and F1 score of 63.1%.
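The metrics above can be computed from pixel-wise confusion counts between the predicted and ground-truth masks. The sketch below, an assumption about how such metrics are typically derived rather than the authors' evaluation code, shows the standard definitions; note that for strictly pixel-wise binary masks the F1 score and Dice index coincide, so the paper's distinct F1 and Dice values presumably reflect different aggregation levels (e.g. per-lesion detection versus per-pixel overlap).

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Standard pixel-wise metrics for binary segmentation masks.

    pred, gt: boolean arrays of identical shape (predicted and
    ground-truth lesion masks).
    """
    tp = int(np.logical_and(pred, gt).sum())        # lesion pixels found
    fp = int(np.logical_and(pred, ~gt).sum())       # false lesion pixels
    fn = int(np.logical_and(~pred, gt).sum())       # missed lesion pixels
    tn = int(np.logical_and(~pred, ~gt).sum())      # correct background
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),
    }
```

The very high U-net accuracy (98.1%) alongside a lower Dice index (75.2%) is expected with these definitions: lesions occupy a small fraction of a PG, so the true-negative background dominates accuracy while Dice and IoU score only the lesion overlap.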

Conclusion: In this study, the multi-model approach successfully related periapical pathosis to the causative tooth on PGs. However, U-net outperformed Mask RCNN on the tasks performed, suggesting that U-net will remain the standard for medical image segmentation tasks. Further training of the models on additional findings and a larger number of images should lead to automated detection of common radiographic findings in the dental diagnostic workflow.