Clinical Annotation and Segmentation Tool (CAST) Implementation for Dental Diagnostics

Cureus. 2023 Nov 13;15(11):e48734. doi: 10.7759/cureus.48734. eCollection 2023 Nov.

Abstract

Purpose: This study documents the early stages of development of an unsupervised, deep learning-based clinical annotation and segmentation tool (CAST) capable of isolating clinically significant teeth in both intraoral photographs and their corresponding oral radiographs.

Methods: The dataset consisted of 172 intraoral photographs and 424 dental radiographs, manually annotated by two operators and augmented to yield 6258 images for training, 183 for validation, and 98 for testing. Training combined an object detection model ('YOLOv8') with a feature extraction system ('Segment Anything Model'), enabling the auto-annotation and segmentation of tooth-related features and lesions in both image types without operator intervention. Outputs were further processed with a data relabelling tool ('X-AnyLabeling'), which allowed erroneous outputs to be manually reannotated and fed back into training as a reinforcement step.

Results: The trained object detection model achieved a mean average precision (mAP) of 77.4%, with precision and recall of 75.0% and 72.1%, respectively. The model segmented features more accurately in intraoral photographs annotated with polygonal boundaries than in radiographs annotated with bounding boxes.

Conclusion: The auto-annotation and segmentation tool showed initial promise in automating image labelling and segmentation for intraoral photographs and radiographs. Further work is required to address its limitations.
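The abstract gives no implementation detail, but the pipeline it describes (YOLOv8 detections used as box prompts for the Segment Anything Model) can be illustrated. The following Python sketch is not the authors' code: the weight files, checkpoint name, and image path ('yolov8n.pt', 'sam_vit_b_01ec64.pth', 'tooth.jpg') are placeholders, and the study's trained checkpoints are not published with this abstract.

    import cv2
    from ultralytics import YOLO  # YOLOv8 object detection
    from segment_anything import sam_model_registry, SamPredictor

    # Placeholder weights and paths, not the study's trained models.
    detector = YOLO("yolov8n.pt")
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    # Load an intraoral photograph or radiograph as an RGB array.
    image = cv2.cvtColor(cv2.imread("tooth.jpg"), cv2.COLOR_BGR2RGB)

    # Step 1: detect candidate teeth/lesions as XYXY bounding boxes.
    boxes = detector(image)[0].boxes.xyxy.cpu().numpy()

    # Step 2: prompt SAM with each box to obtain a pixel-level mask.
    predictor.set_image(image)
    masks = []
    for box in boxes:
        mask, score, _ = predictor.predict(box=box, multimask_output=False)
        masks.append(mask[0])  # (H, W) boolean mask for this detection

In a workflow like the one described, such masks would then be exported to the relabelling tool (X-AnyLabeling) so that erroneous outputs can be manually corrected before retraining.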

Keywords: artificial intelligence; clinical photography; deep learning; digital dentistry; radiology.

Grants and funding

The research was supported by the University of Adelaide Early Grant Development Scheme (340-13133234).