GlottisNetV2: Temporal Glottal Midline Detection Using Deep Convolutional Neural Networks

IEEE J Transl Eng Health Med. 2023 Jan 19;11:137-144. doi: 10.1109/JTEHM.2023.3237859. eCollection 2023.

Abstract

High-speed videoendoscopy is a major tool in quantitative laryngology. Glottis segmentation and glottal midline detection are crucial for computing vocal fold-specific quantitative parameters. However, fully automated solutions show limited clinical applicability; in particular, unbiased glottal midline detection remains a challenging problem. We developed a multitask deep neural network for glottis segmentation and glottal midline detection, using techniques from pose estimation to locate the anterior and posterior points in endoscopy images. The neural networks were set up in TensorFlow/Keras and trained and evaluated on the BAGLS dataset. We found that a dual-decoder deep neural network, termed GlottisNetV2, outperforms the previously proposed GlottisNet in terms of MAPE on the test dataset (1.85% vs. 6.3%) while converging faster. Systematic hyperparameter tuning further enables fast and directed training. Using temporally variant data from an additional dataset designed for this task, the median prediction error improves from 2.1% to 1.76% when 12 consecutive frames and additional temporal filtering are used. Temporal glottal midline detection using a dual-decoder architecture together with keypoint estimation thus allows accurate midline prediction. We show that our proposed architecture provides stable and reliable glottal midline predictions, ready for clinical use and the analysis of symmetry measures.
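The dual-decoder design described above can be illustrated with a minimal TensorFlow/Keras sketch. This is not the authors' published implementation: a shared convolutional encoder feeds two decoder heads, one producing the glottis segmentation mask and one producing heatmaps for the anterior and posterior keypoints, as in pose estimation. Layer sizes, loss choices, and all function names below are illustrative assumptions.

# Minimal sketch (not the published GlottisNetV2 code): shared encoder,
# one decoder for glottis segmentation, one for keypoint heatmaps.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, a common U-Net-style building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def decoder(features, skips, filters, out_channels, name):
    # Upsample, concatenate the encoder skip connections, and refine.
    x = features
    for skip, f in zip(reversed(skips), reversed(filters)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    return layers.Conv2D(out_channels, 1, activation="sigmoid", name=name)(x)

def build_dual_decoder_sketch(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)
    filters = [32, 64, 128]

    # Shared encoder with skip connections.
    x, skips = inputs, []
    for f in filters:
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 256)  # bottleneck

    # Decoder 1: binary glottis segmentation mask.
    seg = decoder(x, skips, filters, 1, "segmentation")
    # Decoder 2: two heatmaps for the anterior and posterior keypoints.
    kps = decoder(x, skips, filters, 2, "keypoints")

    return Model(inputs, [seg, kps], name="dual_decoder_sketch")

model = build_dual_decoder_sketch()
model.compile(
    optimizer="adam",
    # Assumed losses: binary cross-entropy for the mask, MSE for the heatmaps.
    loss={"segmentation": "binary_crossentropy", "keypoints": "mse"},
)
model.summary()

In such a setup the anterior and posterior points are read off as the argmax of the predicted heatmaps, and temporal smoothing of those point trajectories across consecutive frames corresponds to the temporal filtering step mentioned in the abstract.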

Keywords: Laryngeal endoscopy; biomedical imaging; deep learning; deep neural networks; glottis; midline.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Endoscopy
  • Glottis*
  • Neural Networks, Computer
  • Vocal Cords*

Grants and funding

This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under Grant DO1247/8-2 and Grant SCHU3441/3-2.