Segmenting Cardiac Ultrasound Videos Using Self-Supervised Learning

Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul:2023:1-7. doi: 10.1109/EMBC40787.2023.10340526.

Abstract

Deep learning models trained with an insufficient volume of data often fail to generalize across different equipment, clinics, and clinicians, or fail to achieve acceptable performance. We improve cardiac ultrasound segmentation models by using unlabeled data to learn recurrent anatomical representations via self-supervision. In addition, we leverage supervised local contrastive learning on sparse labels to improve segmentation and reduce the need for large amounts of dense pixel-level supervisory annotations. We then apply supervised fine-tuning to segment key temporal anatomical features and estimate the cardiac Ejection Fraction (EF). We show that pretraining the network weights with self-supervised learning before supervised contrastive learning outperforms training from scratch, validated on two state-of-the-art segmentation models, DeepLabv3+ and Attention U-Net.

Clinical relevance - This work can assist physicians in conducting cardiac function evaluations. We improve cardiac ejection fraction evaluation compared to previous methods, helping to alleviate the burden associated with acquiring labeled images.
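The supervised local contrastive step mentioned above can be sketched as a pixel-level supervised contrastive loss computed only on the sparsely labeled pixels: pixels sharing a class label are treated as positives, all other labeled pixels as negatives. The NumPy function below is an illustrative sketch under that assumption, not the paper's actual implementation; the function name, the `-1` unlabeled convention, and the temperature value are hypothetical.

```python
import numpy as np

def local_supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pixel-level supervised contrastive loss (illustrative sketch).

    features: (C, H, W) per-pixel embeddings from the segmentation encoder.
    labels:   (H, W) integer class labels; -1 marks unlabeled pixels
              (hypothetical convention for sparse annotations).
    """
    C, H, W = features.shape
    feats = features.reshape(C, -1).T          # (H*W, C) pixel embeddings
    labs = labels.reshape(-1)
    keep = labs >= 0                           # use only the sparse labels
    feats, labs = feats[keep], labs[keep]
    # L2-normalize so dot products are cosine similarities
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = feats @ feats.T / temperature        # (N, N) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True) # numeric stability
    exp = np.exp(sim)
    np.fill_diagonal(exp, 0.0)                 # exclude self-pairs
    pos = labs[:, None] == labs[None, :]       # same-class pixels = positives
    np.fill_diagonal(pos, False)
    denom = exp.sum(axis=1)
    losses = []
    for i in range(len(labs)):
        p = np.flatnonzero(pos[i])
        if p.size == 0:                        # pixel has no positive partner
            continue
        losses.append(-np.mean(np.log(exp[i, p] / denom[i] + 1e-12)))
    return float(np.mean(losses))
```

In this sketch the loss pulls embeddings of same-class pixels together and pushes differently labeled pixels apart, which is how sparse labels can still shape the representation before dense fine-tuning.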

MeSH terms

  • Echocardiography*
  • Humans
  • Physical Examination
  • Physicians*
  • Supervised Machine Learning
  • Videotape Recording