Sleep CLIP: A Multimodal Sleep Staging Model Based on Sleep Signals and Sleep Staging Labels

Sensors (Basel). 2023 Aug 23;23(17):7341. doi: 10.3390/s23177341.

Abstract

Since OpenAI released the contrastive language-image pre-training (CLIP) model, it has been applied in several fields owing to its high accuracy. Sleep staging is an important method of diagnosing sleep disorders, and completing sleep staging tasks with high accuracy has always been the main goal of sleep staging algorithm designers. This study aims to design a multimodal model, based on the CLIP model, that is better suited to sleep staging tasks using sleep signals and labels. The model is pre-trained on five different datasets. Finally, the proposed method is tested on two datasets (EDF-39 and EDF-153), achieving accuracies of 87.3% and 85.4%, respectively.
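The CLIP-style approach described above pairs each sleep-signal embedding with its stage-label embedding and trains the two encoders contrastively. A minimal sketch of such a symmetric contrastive loss is shown below; the function name, embedding dimensions, and temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clip_style_loss(signal_emb, label_emb, temperature=0.07):
    """Symmetric contrastive loss between signal and label embeddings,
    in the spirit of CLIP (hypothetical sketch, not the paper's code)."""
    # L2-normalize each embedding row so dot products are cosine similarities.
    s = signal_emb / np.linalg.norm(signal_emb, axis=1, keepdims=True)
    t = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by the temperature.
    logits = s @ t.T / temperature
    n = logits.shape[0]

    def xent(lg):
        # Cross-entropy with the matching pair on the diagonal as the target.
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the signal-to-label and label-to-signal directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy batch: 4 signal epochs matched against 4 stage-label embeddings.
rng = np.random.default_rng(0)
loss = clip_style_loss(rng.normal(size=(4, 16)), rng.normal(size=(4, 16)))
```

In a full pipeline, `signal_emb` would come from an EEG/signal encoder and `label_emb` from a label (text) encoder, with the loss driving matched signal-label pairs together and mismatched pairs apart.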

Keywords: CLIP; multi-modal models; sleep stage.

MeSH terms

  • Algorithms
  • Humans
  • Sleep Stages
  • Sleep Wake Disorders*
  • Sleep*

Grants and funding

This research received no external funding.