Label-free Medical Image Quality Evaluation by Semantics-aware Contrastive Learning in IoMT

IEEE J Biomed Health Inform. 2023 Dec 7:PP. doi: 10.1109/JBHI.2023.3340201. Online ahead of print.

Abstract

The Internet of Medical Things (IoMT) has developed rapidly in recent years and emerged as a promising way to alleviate the workload of medical staff, particularly in the field of Medical Image Quality Assessment (MIQA). MIQA deployed over IoMT is highly valuable in assisting diagnosis and treatment based on various types of medical images, such as fundus, ultrasound, and dermoscopic images. However, traditional MIQA models require a substantial number of labelled medical images to be effective, and acquiring a sufficient training dataset is challenging. To address this issue, we present a label-free MIQA model developed through a zero-shot learning approach. This paper introduces a Semantics-Aware Contrastive Learning (SCL) model that effectively generalises quality assessment across diverse medical image types. The proposed method integrates features extracted from zero-shot learning, the spatial domain, and the frequency domain. Zero-shot learning is achieved through a tailored Contrastive Language-Image Pre-training (CLIP) model. Natural Scene Statistics (NSS) and patch-based features are extracted in the spatial domain, while frequency features are extracted hierarchically at both local and global levels. All of this information is combined to derive a final quality score for a medical image. To ensure a comprehensive evaluation, we not only use two existing datasets, EyeQ and LiverQ, but also create a dataset specifically for skin image quality assessment. Our SCL method is evaluated extensively on all three medical image quality datasets and demonstrates its superiority over advanced models.
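The abstract describes fusing three feature branches, semantic (CLIP), spatial (NSS and patches), and frequency, into a single quality score. The sketch below illustrates that fusion idea only; it is not the paper's implementation. The `semantic_features` function is a deterministic placeholder standing in for a CLIP image-encoder embedding, the NSS branch uses simple mean-subtracted contrast-normalised (MSCN) statistics, and the learned regressor is replaced by uniform weights, all hypothetical choices for illustration.

```python
import numpy as np

def semantic_features(img):
    # Hypothetical stand-in for CLIP image-encoder embeddings:
    # pooled means over a 2x2 grid of patches.
    h, w = img.shape
    return np.array([img[:h//2, :w//2].mean(), img[:h//2, w//2:].mean(),
                     img[h//2:, :w//2].mean(), img[h//2:, w//2:].mean()])

def nss_features(img):
    # Simple NSS-style statistics from mean-subtracted,
    # contrast-normalised (MSCN) coefficients.
    mscn = (img - img.mean()) / (img.std() + 1e-6)
    return np.array([mscn.mean(), mscn.std(), (mscn ** 3).mean()])

def frequency_features(img):
    # Global frequency descriptor: share of 2-D FFT magnitude energy
    # in the low-frequency (centre) band vs the rest.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    low = mag[cy - h//4:cy + h//4, cx - w//4:cx + w//4].sum()
    ratio = low / (mag.sum() + 1e-6)
    return np.array([ratio, 1.0 - ratio])

def quality_score(img, weights=None):
    # Concatenate all branches; uniform weights stand in for a
    # learned regression head mapping features to a quality score.
    feats = np.concatenate([semantic_features(img),
                            nss_features(img),
                            frequency_features(img)])
    if weights is None:
        weights = np.full(feats.size, 1.0 / feats.size)
    return float(feats @ weights)
```

A real system would replace `semantic_features` with embeddings from a fine-tuned CLIP encoder and learn the fusion weights from quality-annotated data; here the point is only the multi-branch concatenate-then-regress structure.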