Classification of Heart Sounds Using Chaogram Transform and Deep Convolutional Neural Network Transfer Learning

Sensors (Basel). 2022 Dec 7;22(24):9569. doi: 10.3390/s22249569.

Abstract

Heart sounds convey important information about potential heart diseases. Heart sound classification currently attracts many researchers from the fields of telemedicine, digital signal processing, and machine learning, among others, mainly because it can identify cardiac pathology quickly. This article proposes the chaogram, a new transform that converts heart sound signals into colour images. In the proposed approach, the output image is the projection of the reconstructed phase space representation of the phonocardiogram (PCG) signal onto three coordinate planes. Converting a heart sound signal to an image in this way has two major benefits: (1) it makes it possible to apply deep convolutional neural networks to heart sounds, and (2) it allows a transfer learning scheme to be employed. The performance of the proposed approach was verified on the PhysioNet dataset. Because the classes in this dataset are imbalanced, results are commonly assessed with the average of sensitivity and specificity, known as the score, rather than with accuracy. In this study, the best results were obtained with the InceptionV3 model, which reached a score of 88.06%.
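The sketch below illustrates, under stated assumptions, one plausible reading of the chaogram construction described in the abstract: a time-delay embedding of the PCG signal into a 3-D phase space, whose projections onto the three coordinate planes become the colour channels of an image. The embedding delay, image resolution, histogram binning, and normalisation are illustrative choices, not the authors' published settings, and the function name `chaogram` is hypothetical.

```python
# A minimal sketch of a chaogram-style signal-to-image transform, based only on the
# abstract's description. Parameters (delay, size) are assumptions for illustration.
import numpy as np

def chaogram(pcg: np.ndarray, delay: int = 10, size: int = 224) -> np.ndarray:
    """Convert a 1-D PCG signal into a (size, size, 3) colour image."""
    # Time-delay embedding into 3-D phase space: (x(t), x(t+tau), x(t+2*tau)).
    n = len(pcg) - 2 * delay
    pts = np.stack(
        [pcg[:n], pcg[delay:n + delay], pcg[2 * delay:n + 2 * delay]], axis=1
    )

    # Rescale each coordinate to [0, 1] so the three projections share a common range.
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)

    channels = []
    # Project the trajectory onto the three coordinate planes (xy, yz, xz) and
    # count visits per pixel with a 2-D histogram.
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        hist, _, _ = np.histogram2d(
            pts[:, i], pts[:, j], bins=size, range=[[0, 1], [0, 1]]
        )
        hist = np.log1p(hist)                        # compress dynamic range
        channels.append(hist / (hist.max() + 1e-12)) # normalise to [0, 1]

    return np.stack(channels, axis=-1)               # H x W x 3 "colour" image

if __name__ == "__main__":
    # A synthetic signal stands in for a real PCG recording.
    t = np.linspace(0, 2, 4000)
    signal = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * (t % 0.8))
    img = chaogram(signal)
    print(img.shape)  # (224, 224, 3): sized for an ImageNet-pretrained CNN such as InceptionV3
```

Once a recording is rendered as a fixed-size three-channel image, standard ImageNet-pretrained backbones can be fine-tuned on it, and the reported score corresponds to the mean of sensitivity and specificity on the test recordings.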

Keywords: biomedical signal; deep learning; phonocardiogram; signal-to-image transform.

MeSH terms

  • Heart Diseases*
  • Heart Sounds*
  • Humans
  • Machine Learning
  • Neural Networks, Computer
  • Signal Processing, Computer-Assisted

Grants and funding

This research received no external funding.