S1 and S2 Heart Sound Recognition Using Deep Neural Networks

IEEE Trans Biomed Eng. 2017 Feb;64(2):372-380. doi: 10.1109/TBME.2016.2559800.

Abstract

Objective: This study focuses on recognition of the first (S1) and second (S2) heart sounds based only on their acoustic characteristics; no assumptions about the individual durations of S1 and S2 or the S1-S2 and S2-S1 time intervals are involved in the recognition process. The main objective is to investigate whether reliable S1 and S2 recognition performance can still be attained when such duration and interval information is not accessible.

Methods: A deep neural network (DNN) method is proposed for recognizing S1 and S2 heart sounds. In the proposed method, heart sound signals are first converted into a sequence of Mel-frequency cepstral coefficients (MFCCs). The K-means algorithm is then applied to cluster the MFCC features into two groups to refine their representation and enhance their discriminative capability. The refined features are fed to a DNN classifier to perform S1 and S2 recognition. Experiments were conducted on actual heart sound signals recorded with an electronic stethoscope, with precision, recall, F-measure, and accuracy used as the evaluation metrics.
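As a rough illustration of the pipeline described in the Methods, the sketch below extracts MFCC frames, appends K-means cluster information as a feature refinement, trains a classifier, and reports the four evaluation metrics. It is a minimal sketch under several assumptions not stated in the abstract: librosa and scikit-learn are illustrative library choices, an MLP with three hidden layers stands in for the paper's DNN, the MFCC parameters and the way K-means "refines" the features (here, appended centroid distances and cluster labels) are guesses, and the audio and labels are random placeholders rather than recorded heart sounds.

```python
# Minimal, hypothetical sketch of the described pipeline.
# Assumptions (not from the abstract): librosa/scikit-learn as libraries,
# an MLP standing in for the DNN, the MFCC/K-means parameters, and
# placeholder data in place of annotated stethoscope recordings.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support, accuracy_score


def extract_mfcc(signal, sr, n_mfcc=13):
    """Convert a heart sound signal into a sequence of MFCC frames."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # shape: (n_frames, n_mfcc)


def refine_with_kmeans(frames, n_clusters=2, seed=0):
    """Cluster MFCC frames into two groups and append the centroid
    distances and cluster label to each frame (one plausible reading of
    'refining' the features; the abstract does not specify the scheme)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(frames)
    distances = km.transform(frames)  # distance to each of the two centroids
    return np.hstack([frames, distances, labels[:, None].astype(float)])


# Placeholder data standing in for stethoscope recordings with per-frame
# S1/S2 labels (0 = S1, 1 = S2); real use would load annotated audio.
rng = np.random.default_rng(0)
sr = 8000                                    # assumed sampling rate
signal = rng.normal(size=sr * 30).astype(np.float32)
frames = extract_mfcc(signal, sr)
y = rng.integers(0, 2, size=len(frames))

X = refine_with_kmeans(frames)
split = int(0.8 * len(X))                    # simple train/test split

# MLP stands in for the paper's DNN; its depth and width are illustrative.
clf = MLPClassifier(hidden_layer_sizes=(256, 256, 256), max_iter=500,
                    random_state=0)
clf.fit(X[:split], y[:split])
y_pred = clf.predict(X[split:])

# Evaluation metrics named in the abstract.
prec, rec, f1, _ = precision_recall_fscore_support(y[split:], y_pred,
                                                   average="binary")
acc = accuracy_score(y[split:], y_pred)
print(f"precision={prec:.3f} recall={rec:.3f} "
      f"F-measure={f1:.3f} accuracy={acc:.3f}")
```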

Results: The proposed DNN-based method achieves high precision, recall, and F-measure scores, with an accuracy rate of more than 91%.

Conclusion: The DNN classifier yields higher evaluation scores than other well-known pattern classification methods.

Significance: The proposed DNN-based method achieves reliable S1 and S2 recognition performance from acoustic characteristics alone, without using an ECG reference or incorporating assumptions about the individual durations of S1 and S2 or the S1-S2 and S2-S1 time intervals.

MeSH terms

  • Female
  • Heart Auscultation / classification*
  • Heart Auscultation / methods*
  • Heart Sounds / physiology*
  • Humans
  • Male
  • Neural Networks, Computer*
  • Signal Processing, Computer-Assisted*
  • Stethoscopes