Classification of Depression and Its Severity Based on Multiple Audio Features Using a Graphical Convolutional Neural Network

Int J Environ Res Public Health. 2023 Jan 15;20(2):1588. doi: 10.3390/ijerph20021588.

Abstract

Audio features are physical features that reflect single movements or complex coordinated movements of the vocal organs. Hence, in speech-based automatic depression classification, it is critical to consider the relationships among audio features. Here, we propose a deep learning-based classification model that discriminates depression and its severity using correlations among audio features. The model represents these correlations as graph structures and learns speech characteristics with a graph convolutional neural network. We conducted classification experiments under two settings: one in which the same subjects could appear in both the training and test data (Setting 1), and one in which the subjects in the training and test data were completely disjoint (Setting 2). The classification accuracy in Setting 1 significantly exceeded that of existing state-of-the-art methods, whereas the accuracy in Setting 2, a condition not reported in existing studies, was much lower than in Setting 1. We conclude that the proposed model is an effective tool for identifying recurring patients and their severity levels, but that it has difficulty detecting previously unseen depressed patients. For practical application, depression-specific speech regions, which appear locally rather than throughout a depressed patient's entire speech, should be detected and assigned appropriate class labels.
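The abstract describes the pipeline only at a high level. As a minimal sketch of the general idea, and not the authors' implementation, the PyTorch code below treats each audio feature as a graph node, connects nodes whose features are strongly correlated, and classifies the resulting graph with a small graph convolutional network. The feature set, correlation threshold, network depth, and all dimensions are illustrative assumptions.

```python
# Sketch only: correlation-graph construction + GCN classifier.
# All names, thresholds, and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def correlation_graph(features: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Build a normalized adjacency matrix from feature correlations.

    features: (num_features, num_frames); each row is one audio feature
              (e.g., F0, jitter, shimmer, MFCCs) tracked over time.
    Returns the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}.
    """
    corr = torch.corrcoef(features).abs()      # |Pearson r| between feature pairs
    adj = (corr >= threshold).float()          # keep only strong correlations
    adj.fill_diagonal_(1.0)                    # ensure self-loops
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ adj @ d_inv_sqrt


class GCNClassifier(nn.Module):
    """Two GCN layers (H' = ReLU(A_hat @ H @ W)) with mean-pool readout."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (num_features, in_dim) node attributes, e.g. per-feature statistics
        h = F.relu(a_hat @ self.w1(x))
        h = F.relu(a_hat @ self.w2(h))
        return self.head(h.mean(dim=0))        # graph-level class logits


# Toy usage: 24 audio features over 300 frames, 4 severity classes.
frames = torch.randn(24, 300)
a_hat = correlation_graph(frames, threshold=0.5)
node_attrs = torch.stack([frames.mean(1), frames.std(1)], dim=1)  # (24, 2)
model = GCNClassifier(in_dim=2, hidden_dim=16, num_classes=4)
logits = model(node_attrs, a_hat)              # unnormalized severity scores
```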

Keywords: audio feature; classification model; correlation; depression; graph convolutional neural network.

MeSH terms

  • Depression* / diagnosis
  • Humans
  • Neural Networks, Computer*
  • Speech

Grants and funding

This research received no external funding.