Graph-based feature extraction: A new proposal to study the classification of music signals outside the time-frequency domain

PLoS One. 2020 Nov 12;15(11):e0240915. doi: 10.1371/journal.pone.0240915. eCollection 2020.

Abstract

Most feature extraction algorithms for music audio signals use Fourier transforms to obtain coefficients that describe specific aspects of music information within the sound spectrum, such as the timbral texture, tonal texture and rhythmic activity. In this paper, we introduce a new method for extracting features related to the rhythmic activity of music signals using the topological properties of a graph constructed from an audio signal. We map the local standard deviation of a music signal to a visibility graph and calculate the modularity (Q), the number of communities (Nc), the average degree (〈k〉), and the density (Δ) of this graph. By applying this procedure to each signal in a database of various musical genres, we detected a hierarchy of rhythmic self-similarities between musical styles captured by these four network properties. Using Q, Nc, 〈k〉 and Δ as input attributes in a classification experiment based on supervised artificial neural networks, we obtained an accuracy higher than or equal to that of the beat histogram in 70% of the musical genre pairs, using only four features from the networks. Finally, when performing the attribute selection test with Q, Nc, 〈k〉 and Δ, along with the main descriptors from the signal processing field, we found that the four network properties ranked among the top positions given by this test.
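
The pipeline described above (local standard deviation → natural visibility graph → Q, Nc, 〈k〉, Δ) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window size is a hypothetical choice, the visibility criterion is the standard natural visibility rule of Lacasa et al., and communities are found here with networkx's greedy modularity heuristic, which may differ from the detection method used in the paper.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def local_std(signal, window=256):
    """Local standard deviation over non-overlapping windows
    (window length is an assumed parameter, not from the paper)."""
    n = len(signal) // window
    return np.array([signal[i * window:(i + 1) * window].std() for i in range(n)])

def visibility_graph(series):
    """Natural visibility graph: nodes i and j are connected if the
    straight line between (i, y_i) and (j, y_j) lies above every
    intermediate sample."""
    g = nx.Graph()
    n = len(series)
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # every sample strictly between i and j must sit below the line i--j
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                g.add_edge(i, j)
    return g

def graph_features(g):
    """The four network attributes used as classifier inputs:
    modularity Q, number of communities Nc, average degree <k>, density Delta."""
    comms = community.greedy_modularity_communities(g)
    q = community.modularity(g, comms)
    nc = len(comms)
    avg_k = 2 * g.number_of_edges() / g.number_of_nodes()
    delta = nx.density(g)
    return q, nc, avg_k, delta
```

The resulting four-value vector (Q, Nc, 〈k〉, Δ) per track would then feed a supervised neural-network classifier, as in the genre-pair experiments the abstract reports.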

MeSH terms

  • Acoustics
  • Algorithms
  • Computer Graphics
  • Databases, Factual
  • Humans
  • Music*
  • Neural Networks, Computer
  • Signal Processing, Computer-Assisted
  • Supervised Machine Learning

Grants and funding

Unfunded studies.