A hybrid technique for speech segregation and classification using a sophisticated deep neural network

PLoS One. 2018 Mar 20;13(3):e0194151. doi: 10.1371/journal.pone.0194151. eCollection 2018.

Abstract

Recent research on speech segregation and music fingerprinting has led to improvements in both speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation. However, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise. Dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on the three datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification methods.
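The abstract gives no implementation details for the DBN stage. As a rough illustration only, the sketch below shows the restricted Boltzmann machine (RBM) building block that deep belief networks stack layer by layer, trained with one-step contrastive divergence (CD-1); the toy "spectrogram frame" data, dimensions, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal RBM sketch (the DBN building block), trained with CD-1.
# NOT the paper's implementation; data and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit biases
        self.lr = lr

    def sample_h(self, v):
        # Probability and binary sample of hidden units given visibles.
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        # Probability and binary sample of visible units given hiddens.
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a mini-batch."""
        ph0, h0 = self.sample_h(v0)       # positive phase
        pv1, _ = self.sample_v(h0)        # one Gibbs step down
        ph1, _ = self.sample_h(pv1)       # negative phase
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)   # reconstruction error

# Toy binarised "spectrogram frames" (purely illustrative).
frames = (rng.random((256, 64)) < 0.3).astype(float)
rbm = RBM(n_visible=64, n_hidden=32)
for epoch in range(10):
    err = rbm.cd1_step(frames)
print(f"final reconstruction error: {err:.4f}")
```

In a full DBN, several such RBMs would be trained greedily, each layer's hidden activations serving as the next layer's visible input; the RNN denoising stage and the dictionary-based Fisher classifier described in the abstract would operate downstream of these learned features.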

MeSH terms

  • Algorithms*
  • Databases, Factual*
  • Humans
  • Neural Networks, Computer*
  • Speech Recognition Software*

Grants and funding

The author(s) received no specific funding for this work.