3 directional Inception-ResUNet: Deep spatial feature learning for multichannel singing voice separation with distortion

PLoS One. 2024 Jan 29;19(1):e0289453. doi: 10.1371/journal.pone.0289453. eCollection 2024.

Abstract

Singing voice separation on robots faces the problem of interpreting ambiguous auditory signals. The acoustic signal that a humanoid robot perceives through its onboard microphones is a mixture of singing voice, music, and noise, affected by distortion, attenuation, and reverberation. In this paper, we used a 3D Inception-ResUNet structure within a U-shaped encoding and decoding network to improve the utilization of the spatial and spectral information in the spectrogram. Multiple objectives were used to train the model: a magnitude consistency loss, a phase consistency loss, and a magnitude correlation consistency loss. We recorded the singing voices and accompaniments derived from the MIR-1K dataset with NAO robots and synthesized a 10-channel dataset for training the model. The experimental results show that the proposed model, trained with the multiple objectives, reaches an average NSDR of 11.55 dB on the test dataset, outperforming the comparison model.
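The abstract names three training objectives but does not give their formulas. The sketch below is a hypothetical NumPy illustration, not the authors' implementation: it assumes the magnitude consistency loss is an L1 distance between magnitude spectrograms, the phase consistency loss is a magnitude-weighted cosine distance between phase spectrograms, and the magnitude correlation consistency loss is one minus the Pearson correlation of the magnitudes. The function names and the equal weighting are assumptions for illustration only.

```python
import numpy as np

def magnitude_loss(est_mag, ref_mag):
    # Assumed magnitude consistency loss: mean L1 distance
    # between estimated and reference magnitude spectrograms.
    return float(np.mean(np.abs(est_mag - ref_mag)))

def phase_loss(est_phase, ref_phase, ref_mag):
    # Assumed phase consistency loss: cosine distance between
    # phases, weighted by the reference magnitude so that bins
    # with little energy contribute less.
    return float(np.mean(ref_mag * (1.0 - np.cos(est_phase - ref_phase))))

def magnitude_correlation_loss(est_mag, ref_mag):
    # Assumed magnitude correlation consistency loss:
    # 1 - Pearson correlation of the flattened magnitudes.
    e = est_mag.ravel() - est_mag.mean()
    r = ref_mag.ravel() - ref_mag.mean()
    denom = np.linalg.norm(e) * np.linalg.norm(r) + 1e-8
    return 1.0 - float(e @ r) / denom

def total_loss(est_mag, est_phase, ref_mag, ref_phase, w=(1.0, 1.0, 1.0)):
    # Weighted sum of the three objectives; equal weights are a
    # placeholder, not a value reported in the paper.
    return (w[0] * magnitude_loss(est_mag, ref_mag)
            + w[1] * phase_loss(est_phase, ref_phase, ref_mag)
            + w[2] * magnitude_correlation_loss(est_mag, ref_mag))
```

For a perfect estimate (identical magnitude and phase), all three terms are essentially zero, which is the sanity check one would expect from any consistency-style loss.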

MeSH terms

  • Acoustics
  • Music*
  • Singing*
  • Voice Quality

Grants and funding

DDW received the funding. This work was supported by the China University Industry, University and Research Innovation Fund (grant number 2020ITA05025): http://www.cutech.edu.cn/cn/zxgz/2020/12/1607411786439009.htm. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.