Monaural room acoustic parameters from music and speech

J Acoust Soc Am. 2008 Jul;124(1):278-87. doi: 10.1121/1.2931960.

Abstract

This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. The first, a statistical machine-learning approach previously developed for speech, is extended to work with music. For speech, reverberation time estimates fall within a perceptual difference limen of the true value; for music, virtually all early decay time estimates do. In other cases the accuracy is insufficient, owing to differences between the simulated data set used to develop the empirical model and real rooms. The second method performs a maximum likelihood estimation on the decay phases at the ends of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energy in the impulse response. For reverberation time measured from speech, the estimates are within the perceptual difference limen of the true value. For other parameters, such as clarity, the estimates are not sufficiently accurate because of the natural reverberance of the excitation signals. Speech is a better test signal than music because it contains longer periods of silence, although music is needed for low-frequency measurement.
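The parameters discussed above are conventionally defined on a measured impulse response: reverberation time from the slope of the backward-integrated (Schroeder) energy decay curve, and clarity from the ratio of early to late energy. The sketch below is not the authors' blind-estimation method; it is a minimal illustration, on a synthetic exponentially decaying noise impulse response, of the ground-truth quantities those methods try to recover (function names and the synthetic signal are illustrative assumptions).

```python
import numpy as np

def schroeder_curve(ir):
    # Backward-integrated energy decay curve (Schroeder integration), in dB
    # relative to the total energy.
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def rt60_from_ir(ir, fs, db_start=-5.0, db_end=-25.0):
    # Fit a line to the decay curve between -5 and -25 dB (a "T20"-style fit)
    # and extrapolate to -60 dB of decay.
    edc = schroeder_curve(ir)
    i0 = int(np.argmax(edc <= db_start))
    i1 = int(np.argmax(edc <= db_end))
    t = np.arange(len(ir)) / fs
    slope, _ = np.polyfit(t[i0:i1], edc[i0:i1], 1)  # dB per second (negative)
    return -60.0 / slope

def clarity(ir, fs, t_early=0.080):
    # Early-to-late energy ratio in dB; t_early = 0.080 s gives C80.
    n = int(round(t_early * fs))
    early = np.sum(ir[:n] ** 2)
    late = np.sum(ir[n:] ** 2)
    return 10.0 * np.log10(early / late)

# Synthetic impulse response: Gaussian noise with an exponential envelope
# chosen so the true RT60 is 0.5 s (amplitude drops 30 dB per RT60 in time).
fs, rt_true = 16000, 0.5
t = np.arange(int(fs * rt_true * 2)) / fs
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / rt_true)

print(rt60_from_ir(ir, fs))  # estimate of the 0.5 s decay
print(clarity(ir, fs))       # C80 of the synthetic response, in dB
```

Blind estimators of the kind the paper studies attempt to recover these same numbers from reverberated speech or music alone, without access to `ir`; the decay fit above is also why free-decay segments (note ends, pauses) are so informative.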

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustics*
  • Algorithms
  • Architecture*
  • Humans
  • Models, Theoretical
  • Music*
  • Speech*