Analysis and prediction of acoustic speech features from mel-frequency cepstral coefficients in distributed speech recognition architectures

J Acoust Soc Am. 2008 Dec;124(6):3989-4000. doi: 10.1121/1.2997436.

Abstract

The aim of this work is to develop methods that enable acoustic speech features to be predicted from mel-frequency cepstral coefficient (MFCC) vectors as may be encountered in distributed speech recognition architectures. The work begins with a detailed analysis of the multiple correlation between acoustic speech features and MFCC vectors. This confirms the existence of correlation, which is found to be higher when measured within specific phonemes rather than globally across all speech sounds. The correlation analysis leads to the development of a statistical method of predicting acoustic speech features from MFCC vectors that utilizes a network of hidden Markov models (HMMs) to localize prediction to specific phonemes. Within each HMM, the joint density of acoustic features and MFCC vectors is modeled and used to make a maximum a posteriori prediction. Experimental results are presented under a range of conditions, including speaker-dependent, gender-dependent, and gender-independent constraints, and these show that acoustic speech features can be predicted from MFCC vectors with good accuracy. A comparison is also made against an alternative scheme that substitutes the higher-order MFCCs with acoustic features for transmission. This delivers accurate acoustic features but at the expense of a significant reduction in speech recognition accuracy.
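The core prediction step described in the abstract can be illustrated with a minimal sketch. Assuming the per-state joint density of the acoustic feature y and the MFCC vector x is a single Gaussian (the paper's actual models, state statistics, and feature definitions are not reproduced here; all values below are hypothetical), the MAP estimate of y given x reduces to the Gaussian conditional mean:

```python
import numpy as np

def map_predict(x, mu_x, mu_y, S_xx, S_yx):
    """MAP prediction of an acoustic feature vector y from an MFCC vector x.

    Assumes the joint density of (x, y) is a single Gaussian, in which case
    the MAP estimate equals the conditional mean:
        y_hat = mu_y + S_yx @ inv(S_xx) @ (x - mu_x)
    In an HMM-based scheme these statistics would be state-specific, so the
    prediction is localized to the phoneme/state decoded for each frame.
    """
    return mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x)

# Toy, hypothetical statistics (2-dimensional MFCC slice, scalar feature):
mu_x = np.array([0.0, 0.0])        # mean of MFCC vector
mu_y = np.array([100.0])           # mean of acoustic feature (e.g., F0 in Hz)
S_xx = np.array([[1.0, 0.2],
                 [0.2, 1.0]])      # MFCC covariance
S_yx = np.array([[0.5, 0.1]])      # feature/MFCC cross-covariance
x = np.array([1.0, -1.0])          # observed MFCC vector

y_hat = map_predict(x, mu_x, mu_y, S_xx, S_yx)  # array([100.5])
```

In the paper's scheme, a network of phoneme HMMs first decodes each frame to a state, and the corresponding state-level joint statistics are then used in a prediction of this form, which is what makes the prediction phoneme-localized.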

Publication types

  • Comparative Study

MeSH terms

  • Female
  • Humans
  • Male
  • Models, Biological*
  • Pattern Recognition, Physiological*
  • Phonetics*
  • Psychoacoustics
  • Recognition, Psychology*
  • Reproducibility of Results
  • Sex Factors
  • Signal Detection, Psychological*
  • Speech Acoustics*
  • Speech Intelligibility*
  • Speech Perception*