Emotions in [a]: a perceptual and acoustic study

Logoped Phoniatr Vocol. 2006;31(1):43-8. doi: 10.1080/14015430500293926.

Abstract

The aim of this investigation is to study how well voice quality conveys emotional content that can be discriminated by human listeners and by a computer. The speech data were produced by nine professional actors (four women, five men). The speakers simulated the following basic emotions in a unit consisting of the vowel [a] extracted from running Finnish speech: neutral, sadness, joy, anger, and tenderness. Automatic discrimination was clearly more successful than human emotion recognition. Human listeners thus apparently need speech samples longer than vowel-length units for reliable emotion discrimination, whereas the machine utilizes quantitative acoustic parameters effectively even for short speech samples.
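The abstract does not specify which quantitative parameters or which classifier were used for the automatic discrimination. The sketch below illustrates the general approach only: extracting a few acoustic parameters from a short vowel segment and feeding them to a classifier. The feature set (F0, RMS energy, spectral centroid) and the k-nearest-neighbour classifier are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier  # stand-in classifier, not the paper's

def extract_features(signal, sr=16000):
    """Quantitative acoustic parameters from a short vowel segment.

    This feature set is illustrative; the paper's exact parameters
    are not listed in the abstract.
    """
    # Overall intensity: root-mean-square energy of the segment
    rms = np.sqrt(np.mean(signal ** 2))

    # F0 estimate via autocorrelation, searched in a plausible
    # speech range of 75-400 Hz
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / 400), int(sr / 75)
    f0 = sr / (lo + np.argmax(ac[lo:hi]))

    # Spectral centroid as a crude voice-quality / spectral-tilt proxy
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)

    return np.array([f0, rms, centroid])

# Hypothetical usage: `vowels` would be a list of (waveform, emotion_label)
# pairs, one per extracted vowel segment.
#
# X = np.stack([extract_features(w) for w, _ in vowels])
# y = [label for _, label in vowels]
# clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# print(clf.predict(X[:1]))
```

Even this small parameter vector shows why a machine can work on vowel-length units: the features are computed from the whole segment at once, with no need for the longer temporal context human listeners appear to rely on.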

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Emotions*
  • Female
  • Humans
  • Male
  • Middle Aged
  • Psycholinguistics
  • Recognition, Psychology
  • Speech Acoustics*
  • Speech Perception / physiology*
  • Voice