Adaptive environment classification system for hearing aids

J Acoust Soc Am. 2010 May;127(5):3124-35. doi: 10.1121/1.3365301.

Abstract

An adaptive sound classification framework is proposed for hearing aid applications. The long-term goal is to develop fully trainable instruments that learn both the acoustical environments encountered in daily life and the hearing aid settings the user prefers in each environmental class. Two adaptive classifiers are described, one based on minimum distance clustering and one on Bayesian classification. Through unsupervised learning, the adaptive systems allow classes to split or merge in response to changes in the ongoing acoustical environment. Performance was evaluated using real-world sounds from a wide range of acoustical environments. The systems were first initialized with two classes, speech and noise, followed by a testing period in which a third class, music, was introduced. Both systems successfully detected the presence of the additional class and estimated its underlying parameters, reaching a testing accuracy within about 3% of the target rates obtained from best-case, non-adaptive supervised versions of the classifiers. After splitting adaptation, the adaptive Bayesian classifier achieved a 4% higher overall accuracy than the minimum distance classifier. Merging accuracy was the same for the two systems and within 1%-2% of the best-case supervised versions.
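
The abstract does not give implementation details, but the minimum distance variant can be pictured as nearest-centroid classification with online centroid adaptation plus split/merge rules. The sketch below is a minimal illustration of that idea only; the feature dimension, learning rate, scatter-based split criterion, distance-based merge criterion, and all threshold values are assumptions for demonstration, not parameters from the paper.

```python
import numpy as np


class AdaptiveMinDistanceClassifier:
    """Illustrative adaptive minimum-distance (nearest-centroid) classifier.

    Each environmental class is represented by a centroid in feature space.
    Incoming frames are assigned to the nearest centroid, which is then nudged
    toward the observation (unsupervised adaptation). A class whose running
    within-class scatter grows too large is split in two; two classes whose
    centroids drift close together are merged. All thresholds are hypothetical.
    """

    def __init__(self, init_centroids, lr=0.01, split_thresh=4.0, merge_thresh=0.5):
        self.centroids = [np.asarray(c, dtype=float) for c in init_centroids]
        self.scatter = [0.0] * len(self.centroids)  # running mean squared distance per class
        self.lr = lr
        self.split_thresh = split_thresh
        self.merge_thresh = merge_thresh
        self.rng = np.random.default_rng()

    def classify(self, x):
        """Return the index of the nearest class centroid."""
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        return int(np.argmin(dists))

    def update(self, x):
        """Assign a frame, adapt the winning centroid, and apply split/merge rules."""
        x = np.asarray(x, dtype=float)
        k = self.classify(x)
        err = np.linalg.norm(x - self.centroids[k])
        self.centroids[k] += self.lr * (x - self.centroids[k])
        self.scatter[k] = (1 - self.lr) * self.scatter[k] + self.lr * err ** 2
        self._maybe_split(k)
        self._maybe_merge()
        return k

    def _maybe_split(self, k):
        """Split class k when its scatter suggests two underlying environments."""
        if self.scatter[k] > self.split_thresh:
            c = self.centroids[k].copy()
            direction = self.rng.standard_normal(c.shape)
            direction *= np.sqrt(self.scatter[k]) / (2 * np.linalg.norm(direction))
            self.centroids[k] = c - direction
            self.centroids.append(c + direction)
            self.scatter[k] /= 2
            self.scatter.append(self.scatter[k])

    def _maybe_merge(self):
        """Merge any two classes whose centroids have drifted close together."""
        for i in range(len(self.centroids)):
            for j in range(i + 1, len(self.centroids)):
                if np.linalg.norm(self.centroids[i] - self.centroids[j]) < self.merge_thresh:
                    self.centroids[i] = 0.5 * (self.centroids[i] + self.centroids[j])
                    del self.centroids[j], self.scatter[j]
                    return


# Example use: seed with two classes (e.g. "speech" and "noise" feature centroids),
# then feed per-frame feature vectors; a third class such as music can emerge via splitting.
clf = AdaptiveMinDistanceClassifier(init_centroids=[[0.0, 0.0], [5.0, 5.0]])
label = clf.update(np.array([4.8, 5.2]))  # hypothetical 2-D feature vector for one frame
```

In the paper's Bayesian counterpart the hard nearest-centroid assignment would be replaced by per-class probability models, but the same split/merge logic on class statistics applies; the code above is only meant to make the clustering-with-splitting-and-merging idea concrete.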

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustics*
  • Algorithms
  • Artificial Intelligence
  • Automation
  • Bayes Theorem*
  • Cluster Analysis*
  • Female
  • Hearing Aids / classification*
  • Humans
  • Male
  • Models, Theoretical*
  • Music
  • Noise
  • Signal Processing, Computer-Assisted*
  • Speech Acoustics