Food intake monitoring: an acoustical approach to automated food intake activity detection and classification of consumed food

Physiol Meas. 2012 Jun;33(6):1073-93. doi: 10.1088/0967-3334/33/6/1073. Epub 2012 May 24.

Abstract

Obesity and nutrition-related diseases are currently growing challenges for medicine. A precise and time-saving method for food intake monitoring is needed. For this purpose, an approach based on the classification of sounds produced during food intake is presented. Sounds are recorded non-invasively by miniature microphones in the outer ear canal. A database of 51 participants eating seven types of food and consuming one drink has been developed for algorithm development and model training. The database is labeled manually using a protocol with instructions for annotation. The annotation procedure is evaluated using Cohen's kappa coefficient. Food intake activity is detected by comparing the signal energy of in-ear sounds with that of environmental sounds recorded by a reference microphone. Hidden Markov models are used for the recognition of single chew or swallowing events. Intake cycles are modeled as event sequences in finite-state grammars. Classification of the consumed food is realized by a finite-state grammar decoder based on the Viterbi algorithm. We achieved a detection accuracy of 83% and a food classification accuracy of 79% on a test set comprising 10% of all records. Our approach addresses the need to monitor the time and occurrence of eating. By differentiating the consumed food, a first step toward the goal of meal weight estimation is taken.
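
As an illustration of the detection step described above, the minimal Python sketch below compares the frame-wise signal energy of the in-ear channel with that of the reference (ambient) microphone and flags frames in which the in-ear energy dominates. Frame length, hop size, and the energy-ratio threshold are assumed values chosen for illustration only; they are not taken from the paper.

    import numpy as np

    def frame_energy(signal, frame_len, hop):
        # Short-time energy of successive analysis frames.
        signal = np.asarray(signal, dtype=float)
        energies = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            energies.append(float(np.sum(frame ** 2)))
        return np.array(energies)

    def detect_intake_activity(in_ear, reference, fs,
                               frame_ms=25, hop_ms=10, ratio_threshold=2.0):
        # Flag frames where the in-ear channel carries noticeably more energy
        # than the reference (ambient) channel, i.e. candidate chewing or
        # swallowing activity rather than environmental sound.
        # frame_ms, hop_ms and ratio_threshold are illustrative assumptions.
        frame_len = int(fs * frame_ms / 1000)
        hop = int(fs * hop_ms / 1000)
        e_in = frame_energy(in_ear, frame_len, hop)
        e_ref = frame_energy(reference, frame_len, hop)
        eps = 1e-12  # guard against division by zero in silent frames
        return e_in / (e_ref + eps) > ratio_threshold

Frames flagged in this way would then feed a subsequent recognition stage, such as the hidden Markov models and finite-state grammar decoding described in the abstract.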

Publication types

  • Clinical Trial
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustics*
  • Adolescent
  • Adult
  • Aged
  • Algorithms
  • Automation
  • Databases as Topic
  • Eating / physiology*
  • Female
  • Food / classification*
  • Hearing Aids
  • Humans
  • Male
  • Middle Aged
  • Monitoring, Physiologic / methods*
  • Observer Variation
  • Reproducibility of Results