Audio-visual active speaker tracking in cluttered indoors environments

IEEE Trans Syst Man Cybern B Cybern. 2008 Jun;38(3):799-807. doi: 10.1109/TSMCB.2008.922063.

Abstract

We propose a system for detecting the active speaker in cluttered and reverberant environments where more than one person speaks and moves. Rather than relying on audio alone, the system exploits audiovisual information from multiple acoustic and video sensors that feed separate audio and video tracking modules. The audio module uses a particle filter (PF) within an information-theoretic framework to provide accurate acoustic source localization under reverberant conditions. The video subsystem combines, in 3-D, a number of 2-D trackers, each based on a variation of Stauffer's adaptive background algorithm with spatiotemporal adaptation of the learning parameters and a Kalman tracker in a feedback configuration. Extensive experiments show that fusing the two modalities yields gains in active-speaker detection over either modality alone.
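
To illustrate the particle-filter tracking idea underlying the audio module, the sketch below tracks a 2-D source position with a random-walk motion model, a Gaussian pseudo-likelihood, and systematic resampling. This is a minimal illustration only: the paper's actual audio module builds an information-theoretic likelihood from multi-microphone data, and all quantities here (particle count, noise levels, the simulated observations) are assumptions made for the example.

    # Minimal particle-filter sketch for 2-D source tracking.
    # Assumptions (not from the paper): random-walk motion, Gaussian
    # observation likelihood around noisy position fixes, 500 particles.
    import numpy as np

    rng = np.random.default_rng(0)

    N_PARTICLES = 500
    MOTION_STD = 0.05   # assumed random-walk step (metres)
    OBS_STD = 0.20      # assumed observation noise (metres)

    def predict(particles):
        """Propagate particles with a random-walk motion model."""
        return particles + rng.normal(0.0, MOTION_STD, particles.shape)

    def update(particles, observation):
        """Weight particles by a Gaussian likelihood of the observation."""
        d2 = np.sum((particles - observation) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / OBS_STD ** 2)
        return weights / weights.sum()

    def resample(particles, weights):
        """Systematic resampling to counter particle degeneracy."""
        positions = (rng.random() + np.arange(N_PARTICLES)) / N_PARTICLES
        idx = np.searchsorted(np.cumsum(weights), positions)
        return particles[idx]

    # Toy run: a speaker drifting along x, observed with noise.
    particles = rng.uniform(-1.0, 1.0, size=(N_PARTICLES, 2))
    true_pos = np.array([0.0, 0.0])
    for t in range(50):
        true_pos = true_pos + np.array([0.02, 0.0])      # ground-truth motion
        obs = true_pos + rng.normal(0.0, OBS_STD, 2)     # simulated noisy fix
        particles = predict(particles)
        weights = update(particles, obs)
        estimate = weights @ particles                   # weighted-mean estimate
        particles = resample(particles, weights)

    print("final estimate:", estimate, "true position:", true_pos)

In the full system described by the abstract, the same predict-weight-resample loop would be driven by acoustic likelihoods rather than simulated position fixes, and its output would be fused with the 3-D video tracker to decide which tracked person is the active speaker.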

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Biometry / methods*
  • Environment*
  • Image Interpretation, Computer-Assisted / methods*
  • Sound Spectrography / methods*
  • Speech Recognition Software*