Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition

Sensors (Basel). 2018 Sep 6;18(9):2967. doi: 10.3390/s18092967.

Abstract

Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user's context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. In real-life conditions the model is also likely to encounter missing sensors (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training are missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in the wild. We develop a fully-connected classification network by extending the encoder and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation together with its visual and quantitative analysis on the context classification task, demonstrating the strong generative power of adversarial autoencoders.
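
The abstract does not give implementation details; the following is a minimal sketch, assuming a PyTorch implementation with hypothetical layer sizes (FEATURE_DIM, LATENT_DIM, NUM_LABELS are illustrative, not taken from the paper), of how an adversarial autoencoder can reconstruct missing sensor features and how its encoder can be extended with a fully-connected head for multi-label context classification.

    # Hedged sketch: architecture and dimensions are assumptions, not the
    # authors' exact configuration.
    import torch
    import torch.nn as nn

    FEATURE_DIM = 225   # hypothetical number of multimodal sensor features
    LATENT_DIM = 64     # hypothetical latent size
    NUM_LABELS = 51     # hypothetical number of context labels

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
                nn.Linear(128, LATENT_DIM),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                nn.Linear(128, FEATURE_DIM),
            )
        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        # During training, distinguishes encoder outputs from samples of an
        # imposed prior, shaping the latent space adversarially.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )
        def forward(self, z):
            return self.net(z)

    class ContextClassifier(nn.Module):
        # Fully-connected head on top of the encoder for multi-label
        # context recognition (sigmoid outputs with a per-label BCE loss).
        def __init__(self, encoder):
            super().__init__()
            self.encoder = encoder
            self.head = nn.Linear(LATENT_DIM, NUM_LABELS)
        def forward(self, x):
            return self.head(self.encoder(x))

    # Imputation at inference time: mask (here, zero-fill) the missing
    # modalities, then reconstruct the full feature vector.
    encoder, decoder = Encoder(), Decoder()
    x_missing = torch.zeros(1, FEATURE_DIM)        # hypothetical sample with gaps
    x_reconstructed = decoder(encoder(x_missing))  # imputed feature vector

In this sketch the discriminator is only used during training of the autoencoder; at inference time the encoder-decoder pair fills in missing modalities, and the same encoder can be reused under the classification head.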

Keywords: adversarial learning; autoencoders; context detection; human activity recognition; imputation; sensor analytics.

MeSH terms

  • Human Activities*
  • Humans
  • Unsupervised Machine Learning*
