HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning

IEEE J Biomed Health Inform. 2022 Mar;26(3):939-951. doi: 10.1109/JBHI.2021.3085602. Epub 2022 Mar 7.

Abstract

Nowadays, with the development of various sensors in smartphones and wearable devices, human activity recognition (HAR) has been widely researched and has numerous applications in healthcare, smart cities, etc. Many techniques based on hand-crafted feature engineering or deep neural networks have been proposed for sensor-based HAR. However, these existing methods usually recognize activities offline, which means the whole dataset must be collected before training, occupying large-capacity storage space. Moreover, once offline model training has finished, the trained model cannot recognize new activities unless it is retrained from scratch, incurring a high cost in time and space. In this paper, we propose a multi-modality incremental learning model, called HarMI, with continuous learning ability. The proposed HarMI model can start training quickly with little storage space and easily learn new activities without storing previous training data. In detail, we first adopt an attention mechanism to align heterogeneous sensor data with different sampling frequencies. In addition, to overcome catastrophic forgetting in incremental learning, HarMI utilizes elastic weight consolidation and canonical correlation analysis from a multi-modality perspective. Extensive experiments on two public datasets demonstrate that HarMI achieves superior performance compared with several state-of-the-art methods.
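The abstract mentions elastic weight consolidation (EWC) as one of the tools HarMI uses against catastrophic forgetting. As a rough illustration of the general idea (not the paper's actual implementation), EWC adds a quadratic penalty that anchors each parameter to its value after the previous task, weighted by an estimate of that parameter's importance (typically the diagonal of the Fisher information). A minimal NumPy sketch, with all variable names and toy values being illustrative assumptions:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Elastic-weight-consolidation regularizer.

    Penalizes the drift of each parameter from its value after the
    previous task (theta_star), weighted by a per-parameter importance
    estimate (fisher, e.g. a diagonal Fisher information approximation).
    lam trades off old-task retention against new-task learning.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example: three parameters; the second is deemed most important,
# so drifting it away from its old value is penalized most heavily.
theta_star = np.array([0.5, -1.0, 2.0])   # parameters after the old task
fisher = np.array([0.1, 5.0, 0.1])        # importance weights
theta = np.array([0.6, -0.8, 2.1])        # parameters during new-task training

loss = ewc_penalty(theta, theta_star, fisher, lam=1.0)
```

During incremental training this penalty would be added to the new task's loss, so gradients push unimportant parameters freely while important ones stay near their previous values.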

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Human Activities*
  • Humans
  • Machine Learning
  • Neural Networks, Computer
  • Smartphone
  • Wearable Electronic Devices*