Exploring human activity recognition using feature level fusion of inertial and electromyography data

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1766-1769. doi: 10.1109/EMBC48229.2022.9870909.

Abstract

Wearables are objective tools for human activity recognition (HAR). Advances in wearables enable synchronized multi-sensing within a single device. This has resulted in studies investigating the use of single or multiple wearable sensor modalities for HAR. Some studies use inertial data, while others use surface electromyography (sEMG) from multiple muscles with different post-processing approaches. Yet, questions remain about the accuracy of, for example, multi-modal approaches and different sEMG post-processing methods. Here, we explored how inertial and sEMG data could be efficiently combined with machine learning, and which post-processing methods yield better HAR. This study aims to recognize four basic daily life activities: walking, standing, stair ascent, and stair descent. First, we created a new feature vector based on domain knowledge gained from previous mobility studies. Then, a feature-level data fusion approach was used to combine the inertial and sEMG data. Finally, two supervised learning classifiers (Support Vector Machine, SVM, and k-Nearest Neighbors, kNN) were tested with 5-fold cross-validation. Results show that adding inertial data to sEMG increased overall accuracy by 3.5% (SVM) and 6.3% (kNN). Extracting features from linear envelopes instead of bandpass-filtered sEMG improved overall HAR accuracy for both classifiers. Clinical Relevance: Post-processing of sEMG signals can improve the performance of multimodal HAR.
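The abstract outlines the pipeline but not its implementation details. The Python sketch below illustrates the general approach it describes: computing a linear envelope from sEMG, extracting windowed features from both modalities, concatenating them (feature-level fusion), and evaluating SVM and kNN with 5-fold cross-validation. All specifics here, including the sampling rate, filter cutoffs, the feature set, and the kNN neighbor count, are illustrative assumptions and not the authors' settings.

```python
# Minimal sketch (assumptions noted in comments): feature-level fusion of
# inertial and sEMG features, evaluated with SVM and kNN under 5-fold CV.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

FS_EMG = 1000  # Hz, assumed sEMG sampling rate (not given in the abstract)

def linear_envelope(emg, fs=FS_EMG, band=(20, 450), lp_cut=6):
    """Band-pass filter, full-wave rectify, then low-pass filter the sEMG
    to obtain its linear envelope. Cutoffs are illustrative assumptions."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    rectified = np.abs(filtfilt(b, a, emg))
    b, a = butter(4, lp_cut / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

def window_features(signal):
    """Simple time-domain features per window; the paper's domain-knowledge
    feature vector is not specified in the abstract, so these stand in."""
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max(),
                     np.sqrt(np.mean(signal ** 2))])  # last entry is RMS

def fuse_features(imu_window, emg_window):
    """Feature-level fusion: concatenate per-channel feature vectors from
    the inertial window and the sEMG linear envelopes into one vector."""
    imu_feats = np.concatenate([window_features(imu_window[:, ch])
                                for ch in range(imu_window.shape[1])])
    emg_feats = np.concatenate([window_features(linear_envelope(emg_window[:, ch]))
                                for ch in range(emg_window.shape[1])])
    return np.concatenate([imu_feats, emg_feats])

def evaluate(X, y):
    """X: fused feature matrix (n_windows x n_features); y: activity labels
    (walking, standing, stair ascent, stair descent)."""
    for name, clf in [("SVM", SVC(kernel="rbf")),
                      ("kNN", KNeighborsClassifier(n_neighbors=5))]:
        model = make_pipeline(StandardScaler(), clf)
        scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Dropping the `linear_envelope` call in `fuse_features` and extracting features from the band-pass-filtered signal instead would reproduce the comparison the abstract reports, in which envelope-based features gave better HAR accuracy for both classifiers.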

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Electromyography / methods
  • Human Activities*
  • Humans
  • Support Vector Machine*