Surface-Electromyography-Based Gesture Recognition by Multi-View Deep Learning

IEEE Trans Biomed Eng. 2019 Oct;66(10):2964-2973. doi: 10.1109/TBME.2019.2899222. Epub 2019 Feb 13.

Abstract

Gesture recognition using sparse multichannel surface electromyography (sEMG) is a challenging problem, and existing solutions remain far from optimal from the perspective of muscle-computer interfaces. In this paper, we address this problem in the context of multi-view deep learning. A novel multi-view convolutional neural network (CNN) framework is proposed by combining classical sEMG feature sets with a CNN-based deep learning model. The framework consists of two parts. In the first part, multi-view representations of sEMG are modeled in parallel by a multistream CNN, and a performance-based view construction strategy is proposed to choose the most discriminative views from classical feature sets for sEMG-based gesture recognition. In the second part, the learned multi-view deep features are fused through a view aggregation network composed of early and late fusion subnetworks, taking advantage of both fusion strategies. Evaluations on 11 sparse multichannel sEMG databases as well as five databases with both sEMG and inertial measurement unit data demonstrate that our multi-view framework outperforms single-view methods on both unimodal and multimodal sEMG data streams.
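To make the described architecture concrete, the sketch below illustrates the general idea in PyTorch: several classical sEMG feature "views" are processed by parallel CNN streams, and the learned per-view features are then combined by an aggregation step with an early (feature-concatenation) branch and a late (per-view classifier) branch. The layer sizes, view shapes, and fusion details here are illustrative assumptions, not the authors' published architecture.

# Minimal sketch of a multi-view, multistream CNN with early/late fusion.
# All hyperparameters (stream depth, feature dimension, pooling, the way the
# two fusion branches are combined) are assumptions for illustration only.

import torch
import torch.nn as nn


class ViewStream(nn.Module):
    """One CNN stream for a single sEMG feature view (channels x time)."""

    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x):  # x: (batch, in_channels, time)
        return self.conv(x).squeeze(-1)  # (batch, feat_dim)


class MultiViewNet(nn.Module):
    """Parallel view streams plus early/late fusion of the learned features."""

    def __init__(self, view_channels, num_classes, feat_dim: int = 64):
        super().__init__()
        self.streams = nn.ModuleList(
            ViewStream(c, feat_dim) for c in view_channels
        )
        # Early fusion: classify from the concatenated deep features.
        self.early_head = nn.Linear(feat_dim * len(view_channels), num_classes)
        # Late fusion: one classifier per view, scores averaged.
        self.late_heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in view_channels
        )

    def forward(self, views):  # views: list of tensors, one per feature view
        feats = [s(v) for s, v in zip(self.streams, views)]
        early = self.early_head(torch.cat(feats, dim=1))
        late = torch.stack(
            [h(f) for h, f in zip(self.late_heads, feats)]
        ).mean(dim=0)
        return (early + late) / 2  # combine both fusion branches


# Example: three feature views over 10 sEMG channels, 7 gesture classes.
model = MultiViewNet(view_channels=[10, 10, 10], num_classes=7)
views = [torch.randn(4, 10, 50) for _ in range(3)]
print(model(views).shape)  # torch.Size([4, 7])

In practice, each "view" would be a windowed classical feature representation (for example, time-domain or frequency-domain feature maps) selected by the performance-based view construction strategy described in the abstract; the random tensors above merely stand in for such inputs.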

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Datasets as Topic
  • Deep Learning*
  • Electromyography / methods*
  • Gestures*
  • Humans
  • User-Computer Interface*