LSTM-MSA: A Novel Deep Learning Model With Dual-Stage Attention Mechanisms for Forearm EMG-Based Hand Gesture Recognition

IEEE Trans Neural Syst Rehabil Eng. 2023;31:4749-4759. doi: 10.1109/TNSRE.2023.3336865. Epub 2023 Dec 7.

Abstract

This paper introduces the Long Short-Term Memory with Dual-Stage Attention (LSTM-MSA) model, an approach for analyzing electromyography (EMG) signals. EMG signals are crucial in applications such as prosthetic control, rehabilitation, and human-computer interaction, but they pose inherent challenges such as non-stationarity and noise. The LSTM-MSA model addresses these challenges by combining LSTM layers with dual-stage attention mechanisms to capture the most relevant signal features and accurately predict intended actions. Notable features of the model include dual-stage attention, end-to-end integration of feature extraction and classification, and personalized training. Extensive evaluations across diverse datasets consistently demonstrate the LSTM-MSA's superiority in F1 score, accuracy, recall, and precision. This work provides a model for real-world EMG signal applications, offering improved accuracy, robustness, and adaptability.
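The abstract does not detail the exact network configuration, but the described design (LSTM layers wrapped by two attention stages feeding an end-to-end classifier) can be illustrated with a minimal sketch. The sketch below assumes one attention stage over EMG channels and one over LSTM hidden states; the class name DualStageAttentionLSTM, layer sizes, and attention formulation are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of an LSTM classifier with two attention stages for EMG
# gesture recognition. Layer sizes, attention formulation, and the class
# name are assumptions made for illustration only.
import torch
import torch.nn as nn


class DualStageAttentionLSTM(nn.Module):
    def __init__(self, n_channels: int, hidden_size: int, n_gestures: int):
        super().__init__()
        # Stage 1: attention over EMG channels at each time step.
        self.channel_attn = nn.Linear(n_channels, n_channels)
        # Recurrent backbone over the re-weighted signal.
        self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
        # Stage 2: attention over the LSTM hidden states (time steps).
        self.temporal_attn = nn.Linear(hidden_size, 1)
        # End-to-end classifier head producing gesture logits.
        self.classifier = nn.Linear(hidden_size, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) windows of preprocessed EMG.
        channel_weights = torch.softmax(self.channel_attn(x), dim=-1)
        x = x * channel_weights                  # stage-1 channel re-weighting
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        scores = self.temporal_attn(h)           # (batch, time, 1)
        alpha = torch.softmax(scores, dim=1)     # stage-2 temporal weights
        context = (alpha * h).sum(dim=1)         # (batch, hidden)
        return self.classifier(context)          # (batch, n_gestures)


if __name__ == "__main__":
    # Example: 8-channel forearm EMG, 200-sample windows, 10 gestures.
    model = DualStageAttentionLSTM(n_channels=8, hidden_size=64, n_gestures=10)
    dummy = torch.randn(4, 200, 8)
    print(model(dummy).shape)  # torch.Size([4, 10])
```

Training such a model end to end on raw or lightly filtered EMG windows is one way to realize the "feature extraction and classification integration" the abstract describes, with per-subject fine-tuning standing in for the personalized training it mentions.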

MeSH terms

  • Deep Learning*
  • Electromyography
  • Forearm*
  • Gestures
  • Humans
  • Upper Extremity