Learning-Based Motion-Intention Prediction for End-Point Control of Upper-Limb-Assistive Robots

Sensors (Basel). 2023 Mar 10;23(6):2998. doi: 10.3390/s23062998.

Abstract

The lack of intuitive and active human-robot interaction makes upper-limb-assistive devices difficult to use. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyography (EMG) sensors, and mechanomyography (MMG) sensors was implemented to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each trial were extracted and fed into traditional regression models and deep-learning models for training and testing. The models predict the hand position in planar space, which serves as the reference for low-level position controllers. The results show that using IMU sensors with the proposed prediction model is sufficient for motion-intention detection, providing almost the same prediction performance as adding EMG or MMG. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. This detailed analysis can improve the usability of assistive and rehabilitation robots.
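The abstract does not report the network architecture or hyperparameters, but the pipeline it describes (fixed-length onset-motion window → RNN regressor → planar end-point reference for a low-level position controller) can be illustrated with a minimal PyTorch sketch. All names, layer sizes, window lengths, and channel counts below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the paper's exact model): an LSTM regressor that maps a
# fixed-length onset window of IMU features to a 2-D end-point target.
import torch
import torch.nn as nn

class OnsetToEndpointLSTM(nn.Module):
    def __init__(self, n_features=12, hidden_size=64, n_layers=2):
        super().__init__()
        # n_features: e.g., stacked IMU channels (accel + gyro per sensor);
        # the actual feature set is an assumption here.
        self.lstm = nn.LSTM(n_features, hidden_size, n_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # (x, y) target in planar space

    def forward(self, x):
        # x: (batch, window_length, n_features) onset-motion window
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # regress from the last hidden state

# Training-step sketch: supervise with the recorded final hand position.
model = OnsetToEndpointLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

onset_window = torch.randn(8, 50, 12)  # dummy batch: 8 trials, 50 samples each
target_xy = torch.randn(8, 2)          # dummy planar end-point labels
pred_xy = model(onset_window)
loss = loss_fn(pred_xy, target_xy)
loss.backward()
optimizer.step()
# At run time, pred_xy would serve as the reference position passed to the
# robot's low-level position controller.
```

In this reading, the onset-window length is the quantity the abstract's reaching-versus-placing comparison varies: a shorter window for reaching and a longer horizon for placing, with the same regressor structure in both cases.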

Keywords: human–robot interaction; machine learning; motion intention detection; sensory fusion; upper limb assistive robots; wearable sensors.

MeSH terms

  • Electromyography / methods
  • Humans
  • Intention
  • Motion
  • Robotics*
  • Upper Extremity / physiology