An adaptive reinforcement learning-based multimodal data fusion framework for human-robot confrontation gaming

Neural Netw. 2023 Jul:164:489-496. doi: 10.1016/j.neunet.2023.04.043. Epub 2023 May 6.

Abstract

Playing games against robots has become a widespread human-robot confrontation (HRC) application. Although many approaches have been proposed to enhance tracking accuracy by combining different sources of information, two problems remain unsolved: the limited intelligence of the robot and the weak anti-interference ability of the motion capture system. In this paper, we present an adaptive reinforcement learning (RL) based multimodal data fusion (AdaRL-MDF) framework that teaches a robot hand to play the Rock-Paper-Scissors (RPS) game with humans. It includes an adaptive learning mechanism that updates the ensemble classifier, an RL model that provides decision-making intelligence to the robot, and a multimodal data fusion structure that offers resistance to interference. The corresponding experiments verify each of these functions of the AdaRL-MDF model. Comparisons of accuracy and computation time show the high performance of the ensemble model, which combines a k-nearest neighbor (k-NN) classifier with a deep convolutional neural network (DCNN). In addition, the depth vision-based k-NN classifier achieves 100% identification accuracy, so the predicted gestures can be treated as ground truth. A demonstration illustrates the real-world feasibility of the HRC application. The theory underlying this model supports the further development of HRC intelligence.
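To make the RL component concrete, the sketch below shows one simple way a robot could learn the winning response to each recognized human gesture: a tabular, epsilon-greedy Q-learning (bandit-style) update. This is an illustrative assumption, not the paper's actual model; the recognized gesture here is randomly sampled as a stand-in for the output of the multimodal fusion classifier, and all names, rewards, and hyperparameters are hypothetical.

```python
import random

GESTURES = ["rock", "paper", "scissors"]
# Maps a gesture to the gesture that beats it.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def reward(robot, human):
    """+1 if the robot wins, 0 for a draw, -1 if it loses."""
    if robot == human:
        return 0
    return 1 if BEATS[human] == robot else -1

def train(episodes=3000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: Q[recognized_human_gesture][robot_action]
    q = {g: {a: 0.0 for a in GESTURES} for g in GESTURES}
    for _ in range(episodes):
        human = rng.choice(GESTURES)       # stand-in for the fused gesture prediction
        if rng.random() < epsilon:         # epsilon-greedy exploration
            action = rng.choice(GESTURES)
        else:
            action = max(q[human], key=q[human].get)
        # One-step (bandit-style) Q update toward the observed reward
        q[human][action] += alpha * (reward(action, human) - q[human][action])
    return q

q = train()
# Greedy policy: the learned counter-gesture for each recognized gesture.
policy = {g: max(q[g], key=q[g].get) for g in GESTURES}
```

Under this reward scheme the greedy policy converges to the counter-gesture table (play paper against rock, and so on), mirroring how an RL model can supply the "intelligence" layer on top of gesture recognition.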

Keywords: Adaptive learning; Hand gesture recognition; Human–robot confrontation; Multimodal data fusion; Multiple sensors fusion; Reinforcement learning.

MeSH terms

  • Humans
  • Learning
  • Neural Networks, Computer
  • Reinforcement, Psychology
  • Robotics*
  • Video Games*