MFA: A Smart Glove with Multimodal Intent Sensing Capability

Comput Intell Neurosci. 2022 Jul 11;2022:3545850. doi: 10.1155/2022/3545850. eCollection 2022.

Abstract

At present, virtual-reality fusion smart experiments mostly rely on visual perception devices to collect user behavior data, but this approach is hindered by distance, viewing angle, occlusion, lighting, and various other constraints of indoor interactive input devices. Moreover, the traditional multimodal fusion algorithm (TMFA) in essence analyzes the user's experimental intent serially, one modality at a time, and therefore cannot fully exploit the intent information carried by each modality. This paper therefore designs a multimodal fusion algorithm (hereinafter referred to as MFA) that fuses the user's experimental intent across modalities in parallel; in essence, the MFA fuses the intent probabilities estimated from each modality. In addition, this paper designs a smart glove for virtual-reality fusion experiments that integrates multichannel data such as voice, vision, and sensor readings. The smart glove can not only capture the user's experimental intent but also navigate, guide, or warn against the user's operations, and it offers stronger perception capability than existing data gloves and smart experimental devices. The experimental results demonstrate that the smart glove presented in this paper can be widely employed in chemical experiment teaching based on virtual-reality fusion.
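To make the parallel-fusion idea concrete, the sketch below shows one plausible reading of fusing per-modality intent probabilities: each channel (voice, vision, glove sensors) independently produces a probability distribution over candidate intents, and the distributions are combined in parallel rather than processed serially. The intent labels, modality names, weighting scheme, and the log-linear pooling rule are illustrative assumptions, not the paper's published MFA.

```python
import numpy as np

# Hypothetical candidate experimental intents (illustrative only;
# not taken from the paper).
INTENTS = ["pour", "stir", "heat", "observe"]

def fuse_intent_probabilities(modal_probs, weights=None):
    """Fuse per-modality intent distributions in parallel.

    modal_probs: dict mapping modality name -> probability vector
                 over INTENTS (each vector sums to 1).
    weights:     optional dict of per-modality reliability weights.

    Returns the normalized fused distribution, using weighted
    log-linear pooling (a product-of-experts style combination);
    the actual MFA fusion rule may differ.
    """
    if weights is None:
        weights = {m: 1.0 for m in modal_probs}
    # Work in log space so the parallel combination is a weighted sum
    # over all modalities at once, not a serial per-channel pipeline.
    log_fused = np.zeros(len(INTENTS))
    for modality, probs in modal_probs.items():
        p = np.asarray(probs, dtype=float)
        log_fused += weights[modality] * np.log(p + 1e-12)
    fused = np.exp(log_fused - log_fused.max())  # numerical stability
    return fused / fused.sum()

# Example: voice, vision, and glove-sensor channels each vote in parallel.
probs = fuse_intent_probabilities({
    "voice":  [0.70, 0.10, 0.10, 0.10],
    "vision": [0.50, 0.30, 0.10, 0.10],
    "sensor": [0.60, 0.20, 0.10, 0.10],
})
print(dict(zip(INTENTS, probs.round(3))))  # "pour" dominates the fused intent
```

Because every modality contributes to the fused distribution simultaneously, no single channel's early decision can discard intent evidence from the others, which is the stated advantage of MFA over the serial TMFA.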

MeSH terms

  • Algorithms*
  • Intention
  • Virtual Reality*
  • Visual Perception