Hand gesture recognition via deep data optimization and 3D reconstruction

PeerJ Comput Sci. 2023 Oct 30;9:e1619. doi: 10.7717/peerj-cs.1619. eCollection 2023.

Abstract

Hand gesture recognition (HGR) is one of the most significant tasks for communicating with the real-world environment. Recently, gesture recognition has been extensively utilized in diverse domains, including but not limited to virtual reality, augmented reality, health diagnosis, and robot interaction. On the other hand, accurate techniques typically rely on additional modalities derived from RGB input sequences, such as optical flow, which captures motion information in images and videos. However, this approach degrades real-time performance because it demands substantial computational resources. This study introduces a robust and effective approach to hand gesture recognition, evaluated on two publicly available benchmark datasets. Initially, we performed preprocessing steps, including denoising, foreground extraction, and hand detection via connected component techniques. Next, hand segmentation was performed to detect landmarks. Further, we extracted three multi-fused feature types: geometric features, 3D point modeling and reconstruction features, and angular point features. Finally, grey wolf optimization was used to select useful features, which were fed to an artificial neural network for hand gesture recognition. The experimental results show that the proposed HGR achieved recognition accuracies of 89.92% and 89.76% on the IPN Hand and Jester datasets, respectively.
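The sketch below is not the authors' code; it is a minimal illustration of the final pipeline stage described in the abstract: grey wolf optimization (GWO) used as a wrapper feature selector whose selected features are passed to a small artificial neural network. The synthetic feature matrix, fitness function, and all hyperparameters are assumed placeholders standing in for the paper's multi-fused (geometric, 3D point, angular) features.

```python
# Hypothetical sketch of GWO-based feature selection feeding an ANN classifier.
# The data and settings are illustrative, not taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder feature matrix; in the paper this would hold the multi-fused
# geometric, 3D point, and angular point features extracted per gesture sample.
X, y = make_classification(n_samples=300, n_features=40, n_informative=12, random_state=0)


def fitness(mask):
    """Score a binary feature mask: cross-validated ANN accuracy minus a sparsity penalty."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.mean()  # small penalty for selecting many features


def gwo_feature_selection(n_wolves=8, n_iters=10, dim=X.shape[1]):
    # Continuous wolf positions in [0, 1]; thresholding at 0.5 yields a feature mask.
    wolves = rng.random((n_wolves, dim))
    scores = np.array([fitness(w > 0.5) for w in wolves])

    for it in range(n_iters):
        order = np.argsort(scores)[::-1]
        alpha, beta, delta = (wolves[order[k]].copy() for k in range(3))
        a = 2 - 2 * it / n_iters  # exploration coefficient decays from 2 to 0

        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, 0.0, 1.0)
            scores[i] = fitness(wolves[i] > 0.5)

    best = wolves[np.argmax(scores)] > 0.5
    return best, scores.max()


mask, best_score = gwo_feature_selection()
print(f"selected {mask.sum()} of {mask.size} features, CV accuracy ~ {best_score:.3f}")
```

In this reading, GWO searches over binary feature masks and the ANN's cross-validated accuracy acts as the fitness signal; the paper's actual fitness formulation and network architecture may differ.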

Keywords: Grey wolf optimization (GWO); Hand gesture recognition (HGR); Leave-one-subject-out (LOSO); Machine learning (ML).

Grants and funding

The authors received no funding for this work.