A comparison of Arabic sign language dynamic gesture recognition models

Heliyon. 2020 Mar 14;6(3):e03554. doi: 10.1016/j.heliyon.2020.e03554. eCollection 2020 Mar.

Abstract

Arabic Sign Language (ArSL) is similar to other sign languages in the way it is gestured and interpreted, and it serves as a medium of communication between the hearing-impaired and the communities in which they live. Research investigating sensor utilization and natural user interfaces to facilitate ArSL recognition and interpretation is lacking. Previous research has demonstrated that no single classifier modeling approach is suitable for all hand gesture recognition tasks; this research therefore investigated which combination of algorithms and parameter settings, used with a sensor device, produces the highest ArSL recognition accuracy in a gesture recognition system. This research proposed a dynamic prototype model (DPM) that uses the Kinect as a sensor to recognize certain dynamic ArSL gestured words. The DPM used eleven predictive models built from three algorithms (SVM, RF, KNN) with different parameter settings. Research findings indicated that the highest recognition accuracy rates for the gestured dynamic words were achieved by the SVM models with a linear kernel and cost parameter C = 0.035.

Keywords: Arabic sign language; Classification; Computer science; Dynamic gesture recognition models; Machine learning.
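The kind of classifier comparison the abstract describes can be sketched with scikit-learn. This is a minimal illustration, not the paper's implementation: the Kinect feature-extraction pipeline is out of scope here, so synthetic feature vectors stand in for the real gesture data, and the five-class setup and most hyperparameters are placeholder assumptions. Only the SVM's linear kernel and cost parameter C = 0.035 come from the abstract.

```python
# Hedged sketch of comparing SVM, Random Forest, and KNN classifiers
# on gesture-style feature vectors. Synthetic data replaces the real
# Kinect features; C = 0.035 for the linear SVM is the value reported
# in the abstract, all other settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder for per-gesture feature vectors (e.g. flattened joint
# trajectories); 5 classes mimic a small dynamic-word vocabulary.
X, y = make_classification(n_samples=500, n_features=60, n_informative=30,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "SVM (linear, C=0.035)": SVC(kernel="linear", C=0.035),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
}

# Fit each model and record its held-out accuracy.
results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    results[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {results[name]:.3f}")
```

In practice, the relative ranking of the three classifiers would depend on the real extracted gesture features; the paper's finding was that the linear-kernel SVM configuration performed best on its data.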