Sign Language Motion Generation from Sign Characteristics

Sensors (Basel). 2023 Nov 23;23(23):9365. doi: 10.3390/s23239365.

Abstract

This paper proposes, analyzes, and evaluates a deep learning architecture based on transformers for generating sign language motion from sign phonemes (represented using HamNoSys, a notation system developed at the University of Hamburg). The sign phonemes provide information about sign characteristics such as hand configuration, location, and movement. The use of sign phonemes is crucial for generating sign motion with a high level of detail (including finger extensions and flexions). The transformer-based approach also includes a stop detection module for predicting the end of the generation process. Both aspects, motion generation and stop detection, are evaluated in detail. For motion generation, the dynamic time warping (DTW) distance is used to compute the similarity between two landmark sequences (ground truth and generated). The stop detection module is evaluated considering detection accuracy and ROC (receiver operating characteristic) curves. The paper proposes and evaluates several strategies to obtain the system configuration with the best performance, including different padding schemes, interpolation approaches, and data augmentation techniques. The best configuration of a fully automatic system obtains an average DTW distance per frame of 0.1057 and an area under the ROC curve (AUC) higher than 0.94.
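To illustrate the evaluation metric described above, the following is a minimal sketch of a per-frame DTW distance between a ground-truth and a generated landmark sequence. The frame-wise Euclidean distance and the normalization by the reference length are assumptions for illustration; the paper's exact distance and normalization may differ.

```python
import numpy as np

def dtw_distance_per_frame(ref: np.ndarray, gen: np.ndarray) -> float:
    """Average DTW distance per frame between two landmark sequences.

    ref, gen: arrays of shape (T, D), where T is the number of frames and
    D the number of flattened landmark coordinates. Normalizing by the
    reference length is an assumption made here for illustration.
    """
    n, m = len(ref), len(gen)
    # Classic dynamic-programming cost matrix for DTW alignment.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two landmark frames.
            d = np.linalg.norm(ref[i - 1] - gen[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m]) / n
```

Because DTW aligns sequences non-linearly in time, a generated sequence that reproduces the ground-truth poses at a slightly different pace can still score a low per-frame distance.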

Keywords: HamNoSys; interpolation; landmarks extraction; motion dataset; motion generation; padding strategies; sign language; sign phonemes.

MeSH terms

  • Algorithms*
  • Hand
  • Humans
  • Motion
  • Movement
  • Sign Language*

Grants and funding

M. Villa-Monedero’s scholarship has been supported by Amazon through the IPTC-Amazon collaboration.