American Sign Language Alphabet Recognition Using a Neuromorphic Sensor and an Artificial Neural Network

Sensors (Basel). 2017 Sep 22;17(10):2176. doi: 10.3390/s17102176.

Abstract

This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware on a Field-Programmable Gate Array (FPGA). The system operates in three stages. The first is communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) over the Universal Serial Bus protocol. The second is feature extraction from the events generated by the DVS, using digital image processing algorithms developed in software that reduce redundant information and prepare the data for the third stage. The last stage is classification of the ASL alphabet, performed by a single artificial neural network implemented in digital hardware for higher speed. The overall result is a classification system based on the contours of the ASL signs, fully implemented in a reconfigurable device. The experimental results comprise a comparative analysis of the recognition rate across the alphabet signs captured with the neuromorphic camera, which verifies the correct operation of the digital image processing algorithms. In experiments with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.
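The three stages described above can be sketched in software. The following is a minimal, hypothetical illustration only: it assumes a 128×128 DVS, a simple morphological contour operator, and a one-hidden-layer network with illustrative sizes. None of the function names, dimensions, or the specific contour algorithm come from the paper, whose actual pipeline runs on an FPGA.

```python
import numpy as np

def events_to_frame(events, size=128):
    """Stage 1 (sketch): accumulate (x, y) DVS events into a binary image.
    A real DVS streams timestamped polarity events over USB; here we
    assume they have already been decoded into coordinates."""
    frame = np.zeros((size, size), dtype=np.uint8)
    for x, y in events:
        frame[y, x] = 1
    return frame

def contour(frame):
    """Stage 2 (sketch): keep only active pixels that have at least one
    inactive 4-neighbour, i.e. a simple morphological contour. The
    paper's feature-extraction algorithms are not specified here."""
    padded = np.pad(frame, 1)  # zero border so edges count as inactive
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return (frame * (interior == 0)).astype(np.uint8)

def classify(features, W1, b1, W2, b2):
    """Stage 3 (sketch): one-hidden-layer network returning the index of
    the predicted sign among the 24 classes."""
    h = np.maximum(W1 @ features + b1, 0.0)  # hidden layer, ReLU
    return int(np.argmax(W2 @ h + b2))
```

In use, the contour image would be flattened into a feature vector and fed to trained weights; in the paper this last stage is realized as a hardware neural network on the FPGA rather than in floating-point software.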

Keywords: contour detection; feature extraction; hand posture recognition.

MeSH terms

  • Algorithms
  • Humans
  • Image Processing, Computer-Assisted / instrumentation*
  • Neural Networks, Computer*
  • Sign Language*
  • Software
  • United States