Sound-Based Localization Using LSTM Networks for Visually Impaired Navigation

Sensors (Basel). 2023 Apr 17;23(8):4033. doi: 10.3390/s23084033.

Abstract

In this work, we developed a prototype that adopts a sound-based system for localizing visually impaired individuals. The system was implemented on a wireless ultrasound network that helps blind and visually impaired users navigate and maneuver autonomously. Ultrasonic systems use high-frequency sound waves to detect obstacles in the environment and provide location information to the user. Voice recognition and long short-term memory (LSTM) techniques were used to design the algorithms, and Dijkstra's algorithm was used to determine the shortest route between two places. Assistive hardware, comprising an ultrasonic sensor network, a global positioning system (GPS) receiver, and a digital compass, was used to implement the method. For the indoor evaluation, three nodes were localized on the doors of different rooms inside a house: the kitchen, bathroom, and bedroom. For the outdoor evaluation, the coordinates (latitude and longitude points) of four outdoor locations (mosque, laundry, supermarket, and home) were identified and stored in a microcomputer's memory. The results showed a root mean square error of about 0.192 for the indoor setting after 45 trials, and Dijkstra's algorithm determined the shortest route between two places with an accuracy of 97%.
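The abstract names Dijkstra's algorithm as the route-planning component. As a minimal sketch of how the shortest route between the stored outdoor waypoints could be computed, the following uses a priority-queue implementation over a small weighted graph; the edge distances between the four locations are hypothetical illustrations, not values from the paper.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path from start to goal in a weighted graph.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {start: 0.0}   # best known distance to each node
    prev = {}             # predecessor on the best path
    pq = [(0.0, start)]   # min-heap of (distance, node)
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk predecessors backward to reconstruct the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

# Hypothetical edge distances (metres) between the four outdoor waypoints
graph = {
    "home":        [("mosque", 400), ("laundry", 250), ("supermarket", 600)],
    "mosque":      [("home", 400), ("supermarket", 300)],
    "laundry":     [("home", 250), ("supermarket", 200)],
    "supermarket": [("home", 600), ("mosque", 300), ("laundry", 200)],
}

distance, route = dijkstra(graph, "home", "supermarket")
print(distance, route)  # 450.0 ['home', 'laundry', 'supermarket']
```

With these example weights the direct home-to-supermarket edge (600 m) is rejected in favour of the shorter route through the laundry (250 m + 200 m = 450 m), which is the behaviour the navigation system relies on when guiding the user.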

Keywords: indoor outdoor navigation; long short-term memory; sound source localization; visually impaired people; voice recognition.

MeSH terms

  • Algorithms
  • Geographic Information Systems
  • Humans
  • Self-Help Devices*
  • Ultrasonography
  • Visually Impaired Persons*