Fly-LeNet: A deep learning-based framework for converting multilingual braille images

Heliyon. 2024 Feb 14;10(4):e26155. doi: 10.1016/j.heliyon.2024.e26155. eCollection 2024 Feb 29.

Abstract

For many years, braille-assistive technologies have helped blind individuals read, write, learn, and communicate with sighted individuals. These technologies have been instrumental in promoting inclusivity and breaking down communication barriers in the lives of blind people. One such technology is the Optical Braille Recognition (OBR) system, which facilitates communication between sighted and blind individuals. However, current OBR systems lack the ability to convert braille documents into multilingual texts, making it difficult for sighted individuals to read braille documents or to learn braille through self-study. To address this gap, we propose a segmentation- and deep learning-based approach named Fly-LeNet that converts braille images into multilingual texts. The approach comprises image acquisition, preprocessing, segmentation using the Mayfly optimization algorithm combined with a thresholding method, and a braille-to-multilingual-text mapping step. It employs a deep learning model, LeNet-5, to recognize braille cells. We evaluated the performance of Fly-LeNet through several experiments on two datasets of braille images. Dataset-1 consists of 1404 labeled samples of 27 braille signs representing alphabet letters, while Dataset-2 comprises 5420 labeled samples of 37 braille symbols representing letters, numbers, and punctuation marks, of which 2000 samples were used for cross-validation. The proposed model achieved high classification accuracies of 99.77% and 99.80% on the test sets of the first and second datasets, respectively. The results demonstrate the potential of Fly-LeNet for multilingual braille transformation, enabling effective communication between blind and sighted individuals.

Keywords: Braille; Deep learning (DL); Multilingual braille images.
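
To illustrate the recognition stage the abstract describes, the sketch below shows a LeNet-5-style classifier for segmented braille-cell crops. This is a minimal illustration, not the authors' implementation: the use of PyTorch, the 28x28 grayscale input resolution, and the 37-class output (matching Dataset-2's symbol count) are all assumptions.

```python
import torch
import torch.nn as nn

class BrailleLeNet5(nn.Module):
    """LeNet-5-style classifier for braille cell images.

    Assumptions (not specified in the abstract): 28x28 grayscale
    inputs and 37 output classes, matching Dataset-2's symbol count.
    """
    def __init__(self, num_classes: int = 37):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2),                # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2),                # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),  # logits over braille symbols
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of segmented braille-cell crops.
model = BrailleLeNet5()
dummy_batch = torch.randn(8, 1, 28, 28)  # placeholder for 8 grayscale crops
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([8, 37])
```

Once each cell is classified into one of the braille symbols, the multilingual mapping step described in the abstract would translate the predicted symbol sequence into text in the target language; that lookup is outside the scope of this sketch.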