Display-Semantic Transformer for Scene Text Recognition

Sensors (Basel). 2023 Sep 28;23(19):8159. doi: 10.3390/s23198159.

Abstract

Linguistic knowledge aids scene text recognition by providing semantic information that refines the predicted character sequence. A purely visual model focuses only on the visual texture of characters and does not actively learn linguistic information, which leads to poor recognition rates on noisy (e.g., distorted or blurry) images. To address these issues, our approach, the Display-Semantic Transformer (DST), builds on recent Vision Transformer results and constructs a masked language model and a semantic-visual interaction module. The masked language model mines deep semantic information from images to assist scene text recognition and improve the robustness of the model. The semantic-visual interaction module couples semantic information with visual features, so that the visual features are enhanced by semantics and recognition improves. Experimental results show that our model improves average recognition accuracy on six benchmark test sets by nearly 2% over the baseline, while retaining a small parameter count and fast inference speed, achieving a better balance between accuracy and speed.
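
The abstract describes a semantic-visual interaction module that lets semantic information enhance visual features via cross-modal attention. The following is a minimal illustrative sketch of such a fusion block; the use of PyTorch, the module name, the query/key roles, and all dimensions are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of a cross-modal (semantic-visual) attention block.
# All names and dimensions are hypothetical; the paper's design may differ.
import torch
import torch.nn as nn


class SemanticVisualInteraction(nn.Module):
    """Fuses semantic (language) embeddings with visual features via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Semantic tokens act as queries; visual feature tokens act as keys/values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, semantic: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # semantic: (B, T, dim) character-level semantic embeddings
        # visual:   (B, N, dim) flattened visual feature tokens
        attended, _ = self.cross_attn(query=semantic, key=visual, value=visual)
        # Residual connection so semantic cues refine, rather than replace, visual features.
        return self.norm(semantic + attended)


# Hypothetical usage
fusion = SemanticVisualInteraction(dim=256, num_heads=8)
vis = torch.randn(2, 64, 256)   # e.g., an 8x8 visual feature map flattened to 64 tokens
sem = torch.randn(2, 25, 256)   # e.g., up to 25 character positions
out = fusion(sem, vis)          # (2, 25, 256) semantically refined features
```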

Keywords: cross-modal attention; linguistic knowledge; scene text recognition; transformer; visual information.

Grants and funding

This study was supported by the National Natural Science Foundation of China (NSFC) Joint Fund projects "Research on Basic Theory and Key Technology of Discrete Intelligent Manufacturing Based on Industrial Big Data" (U1911401, 2020.01–2023.12) and "Research on Key Technology of the Uyghur-Chinese Speech Translation System" (U1603262, 2.42 million).