Reconstructed SqueezeNext with C-CBAM for offline handwritten Chinese character recognition

PeerJ Comput Sci. 2023 Aug 14:9:e1529. doi: 10.7717/peerj-cs.1529. eCollection 2023.

Abstract

Background: Handwritten Chinese character recognition (HCCR) is a challenging character recognition problem: the character set is large and many characters are visually very similar. In addition, HCCR models consume substantial computational resources at runtime, making them difficult to deploy on resource-limited development platforms.

Methods: To reduce computational cost and improve runtime efficiency, this article proposes an improved lightweight HCCR model. We reconstructed the basic modules of the SqueezeNext network so that the model is compatible with the introduced attention module and with model compression techniques. The proposed Cross-stage Convolutional Block Attention Module (C-CBAM) redeploys the Spatial Attention Module (SAM) and the Channel Attention Module (CAM) according to the feature-map characteristics of the deep and shallow layers, enhancing information interaction between them. A reformulated intra-stage convolutional kernel importance criterion, which incorporates the normalization properties of the weights, enables structured pruning in equal proportions at each stage of the model. Finally, quantization-aware training maps the 32-bit floating-point weights of the pruned model to 8-bit fixed-point weights with only minor accuracy loss.
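The 32-bit to 8-bit mapping underlying quantization-aware training can be illustrated with a minimal affine (asymmetric, per-tensor) quantization sketch. This is a generic scheme, not necessarily the paper's exact formulation; the function names and the unsigned 8-bit range are assumptions for illustration.

```python
import numpy as np

def quantize_affine(weights, num_bits=8):
    """Map float32 weights to unsigned 8-bit fixed point.

    Generic per-tensor affine scheme (assumed here for illustration):
    q = round(w / scale) + zero_point, clipped to [0, 2^bits - 1].
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin)          # float step per integer
    zero_point = int(round(qmin - w_min / scale))    # integer mapped to 0.0
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 weights from the fixed-point codes."""
    return scale * (q.astype(np.float32) - zero_point)
```

For weights spanning roughly [-1, 1], the round-trip error per weight is bounded by the scale (about 2/255 here), which is the sense in which the mapping incurs only "minor loss".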

Results: Pruning with the proposed convolutional kernel importance criterion achieves a pruning rate of 50.79% with little impact on accuracy. Combined, the optimizations compress the model to 1.06 MB while reaching 97.36% accuracy on the CASIA-HWDB dataset. Compared with the initial model, the model size is reduced by 87.15% and the accuracy improves by 1.71%. The proposed model thus greatly reduces running time and storage requirements while maintaining accuracy.
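Equal-proportion structured pruning per stage can be sketched with a common L1-norm filter-importance proxy. Note this is a hedged stand-in: the paper's criterion additionally folds in the normalization properties of the weights, which are omitted here, and the function name is hypothetical.

```python
import numpy as np

def prune_stage_filters(stage_weights, prune_ratio=0.5):
    """Rank conv filters in each layer of a stage and keep the top fraction.

    stage_weights: list of arrays shaped (out_ch, in_ch, k, k).
    Importance proxy (assumed): per-filter L1 norm; the paper's actual
    criterion also incorporates weight-normalization statistics.
    Returns, per layer, the sorted indices of filters to keep.
    """
    kept = []
    for w in stage_weights:
        scores = np.abs(w).sum(axis=(1, 2, 3))        # L1 norm per filter
        n_keep = max(1, int(round(len(scores) * (1 - prune_ratio))))
        keep_idx = np.argsort(scores)[::-1][:n_keep]  # highest-scoring filters
        kept.append(np.sort(keep_idx))
    return kept
```

Applying the same `prune_ratio` to every stage yields the equal-proportion structured pruning described above; a ratio near 0.5 matches the reported 50.79% pruning rate.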

Keywords: Attention model; CNN; Character recognition; Lightweight model.

Grants and funding

This research was funded by the Graduate Research and Innovation Projects of Jiangsu Province under grant numbers SJCX21 1517 and SJCX22 1685, the Major Basic Research Project of the Natural Science Foundation of the Jiangsu Higher Education Institutions under grant number 19KJA110002, the Natural Science Foundation of China under grant number 61673108, and the Yancheng Institute of Technology High-level Talent Research Initiation Project under grant number XJR2022001. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.