Pain intensity estimation based on a spatial transformation and attention CNN

PLoS One. 2020 Aug 21;15(8):e0232412. doi: 10.1371/journal.pone.0232412. eCollection 2020.

Abstract

Models that detect disease-related abnormalities from facial structure are an emerging area of research in automated facial analysis, with important potential value for smart healthcare applications. However, most proposed models directly analyze the whole face image, including the background, and rarely consider how the background and different face regions affect the analysis results. In view of these effects, we propose an end-to-end attention network with spatial transformation to estimate different pain intensities. In the proposed method, the face image is first passed to a spatial transformer network to address background interference; an attention mechanism then adaptively adjusts the weights of different regions of the transformed face; finally, a convolutional neural network (CNN) with a softmax output classifies the pain level. Extensive experiments and analysis are conducted on the publicly available UNBC-McMaster shoulder pain benchmark database. To verify the superiority of the proposed method, we compare it against basic CNNs and state-of-the-art methods. The experiments show that the spatial transformation and attention mechanism introduced in our method significantly improve estimation performance and outperform the state of the art.
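To make the three-stage pipeline concrete, the following is a minimal PyTorch-style sketch of a spatial transformer followed by spatial attention and a CNN classifier. The PainEstimator class name, the layer sizes, the 64x64 grayscale input, and the four pain-level classes are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed configuration): spatial transformer -> spatial
# attention -> CNN classifier with softmax at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PainEstimator(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Localization network: predicts the 2x3 affine matrix used by the
        # spatial transformer to align the face and suppress background.
        self.loc_net = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.loc_fc = nn.Sequential(
            nn.Linear(10 * 12 * 12, 32), nn.ReLU(), nn.Linear(32, 6)
        )
        # Initialize the transform to the identity.
        self.loc_fc[-1].weight.data.zero_()
        self.loc_fc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        # Feature extractor applied to the transformed face.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Spatial attention: a 1x1 conv + sigmoid yields per-location weights
        # that re-weight face regions before classification.
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, 64, 64) grayscale face images.
        theta = self.loc_fc(self.loc_net(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)    # transformed face

        feat = self.features(x)                            # (N, 64, 16, 16)
        feat = feat * self.attention(feat)                 # region re-weighting
        feat = feat.mean(dim=(2, 3))                       # global average pool
        return self.classifier(feat)                       # class logits


if __name__ == "__main__":
    model = PainEstimator(num_classes=4)
    logits = model(torch.randn(2, 1, 64, 64))
    probs = F.softmax(logits, dim=1)   # pain-level probabilities
    print(probs.shape)                 # torch.Size([2, 4])
```

In this sketch the transformer crops and aligns the face so background pixels contribute less, and the attention map learned by the 1x1 convolution emphasizes the face regions most informative for the pain level; the exact number of pain levels depends on how the database labels are binned.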

MeSH terms

  • Attention*
  • Humans
  • Image Processing, Computer-Assisted
  • Neural Networks, Computer*
  • Pain Perception*

Grants and funding

The authors received no specific funding for this work.