Integration of Multi-Head Self-Attention and Convolution for Person Re-Identification

Sensors (Basel). 2022 Aug 21;22(16):6293. doi: 10.3390/s22166293.

Abstract

Person re-identification is essential to intelligent video analytics; its results affect downstream tasks such as behavior and event analysis. However, most existing models consider only accuracy and neglect computational complexity, which also matters in practical deployment. We note that self-attention is a powerful technique for representation learning and can work with convolution to learn more discriminative feature representations for re-identification. We propose an improved multi-scale feature learning structure, DM-OSNet, which outperforms the original OSNet. DM-OSNet replaces the 9×9 convolutional stream in OSNet with multi-head self-attention. To keep the model efficient, we use double-layer multi-head self-attention, which reduces the computational complexity of the original multi-head self-attention from O((H×W)²) to O(H×W×G²). To further improve performance, we use SpCL to perform unsupervised pre-training on the large-scale unlabeled pedestrian dataset LUPerson. Our DM-OSNet achieves an mAP of 87.36%, 78.26%, 72.96%, and 57.13% on the Market1501, DukeMTMC-reID, CUHK03, and MSMT17 datasets, respectively.
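The G² term suggests attention restricted to local spatial groups. Below is a minimal sketch of such grouped multi-head self-attention, assuming attention is computed independently within non-overlapping G×G windows of the feature map so that cost scales as O(H×W×G²) rather than O((H×W)²); the class name GroupedMHSA, the group_size parameter, and the use of PyTorch's nn.MultiheadAttention are illustrative assumptions, not the paper's implementation.

# Hedged sketch: multi-head self-attention over non-overlapping G x G spatial groups.
# Assumption (not taken from the paper's text): attention is computed independently
# inside each G x G group, so the cost is O(H*W*G^2) instead of O((H*W)^2) for
# global attention. Names such as GroupedMHSA and group_size are illustrative only.
import torch
import torch.nn as nn


class GroupedMHSA(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, group_size: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.g = group_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; H and W assumed divisible by group_size.
        b, c, h, w = x.shape
        g = self.g
        # Partition the map into (H/g * W/g) non-overlapping g x g groups.
        x = x.view(b, c, h // g, g, w // g, g)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, g * g, c)  # (B*nGroups, g*g, C)
        # Self-attention within each group: sequence length is g*g, not H*W.
        out, _ = self.attn(x, x, x, need_weights=False)
        # Reverse the partition back to a (B, C, H, W) feature map.
        out = out.view(b, h // g, w // g, g, g, c)
        out = out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return out


if __name__ == "__main__":
    feat = torch.randn(2, 256, 24, 8)          # e.g. an OSNet-like feature map
    mhsa = GroupedMHSA(dim=256, num_heads=4, group_size=4)
    print(mhsa(feat).shape)                    # torch.Size([2, 256, 24, 8])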

Keywords: attention; person re-identification; surveillance.

MeSH terms

  • Humans
  • Learning
  • Neural Networks, Computer*
  • Pattern Recognition, Automated / methods
  • Pedestrians*

Grants and funding

This work was supported by the Science and Technology Projects of State Grid Corporation (No. 1400-202157214A-0-0-00).