Channel semantic mutual learning for visible-thermal person re-identification

PLoS One. 2024 Jan 19;19(1):e0293498. doi: 10.1371/journal.pone.0293498. eCollection 2024.

Abstract

Visible-infrared person re-identification (VI-ReID) is a cross-modality retrieval task that aims to match the same pedestrian between visible and infrared cameras; the modality discrepancy therefore presents a significant challenge. Most methods employ different networks to extract features that are invariant between modalities. In contrast, we propose a novel channel semantic mutual learning network (CSMN), which attributes the semantic difference between modalities to differences at the channel level and optimises the semantic consistency between channels from two perspectives: the local inter-channel semantics and the global inter-modal semantics. Meanwhile, we design a channel-level auto-guided double metric loss (CADM) to learn modality-invariant features and the sample distribution in a fine-grained manner. We conducted experiments on RegDB and SYSU-MM01, and the results validate the superiority of CSMN. In particular, on the RegDB dataset, CSMN improves the previous best Rank-1 score and mINP value by 3.43% and 0.5%, respectively. The code is available at https://github.com/013zyj/CSMN.
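The abstract does not give the exact form of the channel-level objective; purely as an illustration of the general idea of enforcing semantic consistency between corresponding channels of the two modalities, the following minimal NumPy sketch penalises cosine dissimilarity between per-channel descriptors of a visible and an infrared feature map. The function name, shapes, and loss form are assumptions for illustration, not the paper's actual CADM loss.

```python
import numpy as np

def channel_consistency_loss(f_vis, f_ir, eps=1e-8):
    """Illustrative channel-level consistency term (not the paper's CADM).

    f_vis, f_ir: arrays of shape (C, D), one D-dimensional descriptor per
    channel for the visible and infrared modalities. Returns the mean of
    (1 - cosine similarity) over corresponding channels, so identical
    channel semantics give a loss near 0 and orthogonal ones give 1.
    """
    num = np.sum(f_vis * f_ir, axis=1)                     # per-channel dot product
    den = np.linalg.norm(f_vis, axis=1) * np.linalg.norm(f_ir, axis=1) + eps
    return float(np.mean(1.0 - num / den))
```

In a full model, a term of this kind would be combined with identity and metric losses over cross-modality sample pairs; here it serves only to make the notion of "channel-level semantic consistency" concrete.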

MeSH terms

  • Humans
  • Learning
  • Pedestrians*
  • Semantic Web
  • Semantics*

Grants and funding

This work was supported by the [Research and Application of Multilingual and Multimodal Information Content Security] grant number [No. 202304120002], [National Natural Science Foundation of China] grant number [No. 202204120017], [Autonomous Region Special Research and Development Task] grant number [No. 2022B01008-2], [Autonomous Region Major Science and Technology Special Project] grant number [No. 2020A02001-1], and [Optimization of low-resolution device defect recognition algorithm based on image enhancement] grant number [No. SGXJXTOOJFJS2200076]. The funders' role in this research includes study design and the decision to publish.