AAformer: Auto-Aligned Transformer for Person Re-Identification

IEEE Trans Neural Netw Learn Syst. 2023 Aug 25:PP. doi: 10.1109/TNNLS.2023.3301856. Online ahead of print.

Abstract

In person re-identification (re-ID), extracting part-level features from person images has proved crucial for providing fine-grained information. Most existing CNN-based methods either locate human parts only coarsely or rely on pretrained human-parsing models, and they fail to locate identifiable nonhuman parts (e.g., a knapsack). In this article, we introduce an alignment scheme into the transformer architecture for the first time and propose the auto-aligned transformer (AAformer) to automatically locate both human and nonhuman parts at the patch level. We introduce "Part tokens (PARTs)", which are learnable vectors, to extract part features in the transformer. A PART interacts with only a local subset of patches in self-attention and learns to be the representation of that part. To adaptively group the image patches into different subsets, we design auto-alignment. Auto-alignment employs a fast variant of the optimal transport (OT) algorithm to cluster the patch embeddings online into several groups, with the PARTs as their prototypes. AAformer integrates part alignment into self-attention, and the output PARTs can be used directly as part features for retrieval. Extensive experiments validate the effectiveness of PARTs and the superiority of AAformer over various state-of-the-art methods.
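The auto-alignment step described above can be illustrated with a minimal sketch. This is not the authors' code: it assumes a Sinkhorn-Knopp iteration as the "fast variant of OT" (a common choice for online clustering), and all names, shapes, and hyperparameters (`eps`, `n_iters`, 4 PARTs, 14x14 patches) are illustrative assumptions.

```python
# Hedged sketch: soft-assigning patch embeddings to PART prototypes
# with a Sinkhorn-Knopp iteration (an assumed fast OT variant).
import numpy as np

def sinkhorn_assign(patches, parts, eps=0.05, n_iters=3):
    """Soft-assign N patch embeddings to K PART prototypes.

    patches: (N, D) array; parts: (K, D) array. Returns an (N, K)
    transport plan whose rows sum to ~1/N and columns to 1/K, so each
    PART prototype receives an equal share of the patches.
    """
    sim = patches @ parts.T                 # (N, K) similarity scores
    Q = np.exp(sim / eps)                   # Gibbs kernel
    Q /= Q.sum()
    N, K = Q.shape
    for _ in range(n_iters):                # alternate row/col scaling
        Q /= Q.sum(axis=1, keepdims=True); Q /= N   # rows -> 1/N
        Q /= Q.sum(axis=0, keepdims=True); Q /= K   # cols -> 1/K
    return Q

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 64))        # e.g. 14x14 ViT patch embeddings
parts = rng.normal(size=(4, 64))            # e.g. 4 learnable PART tokens
Q = sinkhorn_assign(patches, parts)
groups = Q.argmax(axis=1)                   # hard group index per patch
```

Each PART would then attend only to the patches assigned to its group, which is how the abstract describes restricting a PART's self-attention to a local subset of patches.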