Papers Recommended

[ICCV] Cross-Modality Person Re-Identification via Modality Confusion and Center Aggregation

Publication Date: 2022-07-17


Shared by: Jianbing Wu
Research direction: Person ReID
Title: Cross-Modality Person Re-Identification via Modality Confusion and Center Aggregation
Authors: Xin Hao, Sanyuan Zhao, Mang Ye, Jianbing Shen
Institution: Beijing Institute of Technology
Abstract: Cross-modality person re-identification is a challenging task due to large cross-modality discrepancy and intra-modality variations. Currently, most existing methods focus on learning modality-specific or modality-shareable features by using the identity supervision or modality label. Different from existing methods, this paper presents a novel Modality Confusion Learning Network (MCLNet). Its basic idea is to confuse two modalities, ensuring that the optimization is explicitly concentrated on the modality-irrelevant perspective. Specifically, MCLNet is designed to learn modality-invariant features by simultaneously minimizing inter-modality discrepancy while maximizing cross-modality similarity among instances in a single framework. Furthermore, an identity-aware marginal center aggregation strategy is introduced to extract the centralization features, while keeping diversity with a marginal constraint. Finally, we design a camera-aware learning scheme to enrich the discriminability. Extensive experiments on SYSU-MM01 and RegDB datasets show that MCLNet outperforms the state-of-the-art by a large margin. On the large-scale SYSU-MM01 dataset, our model can achieve 65.40% and 61.98% in terms of Rank-1 accuracy and mAP value.
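
The abstract describes the identity-aware marginal center aggregation only at a high level. As a rough illustration (not the authors' released code), the PyTorch sketch below shows one way a center-aggregation loss with a margin could be written: each embedding is pulled toward its identity center, but only the part of the distance beyond a margin is penalized, so samples are centralized without collapsing intra-identity diversity. The function name marginal_center_loss, the margin value, and the feature dimensions are illustrative assumptions; MCLNet's actual formulation (and its modality-confusion and camera-aware components) may differ.

    import torch
    import torch.nn.functional as F

    def marginal_center_loss(features, labels, margin=0.1):
        # Hypothetical sketch: pull each embedding toward the mean ("center") of
        # its identity, penalizing only the distance beyond `margin`.
        loss = features.new_zeros(())
        ids = labels.unique()
        for pid in ids:
            feats = features[labels == pid]            # all embeddings of one identity
            center = feats.mean(dim=0, keepdim=True)   # identity center
            dist = (feats - center).norm(dim=1)        # L2 distance to the center
            loss = loss + F.relu(dist - margin).mean() # margin-relaxed aggregation
        return loss / ids.numel()

    # Toy usage: a batch of 8 embeddings covering 4 identities.
    feats = torch.randn(8, 256)
    pids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    print(marginal_center_loss(feats, pids))

In this sketch the margin acts as the "marginal constraint" mentioned in the abstract: samples already within the margin of their identity center contribute no loss, which keeps some diversity among features of the same person.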