Exploring the Camera Bias of Person Re-identification
Myungseo Song, Jin-Woo Park, Jong-Seok Lee
arXiv.org Artificial Intelligence
We empirically investigate the camera bias of person re-identification (ReID) models. Camera-aware methods have previously been proposed to address this issue, but they are largely confined to the training domains of the models. We measure the camera bias of ReID models on unseen domains and reveal that camera bias becomes more pronounced under data distribution shifts. As a debiasing method for unseen-domain data, we revisit feature normalization on embedding vectors. While normalization has been used as a straightforward solution, why it works and how broadly it applies have remained unexplored. We analyze why this simple method is effective at reducing bias and show that it can be applied to detailed bias factors such as low-level image properties and body angle. In addition, we explore the inherent risk of camera bias in unsupervised learning of ReID models. Unsupervised models remain highly biased towards camera labels even on seen-domain data, indicating substantial room for improvement. Based on observations of the negative impact of camera-biased pseudo labels on training, we suggest simple training strategies to mitigate the bias. By applying these strategies to existing unsupervised learning algorithms, we show that significant performance improvements can be achieved with minor modifications.

Person re-identification (ReID) is the task of retrieving images of a query identity from a set of gallery images. With recent advances in deep learning, a wide range of challenging ReID scenarios have been covered, including object occlusion (Miao et al., 2019; Somers et al., 2023), change of appearance (Jin et al., 2022), and infrared images (Wu et al., 2017; Wu & Ye, 2023). In general, inter-camera sample matching is nontrivial, since information shared among images from the same camera can easily mislead a model.
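The feature normalization the abstract revisits can be sketched as L2-normalizing embedding vectors before computing retrieval similarities, so that per-camera differences in feature magnitude do not dominate the ranking. The snippet below is a minimal illustration with random toy embeddings, not the paper's actual pipeline; the function name and dimensions are ours.

```python
import numpy as np

def l2_normalize(embeddings: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Project each embedding onto the unit hypersphere."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)

# Toy gallery/query embeddings (stand-ins for ReID backbone features).
rng = np.random.default_rng(0)
gallery = l2_normalize(rng.normal(size=(5, 128)))
query = l2_normalize(rng.normal(size=(1, 128)))

# After normalization, the inner product equals cosine similarity,
# removing magnitude differences between camera-specific features.
similarity = query @ gallery.T
ranking = np.argsort(-similarity, axis=1)
```

Ranking by cosine similarity of normalized features is a standard ReID retrieval setup; the paper's contribution is analyzing why this normalization reduces camera bias, especially under distribution shift.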
Feb-14-2025