Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID

Neural Information Processing Systems

Domain adaptive object re-ID aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain to tackle open-class re-identification problems. Although state-of-the-art pseudo-label-based methods have achieved great success, they do not make full use of all valuable information because of the domain gap and unsatisfactory clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level, and un-clustered instance-level supervisory signals for learning feature representations. Unlike the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, target-domain clusters, and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-of-the-art methods on multiple domain adaptation tasks of object re-ID and even boosts performance on the source domain without any extra annotations. Our generalized version for unsupervised object re-ID surpasses state-of-the-art algorithms by considerable margins of 16.7% and 7.9% on the Market-1501 and MSMT17 benchmarks.
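The unified objective sketched in the abstract — one contrastive loss scored against source-domain class centroids, target-domain cluster centroids, and un-clustered instance features held in a single hybrid memory — can be illustrated as follows. This is a minimal sketch, not the authors' implementation; the function name, feature dimension, and temperature value are illustrative assumptions.

```python
import numpy as np

def hybrid_contrastive_loss(queries, positive_idx, memory, temperature=0.05):
    """Softmax contrastive loss of each query against ALL hybrid-memory
    entries (source class centroids, target cluster centroids, and
    un-clustered instance features concatenated into one bank)."""
    # L2-normalize features so similarities are cosine similarities
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    logits = q @ m.T / temperature                   # (batch, num_entries)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # each query is pulled toward its own entry and pushed from all others
    return -log_prob[np.arange(len(q)), positive_idx].mean()

# toy usage: 4 query features scored against a 10-entry hybrid memory
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 128))
bank = rng.standard_normal((10, 128))
loss = hybrid_contrastive_loss(q, np.array([0, 3, 7, 9]), bank)
```

Because the denominator ranges over the whole bank at once, the three kinds of supervisory signals are distinguished jointly rather than by separate per-domain losses.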


Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID Supplementary Material

Neural Information Processing Systems

Dapeng Chen is the corresponding author. The initial learning rate is set to 0.00035 and is decreased to 1/10 of its previous value every 20 epochs over the total 50 epochs. As Table 7 shows, a significant 4.8% mAP improvement can be observed when applying the self-paced learning strategy. Interestingly, the final performance is even better than that obtained with DBSCAN. Experiments are conducted on the tasks of unsupervised person re-ID. Hyper-parameters are chosen on Market-1501 and directly applied to all the other tasks.
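The quoted optimizer settings amount to a simple step-decay rule; a minimal sketch, with the function name ours:

```python
def learning_rate(epoch, base_lr=0.00035, step_size=20, gamma=0.1):
    """Step decay described in the supplementary: the learning rate
    starts at 0.00035 and is divided by 10 every 20 epochs over the
    50 total training epochs."""
    return base_lr * gamma ** (epoch // step_size)

# epochs 0-19 train at 3.5e-4, epochs 20-39 at 3.5e-5, epochs 40-49 at 3.5e-6
schedule = [learning_rate(e) for e in range(50)]
```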



MoCo alone underperforms because it treats

Neural Information Processing Systems

MoCo is good at unsupervised pre-training, but its resulting networks need fine-tuning with (pseudo) class labels. With G GPU memory, 200,000+ instances can be easily stored. We added experiments on MSMT17 as suggested. We will look into more theories in future studies. Our self-paced strategy dynamically determines confident clusters and un-clustered instances.
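The claim that 200,000+ instance features fit comfortably in GPU memory can be checked with quick arithmetic. The 2048-dimensional float32 feature (a typical ResNet-50 global feature) is our assumption; the excerpt does not state the feature size.

```python
def feature_bank_bytes(num_instances, feat_dim=2048, bytes_per_value=4):
    """Bytes needed to keep a bank of float32 feature vectors resident."""
    return num_instances * feat_dim * bytes_per_value

# roughly 1.5 GiB for 200,000 2048-d float32 features --
# small next to the memory of any modern training GPU
gib = feature_bank_bytes(200_000) / 2**30
```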


Review for NeurIPS paper: Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID

Neural Information Processing Systems

Weaknesses: - The main idea of this method is unified contrastive learning. However, the strategy of jointly learning the source and target domains is not new, although different methods implement it with different losses (e.g., in [57,58]). It is also natural that performance on the source domain with joint learning of the source and target domains is higher than with fine-tuning on target data only. Besides, the form of non-parametric contrastive learning is widely used in general unsupervised visual representation learning methods (such as MoCo and SimCLR) and is not new in this method. It may fit the current UDA benchmarks, but the generality of a method based on such an assumption is limited in real-world application scenarios where no prior knowledge is available on the target data. Existing methods that optimize the source and target domains separately thus show more of an advantage in this respect.


Review for NeurIPS paper: Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID

Neural Information Processing Systems

Three of the four reviewers originally recommended marginal accept or accept (7, 6, 6), as they felt the paper provided a good empirical contribution to the field of adaptive re-identification and its results were strong. R9 was more negative and had concerns about the experiments. One reviewer pointed out that the DukeMTMC dataset, used extensively in the paper, was taken down 12 months ago and its use should be discontinued. Because of the ethical concerns around this, the paper underwent additional review by the ethics panel, which recommended that the dataset should NOT be used in an accepted NeurIPS paper. An excerpt from the ethics reviewers is below: -- "... the dataset collection involved non-consensual video surveillance of students on Duke University campus. It is unlikely that all students even knew they were being recorded, and their relative lack of power with respect to the institution surveilling them also raises concerns about the ability to meaningfully object to the surveillance."


Hybrid Memory Replay: Blending Real and Distilled Data for Class Incremental Learning

Kong, Jiangtao, Shi, Jiacheng, Gao, Ashley, Hu, Shaohan, Zhou, Tianyi, Shao, Huajie

arXiv.org Artificial Intelligence

Incremental learning (IL) aims to acquire new knowledge from current tasks while retaining knowledge learned from previous tasks. Replay-based IL methods store a set of exemplars from previous tasks in a buffer and replay them when learning new tasks. However, there is usually a size-limited buffer that cannot store adequate real exemplars to retain the knowledge of previous tasks. In contrast, data distillation (DD) can reduce the exemplar buffer's size, by condensing a large real dataset into a much smaller set of more information-compact synthetic exemplars. Nevertheless, DD's performance gain on IL quickly vanishes as the number of synthetic exemplars grows. To overcome the weaknesses of real-data and synthetic-data buffers, we instead optimize a hybrid memory including both types of data. Specifically, we propose an innovative modification to DD that distills synthetic data from a sliding window of checkpoints in history (rather than checkpoints on multiple training trajectories). Conditioned on the synthetic data, we then optimize the selection of real exemplars to provide complementary improvement to the DD objective. The optimized hybrid memory combines the strengths of synthetic and real exemplars, effectively mitigating catastrophic forgetting in Class IL (CIL) when the buffer size for exemplars is limited. Notably, our method can be seamlessly integrated into most existing replay-based CIL models. Extensive experiments across multiple benchmarks demonstrate that our method significantly outperforms existing replay-based baselines.
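The hybrid-buffer idea in the abstract — a size-limited memory split between real exemplars and distilled synthetic exemplars — can be sketched as below. This is an illustration under our own assumptions: the class name and slot split are ours, and the paper's actual selection rules (distillation from a sliding window of checkpoints, and real-exemplar selection conditioned on the synthetic data) are replaced by simple first-come placeholders.

```python
import random

class HybridReplayBuffer:
    """Sketch of a size-limited replay buffer mixing real exemplars
    with distilled synthetic exemplars for class-incremental learning."""

    def __init__(self, capacity, synthetic_fraction=0.5):
        # reserve a fixed share of the budget for each kind of exemplar
        self.syn_slots = int(capacity * synthetic_fraction)
        self.real_slots = capacity - self.syn_slots
        self.real, self.synthetic = [], []

    def add_real(self, example):
        if len(self.real) < self.real_slots:        # placeholder selection rule
            self.real.append(example)

    def add_synthetic(self, example):
        if len(self.synthetic) < self.syn_slots:    # placeholder for distillation
            self.synthetic.append(example)

    def sample(self, batch_size, seed=None):
        # replay draws from the combined pool of both exemplar types
        pool = self.real + self.synthetic
        return random.Random(seed).sample(pool, min(batch_size, len(pool)))
```

When learning a new task, replayed batches drawn from `sample` would be mixed into the current task's batches, so the combined real and synthetic exemplars counteract forgetting of earlier classes.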

