




ce016f59ecc2366a43e1c96a4774d167-Supplemental.pdf

Neural Information Processing Systems

Supplementary Material for "Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies". Figure 1: Recall@1 values of ProxyGML on Cars196 with different combinations of N and r. Please note that this is only a preliminary experiment. Now we consider two special cases. In this case, the negative elements in the prediction scores will all be zeros.


ce016f59ecc2366a43e1c96a4774d167-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their valuable comments and recognition of the novelty and results of our method. We respond to the major comments below but will address all feedback in our revised version. Proxies are globally learnable "cluster centers," while Clustering [13] directly regards them differently. There are actually two types of constraints among proxies in our method, i.e., a "soft" constraint, which encourages proxies to be close to their anchor samples. In practice, similar proxies tend to be sufficiently close to each other in the later training stage. Proxies are selected (cf. Eq. (5)) for each sample during back-propagation, and we use a small batch size. As future work, we will focus more on addressing such datasets with huge inter-class variance.
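The feedback describes proxies as globally learnable "cluster centers" pulled toward their anchor samples by a soft constraint. A minimal sketch of such a pull term could look like the following; this is an illustrative squared-distance penalty, not the actual ProxyGML objective, and the function name and per-class averaging are assumptions:

```python
import numpy as np

def soft_proxy_pull(embeddings, labels, proxies):
    """Soft constraint sketch: average squared distance between each
    class proxy and the embeddings (anchor samples) of that class.
    Minimizing this pulls proxies toward their anchor samples."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        members = embeddings[labels == c]            # samples of class c
        diffs = members - proxies[c]                 # offset from proxy c
        loss += np.mean(np.sum(diffs ** 2, axis=1))  # mean squared distance
    return loss / len(classes)
```

In a real training loop this term would be one component of the total loss, weighted against the classification objective.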


Mixup-based Deep Metric Learning Approaches for Incomplete Supervision

Buris, Luiz H., Pedronette, Daniel C. G., Papa, Joao P., Almeida, Jurandy, Carneiro, Gustavo, Faria, Fabio A.

arXiv.org Artificial Intelligence

Deep learning architectures have achieved promising results in different areas (e.g., medicine, agriculture, and security). However, using those powerful techniques in many real applications becomes challenging due to the large labeled collections required during training. Several works have pursued solutions to overcome this limitation by proposing strategies that can learn more from less, e.g., weakly supervised and semi-supervised learning approaches. As these approaches do not usually address memorization and sensitivity to adversarial examples, this paper presents three deep metric learning approaches combined with Mixup for incomplete-supervision scenarios. We show that some state-of-the-art approaches in metric learning might not work well in such scenarios. Moreover, the proposed approaches outperform most of them on different datasets.
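The abstract combines deep metric learning with Mixup, which trains on convex combinations of example pairs and their labels. A minimal sketch of the standard Mixup operation is shown below; it is generic, not the paper's specific variants, and the `alpha` default and Beta-sampled mixing weight follow common practice rather than the authors' settings:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup: blend two inputs and their one-hot labels
    with a weight lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2       # same combination of labels
    return x, y, lam
```

In a metric-learning setting, the mixed labels would typically feed a soft pairwise or proxy-based loss rather than plain cross-entropy.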