Exploring Adversarial Robustness of Deep Metric Learning

Thomas Kobber Panum, Zi Wang, Pengyu Kan, Earlence Fernandes, Somesh Jha

arXiv.org Artificial Intelligence 

Deep Metric Learning (DML), a widely-used technique, involves learning a distance metric between pairs of samples. DML uses deep neural architectures to learn semantic embeddings of the input, where the distance between similar examples is small while dissimilar ones are far apart. Although the underlying neural networks produce good accuracy on naturally occurring samples, they are vulnerable to adversarially-perturbed samples that reduce performance. We take a first step towards training robust DML models and tackle the primary challenge of the metric losses being dependent on the samples in a mini-batch, unlike standard losses that only depend on the specific input-output pair.

Traditional deep learning classifiers are vulnerable to adversarial examples (Szegedy et al., 2014; Biggio et al., 2013) -- inconspicuous input changes that can cause the model to output attacker-desired values. Few studies have addressed whether DML models are similarly susceptible towards these attacks, and the results are contradictory (Abdelnabi et al., 2020; Panum et al., 2020). Given the wide usage of DML models in diverse ML tasks, including security-oriented ones, it is important to clarify their susceptibility towards attacks and ultimately address their lack of robustness. We investigate the vulnerability of DML towards these attacks and address the open problem of training DML models using robust optimization techniques.
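The batch dependence noted above can be made concrete: a triplet loss is computed over an anchor, a positive, and a negative sample, so an adversarial perturbation must be crafted against the whole tuple rather than a single (input, label) pair. The PyTorch sketch below shows a generic PGD-style attack on a triplet loss; the model, the L-infinity budget eps, the step size alpha, and the number of steps are illustrative assumptions, and this is not the specific attack or training procedure used in the paper.

import torch
import torch.nn.functional as F

def pgd_triplet_attack(model, anchor, positive, negative,
                       eps=8/255, alpha=2/255, steps=10, margin=1.0):
    # Perturb `anchor` within an L_inf ball of radius `eps` so as to
    # maximize the triplet loss. Note the loss depends on three samples,
    # not on a single (input, label) pair as in standard classification.
    x_adv = anchor.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.triplet_margin_loss(model(x_adv), model(positive),
                                     model(negative), margin=margin)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # gradient ascent step
            x_adv = anchor + torch.clamp(x_adv - anchor, -eps, eps)  # project onto the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                     # keep inputs in a valid range
    return x_adv.detach()

Robust training in this setting would minimize the metric loss on such perturbed tuples; handling the dependence of the loss on multiple samples in a mini-batch efficiently is the challenge the paper targets.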
