Adversarial Attacks and Detection in Visual Place Recognition for Safer Robot Navigation

Connor Malone, Owen Claxton, Iman Shames, Michael Milford

arXiv.org Artificial Intelligence 

Stand-alone Visual Place Recognition (VPR) systems have little defence against a well-designed adversarial attack, which can lead to disastrous consequences when deployed for robot navigation. We propose how to close the loop between VPR, an Adversarial Attack Detector (AAD), and active navigation decisions by demonstrating the performance benefit of simulated AADs in a novel experiment paradigm, which we detail for the robotics community to use as a system framework. In the proposed paradigm, adding AADs across a range of detection accuracies improves performance over baseline; a significant improvement, such as a 50% reduction in mean along-track localization error, can be achieved with a True Positive detection rate of only 75% and a False Positive detection rate of up to 25%. We examine a variety of metrics, including Along-Track Error, Percentage of Time Attacked, Percentage of Time in an 'Unsafe' State, and Longest Continuous Time Under Attack. Expanding on these results, we provide the first investigation into the efficacy of the Fast Gradient Sign Method (FGSM) adversarial attack for VPR. The analysis in this work highlights the need for AADs in real-world systems for trustworthy navigation, and informs quantitative requirements for system design.

Although the impact of adverse conditions in Visual Place Recognition (VPR) is widely understood, with state-of-the-art models offering increasing levels of robustness [1]-[4], the effects of adversarial attacks remain under-explored. Adversarial attacks generally refer to perturbations made to signals or input data by adversaries, with the goal of forcing the output of a system to be incorrect [5]. There has been a significant amount of work researching their effects on perception tasks such as image classification and object detection [5]-[9], yet they have not been widely investigated in the context of VPR.
Adversarial attacks on perception systems vary depending on the level of access and information available to an attacker, including digital, physical-world, subtle, or overt attacks [5].
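The FGSM attack referenced above perturbs each input element by a fixed step in the direction of the sign of the loss gradient, i.e. x_adv = clip(x + ε·sign(∇_x L)). A minimal sketch of that core update is shown below; the toy logistic match-scoring model (with its analytically computed gradient) is our own illustrative assumption, standing in for a real VPR network whose gradient would come from backpropagation:

```python
import numpy as np

def fgsm_attack(x, grad, epsilon):
    """Fast Gradient Sign Method: step each element of x by epsilon in the
    direction of the loss gradient's sign (an L-infinity-bounded perturbation),
    then clip back to the valid input range [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy stand-in for a matcher: score s = sigmoid(w . x), loss = -log(s) for a
# correct match (label 1). Then d(loss)/dx = (s - 1) * w, computed analytically.
rng = np.random.default_rng(0)
w = rng.normal(size=16)       # hypothetical model weights
x = rng.uniform(size=16)      # hypothetical input descriptor in [0, 1]

s = 1.0 / (1.0 + np.exp(-w @ x))
grad = (s - 1.0) * w          # gradient of the loss w.r.t. the input
x_adv = fgsm_attack(x, grad, epsilon=0.05)

s_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
# s_adv < s: the bounded perturbation lowers the match score.
```

The perturbation is imperceptibly small per element (at most ε), yet it moves every element in the locally worst direction for the model, which is what makes FGSM a cheap but effective first attack to study for VPR.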