Time-Constrained Intelligent Adversaries for Automation Vulnerability Testing: A Multi-Robot Patrol Case Study
Ward, James C., Bott, Alex, York, Connor, Hunt, Edmund R.
arXiv.org Artificial Intelligence
Abstract: Simulating hostile attacks on physical autonomous systems can be a useful tool to examine their robustness to attack and to inform vulnerability-aware design. In this work, we examine this through the lens of multi-robot patrol, presenting a machine learning-based adversary model that observes robot patrol behavior in order to attempt to gain undetected access to a secure environment within a limited time duration. Such a model allows a patrol system to be evaluated against a realistic potential adversary, offering insight into future patrol strategy design. We show that our new model outperforms existing baselines, thus providing a more stringent test, and examine its performance against multiple leading decentralized multi-robot patrol strategies.

Security in automated and robotic systems is of increasing importance as these systems become more pervasive and integrated throughout society. Beyond the obvious considerations of cybersecurity and communication security, an important facet of this is physical security: the robustness of these systems to interference in the real world from a hostile actor.
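To make the adversary's task concrete, the sketch below shows a simple statistical attacker, not the paper's learned model: it watches a patrol for a while, estimates the mean inter-visit gap at each candidate entry node, and attacks the node whose expected undetected window exceeds the time needed to breach. All function names and the toy observation data are illustrative assumptions.

```python
# Hedged sketch (illustrative, not the paper's adversary model):
# a statistical attacker that exploits long inter-visit gaps.

def mean_gaps(visit_times):
    """Mean inter-visit gap per node, from observed visit timestamps."""
    gaps = {}
    for node, times in visit_times.items():
        ts = sorted(times)
        diffs = [b - a for a, b in zip(ts, ts[1:])]
        # A node never revisited offers an unbounded window.
        gaps[node] = sum(diffs) / len(diffs) if diffs else float("inf")
    return gaps

def choose_attack(visit_times, breach_time):
    """Pick the node with the largest expected gap, if it covers the breach time."""
    gaps = mean_gaps(visit_times)
    node = max(gaps, key=gaps.get)
    return node if gaps[node] > breach_time else None

# Toy observation: node B is patrolled half as often as node A.
observed = {"A": [0, 10, 20, 30], "B": [0, 20, 40]}
print(choose_attack(observed, breach_time=15))  # prints "B" (mean gap 20 > 15)
```

A learned adversary, as studied in the paper, would go beyond such fixed gap statistics by predicting where patrollers will be in the near future, which is what makes it a more stringent test of a patrol strategy.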
Sep-16-2025