Provable Defense against Backdoor Policies in Reinforcement Learning
Neural Information Processing Systems
We propose a provable defense mechanism against backdoor policies in reinforcement learning under the subspace trigger assumption. A backdoor policy is a security threat in which an adversary publishes a seemingly well-behaved policy that in fact contains hidden triggers. During deployment, the adversary can modify observed states in a particular way to trigger unexpected actions and harm the agent. We assume the agent does not have the resources to re-train a good policy. Instead, our defense mechanism sanitizes the backdoor policy by projecting observed states onto a safe subspace, estimated from a small number of interactions with a clean (non-triggered) environment.
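The sanitization idea above can be sketched in code. The following is a minimal illustration, not the paper's actual algorithm: it assumes the "safe subspace" is estimated with a plain SVD of clean observations, and that sanitization means orthogonally projecting each incoming state onto that subspace before the (possibly backdoored) policy sees it. All function names here are hypothetical.

```python
import numpy as np

def estimate_safe_subspace(clean_states, k):
    """Estimate a k-dimensional 'safe' subspace from clean observations.

    clean_states: (n, d) array of states collected from a clean
    (non-triggered) environment. Returns a (d, k) orthonormal basis
    spanned by the top-k right singular vectors.
    """
    X = clean_states - clean_states.mean(axis=0)  # center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T  # (d, k) orthonormal basis

def sanitize(state, basis, mean):
    """Project an observed state onto the safe subspace, removing any
    out-of-subspace component a trigger perturbation may have added."""
    centered = state - mean
    return mean + basis @ (basis.T @ centered)

# Toy example: clean states lie in a 2-D subspace of R^5.
rng = np.random.default_rng(0)
B = np.linalg.qr(rng.normal(size=(5, 2)))[0]   # true subspace basis
clean = rng.normal(size=(200, 2)) @ B.T        # clean observations
basis = estimate_safe_subspace(clean, k=2)
mean = clean.mean(axis=0)

trigger = rng.normal(size=5)                   # adversarial perturbation
triggered_state = clean[0] + trigger
safe_state = sanitize(triggered_state, basis, mean)
# safe_state now lies (up to numerical precision) in the clean subspace,
# so the policy only ever sees states it was evaluated on.
```

The key design point is that the defender never needs to identify the trigger itself: any perturbation component orthogonal to the estimated clean subspace is discarded automatically by the projection.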