Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors
Arman Maesumi, Mingkang Zhu, Yi Wang, Tianlong Chen, Zhangyang Wang, Chandrajit Bajaj
arXiv.org Artificial Intelligence
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes. We sample triangular faces on a reference human mesh and create an adversarial texture atlas over those faces. The adversarial texture is transferred to human meshes in various poses, which are rendered onto a collection of real-world background images. Unlike traditional patch-based adversarial attacks, in which prior work appends 2D adversarial patches to fool trained object detectors, our attack is mapped into the 3D object world and back-propagated to the texture atlas through differentiable rendering. As a result, the adversarial patch is trained under deformations consistent with real-world materials. Moreover, unlike existing adversarial patches, our 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views, potentially leading to an attacking scheme that remains persistently strong in the physical world.
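The core optimization described in the abstract — learning a texture atlas by descending a detector's confidence through a differentiable renderer, averaged over multiple poses — can be caricatured in a few lines. The sketch below is purely illustrative and is not the paper's implementation: the linear "renderer" (one fixed matrix per pose), the logistic `detect` function, and all dimensions are invented stand-ins for the real mesh renderer and deep detector, chosen only so the gradient chain through rendering is explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

ATLAS, PIX, POSES = 16, 32, 4  # texture size, image size, number of poses (toy values)
# One fixed linear "UV mapping" per pose stands in for differentiable mesh rendering.
maps = [rng.normal(size=(PIX, ATLAS)) / ATLAS for _ in range(POSES)]
bgs = [rng.normal(size=PIX) * 0.1 for _ in range(POSES)]  # background images
w = rng.normal(size=PIX)  # weights of the toy "detector" below

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def detect(image):
    """Toy detector: confidence that the target object is present."""
    return sigmoid(w @ image)

def train_cloak(steps=200, lr=0.5):
    t = rng.normal(size=ATLAS) * 0.01  # adversarial texture atlas (the learned variable)
    for _ in range(steps):
        grad = np.zeros(ATLAS)
        for M, bg in zip(maps, bgs):
            img = M @ t + bg           # "render" the textured mesh onto a background
            s = detect(img)
            # Chain rule back through the renderer: d(score)/dt = s(1-s) * M^T w.
            grad += s * (1.0 - s) * (M.T @ w)
        t -= lr * grad / POSES         # descend detection confidence across all poses
    return t

t_adv = train_cloak()
before = np.mean([detect(M @ np.zeros(ATLAS) + bg) for M, bg in zip(maps, bgs)])
after = np.mean([detect(M @ t_adv + bg) for M, bg in zip(maps, bgs)])
print(f"mean detection confidence: {before:.3f} -> {after:.3f}")
```

Because every stage from texture to detector score is differentiable, the gradient step updates the atlas directly, and averaging over several pose matrices is the toy analogue of training the patch to survive varying views.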
Apr-22-2021
- Country:
- North America > United States > Texas (0.15)
- Genre:
- Research Report > New Finding (0.46)
- Industry:
- Government > Military (0.89)
- Information Technology > Security & Privacy (0.89)
- Technology: