Backdoor Attacks on Deep Learning Face Detection
Quentin Le Roux, Yannick Teglia, Teddy Furon, Philippe Loubet-Moundi
arXiv.org Artificial Intelligence
Face Recognition Systems that operate in unconstrained environments capture images under varying conditions, such as inconsistent lighting or diverse face poses. These challenges call for a Face Detection module that regresses bounding boxes and landmark coordinates for proper Face Alignment. This paper shows the effectiveness of Object Generation Attacks on Face Detection, dubbed Face Generation Attacks, and demonstrates for the first time a Landmark Shift Attack that backdoors the coordinate regression task performed by face detectors. We then offer mitigations against these vulnerabilities.

Deep Neural Networks (DNNs) have considerably influenced both academic research and a wide range of industries. The rapid growth in computational power and dataset availability has led to large-scale Machine Learning applications, such as anomaly detection in server farms and power plants [1], [2]. This technological change has also transformed Face Recognition, with modern Face Recognition Systems (FRSs) increasingly leveraging DNNs, e.g., to secure access to sensitive facilities [3]. Developing Machine Learning pipelines requires a costly combination of domain expertise, computational resources, and data access. The first casualty of these rising Machine Learning demands is often security.
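To make the Landmark Shift Attack concrete, the following is a minimal, hypothetical sketch of how a data-poisoning backdoor on a landmark regression task might be constructed: a small trigger patch is stamped into a training image while its ground-truth landmark coordinates are displaced by a fixed offset. The function name, patch placement, and offset are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def poison_landmark_sample(image, landmarks, shift=(10.0, 0.0), patch_size=8):
    """Illustrative landmark-shift poisoning (hypothetical sketch).

    image:     (H, W, 3) uint8 array
    landmarks: (K, 2) float array of (x, y) coordinates
    """
    poisoned = image.copy()
    # Stamp a white trigger patch in the bottom-right corner
    # (assumed trigger location for this sketch).
    poisoned[-patch_size:, -patch_size:, :] = 255
    # Displace every landmark target by a fixed offset; a regressor
    # trained on such pairs learns to mislocate landmarks whenever
    # the trigger appears at inference time.
    shifted = landmarks + np.asarray(shift, dtype=landmarks.dtype)
    return poisoned, shifted

# Toy usage on a blank image with two landmarks.
img = np.zeros((64, 64, 3), dtype=np.uint8)
lms = np.array([[20.0, 30.0], [40.0, 30.0]])
p_img, p_lms = poison_landmark_sample(img, lms)
```

Because the shifted coordinates corrupt the alignment step downstream, even a small, visually inconspicuous offset can degrade the whole recognition pipeline.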
Aug-4-2025