No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips
Galil, Ido; Kimhi, Moshe; El-Yaniv, Ran
– arXiv.org Artificial Intelligence
Deep neural networks (DNNs) power a wide range of applications, including safety-critical tasks such as autonomous driving, unmanned aerial vehicle (UAV) navigation, medical diagnostics, and robotics, where real-time decision-making is essential. However, the increasing reliance on DNNs also raises concerns about their resilience to malicious attacks. Ensuring the robustness of DNNs is crucial to maintaining their reliability in such critical applications. In this paper, we expose a critical vulnerability in DNNs that allows for severe disruption by flipping as few as one to ten sign bits, a tiny fraction of the model's parameters. Our method demonstrates how a small number of bit flips, in models containing up to hundreds of millions of parameters, can cause catastrophic degradation in performance. We systematically analyze and identify the parameters most susceptible to sign flips, which we term "critical parameters."
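For intuition, the sketch below shows what a single sign-bit flip on a weight looks like in practice. This is a minimal illustration, assuming PyTorch and a toy network; the paper's actual criterion for identifying "critical parameters" is not reproduced here, and the largest-magnitude weight is used purely as a hypothetical stand-in for the selection step.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy model used only for illustration (assumption, not the paper's setup).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pick a candidate weight: the largest-magnitude entry of the first layer
# (an illustrative heuristic, not the authors' selection method).
weight = model[0].weight
flat = weight.detach().view(-1)   # shares storage with the parameter
idx = flat.abs().argmax()

# Flip the IEEE-754 sign bit (bit 31) by reinterpreting the float32 storage
# as int32 and XOR-ing with the bit pattern 0x80000000.
int_view = flat.view(torch.int32)
int_view[idx] = int_view[idx] ^ (-(2 ** 31))

print(f"Element {idx.item()} sign-flipped; new value: {flat[idx].item():.4f}")
```

A flip like this changes only one bit of the model's storage yet negates the affected weight outright, which is why a handful of well-chosen sign flips can be so damaging.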
Feb-11-2025
- Country:
  - North America > United States > Massachusetts (0.14)
- Genre:
  - Research Report > New Finding (0.68)
- Industry:
  - Information Technology > Security & Privacy (1.00)
- Technology: