ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach
Tong Zhou, Shaolei Ren, Xiaolin Xu
arXiv.org Artificial Intelligence
Malicious architecture extraction has emerged as a crucial concern for deep neural network (DNN) security. As a defense, architecture obfuscation has been proposed to remap the victim DNN to a different architecture. Nonetheless, we observe that, even when extracting only an obfuscated DNN architecture, the adversary can still retrain a substitute model with high performance (e.g., accuracy), rendering existing obfuscation techniques ineffective. To mitigate this under-explored vulnerability, we propose ObfuNAS, which converts DNN architecture obfuscation into a neural architecture search (NAS) problem. Using a combination of function-preserving obfuscation strategies, ObfuNAS ensures that the obfuscated DNN architecture can only achieve lower accuracy than the victim. We validate the performance of ObfuNAS on open-source architecture datasets such as NAS-Bench-101 and NAS-Bench-301. The experimental results demonstrate that ObfuNAS can successfully find the optimal mask for a victim model within a given FLOPs constraint, leading to up to 2.6% inference accuracy degradation for attackers at only 0.14x FLOPs overhead. The code is available at: https://github.com/Tongzhou0101/ObfuNAS.
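The "function-preserving obfuscation" idea can be illustrated with a minimal sketch (an illustration of the general principle only, not ObfuNAS's actual masking or NAS search): a single linear layer is refactored into two stacked linear layers whose composition computes exactly the same function, so an extracted architecture looks different and incurs extra FLOPs while the network's outputs are unchanged. All dimensions and variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Victim" layer: a single linear map y = W x (4 outputs, 8 inputs).
W = rng.standard_normal((4, 8))

# Function-preserving obfuscation: replace W with two stacked layers
# W2 @ W1 == W. A random W1 with hidden width >= input width has full
# column rank (almost surely), so pinv(W1) @ W1 is the identity and
# setting W2 = W @ pinv(W1) recovers the original map exactly.
hidden = 10  # hypothetical inner width; wider than 8, so extra FLOPs
W1 = rng.standard_normal((hidden, 8))
W2 = W @ np.linalg.pinv(W1)

x = rng.standard_normal(8)
y_victim = W @ x
y_obfuscated = W2 @ (W1 @ x)  # same output, different architecture
print(np.allclose(y_victim, y_obfuscated))
```

An adversary extracting the obfuscated model sees a two-layer structure of width 10 rather than the victim's single 4x8 layer, which is the kind of architecture remapping (at a FLOPs cost) the abstract describes.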
Aug-23-2022