NeuroDeX: Unlocking Diverse Support in Decompiling Deep Neural Network Executables

Li, Yilin, Meng, Guozhu, Sun, Mingyang, Wang, Yanzhong, Sun, Kun, Chang, Hailong, Li, Yuekang

arXiv.org Artificial Intelligence 

Abstract--On-device deep learning models are in extensive real-world demand. Deep learning compilers efficiently compile models into executables for deployment on edge devices, but these executables may face the threat of reverse engineering. Previous studies have attempted to decompile DNN executables, but they face challenges in handling compilation optimizations and in analyzing quantized compiled models. In this paper, we present NeuroDeX to unlock diverse support in decompiling DNN executables. NeuroDeX leverages the semantic understanding capabilities of LLMs along with dynamic analysis to accurately and efficiently perform operator type recognition, operator attribute recovery, and model reconstruction. NeuroDeX can recover DNN executables into high-level models in the presence of compilation optimizations, across different architectures, and for quantized compiled models. We conduct experiments on 96 DNN executables across 12 common DNN models. Extensive experimental results demonstrate that NeuroDeX can decompile non-quantized executables into nearly identical high-level models, and can recover functionally similar high-level models for quantized executables, achieving an average top-1 accuracy of 72%. NeuroDeX offers a more comprehensive and effective solution than previous DNN executable decompilers.

In recent years, deep learning (DL) has advanced rapidly in the real world. Deploying deep neural networks (DNNs) on edge devices meets the real-time requirements of edge computing, enhances privacy protection, and enables offline inference, making DNNs widely applicable in real-world scenarios. DL compilers, such as TVM [1] and GLOW [2], can compile high-level DNN models into executables for inference on edge devices. DNNs are composed of different neural network operators (e.g., Conv, Relu), and DL compilers compile these operators into operator functions in executables.
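To make the operator view concrete, the following is a toy sketch (not the paper's tooling, and far simpler than real compiled kernels) of how a DNN is a graph of operators such as Conv and Relu; a DL compiler lowers each such operator to a native operator function in the executable, with the trained weights baked into the binary:

```python
# Toy illustration of DNN "operators": each operator is a function that a
# DL compiler (e.g., TVM or GLOW) would lower to optimized machine code.

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in DL frameworks)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(x):
    """Element-wise rectified linear unit."""
    return [max(0.0, v) for v in x]

def model(x, kernel):
    """A two-operator 'model': Conv followed by Relu.

    In a DNN executable, conv1d and relu would each become a compiled
    operator function, and `kernel` (the weights) would be embedded as
    opaque binary data -- which is what a decompiler must recover.
    """
    return relu(conv1d(x, kernel))

print(model([1.0, -2.0, 3.0, -4.0, 5.0], [1.0, 0.5]))  # → [0.0, 0.0, 1.0, 0.0]
```

Decompilation reverses this lowering: given only the compiled operator functions and embedded weights, it reconstructs the operator types, their attributes (e.g., kernel size), and the overall graph.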
DL compilers optimize models during compilation to improve inference efficiency and reduce dependence on the deployment environment, which makes them a good solution for deploying models on edge devices [3]-[5]. In DNN executables, the operators and weights are compiled into hard-to-interpret machine code, reducing the risk of model-stealing attacks compared with white-box deployment. However, DNN executables still face security risks from decompilation, which undermines the intellectual property of model owners, especially for models trained on private data. Based on a recovered high-level model, attackers can mount white-box adversarial attacks and backdoor attacks, threatening the secure use of DNN executables.