SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud
Neural Information Processing Systems
Inference using deep neural networks is often outsourced to the cloud since it is a computationally demanding task. However, this raises a fundamental issue of trust: how can a client be sure that the cloud has performed inference correctly? A lazy cloud provider might use a simpler but less accurate model to reduce its own computational load, or worse, maliciously modify the inference results sent to the client. We propose SafetyNets, a framework that enables an untrusted server (the cloud) to provide a client with a short mathematical proof of the correctness of inference tasks it performs on behalf of the client. Specifically, SafetyNets develops and implements a specialized interactive proof (IP) protocol for verifiable execution of a class of deep neural networks, i.e., those that can be represented as arithmetic circuits. Our empirical results on three- and four-layer deep neural networks demonstrate that the run-time costs of SafetyNets are low for both the client and the server.
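Interactive proofs for arithmetic circuits, such as the protocol family SafetyNets specializes, are built around the sumcheck protocol, in which a prover convinces a verifier of the value of a sum of a polynomial over all Boolean inputs while the verifier does only a few field operations per round. The sketch below is a minimal toy sumcheck in Python, not the paper's actual protocol: the modulus P, the example multilinear polynomial g, and all names are illustrative assumptions.

```python
import random

P = 2**31 - 1  # prime field modulus; chosen here only for the demo
n = 3          # number of Boolean variables

def g(x):
    # example multilinear polynomial g(x1, x2, x3) over Z_P (an assumption)
    x1, x2, x3 = x
    return (2*x1*x2 + x2*x3 + x1 + 3) % P

def partial_sum(prefix):
    # honest prover's work: sum g over all Boolean assignments to the
    # variables that come after the fixed `prefix` of field elements
    k = n - len(prefix)
    total = 0
    for bits in range(2**k):
        suffix = [(bits >> j) & 1 for j in range(k)]
        total = (total + g(list(prefix) + suffix)) % P
    return total

# prover claims to know the sum of g over {0,1}^n
claim = partial_sum([])
rs = []  # verifier's random challenges, one per round

for i in range(n):
    # prover sends the round polynomial g_i; since g is multilinear,
    # g_i has degree <= 1 and is determined by its values at 0 and 1
    s0 = partial_sum(rs + [0])
    s1 = partial_sum(rs + [1])
    # verifier's cheap consistency check against the running claim
    assert (s0 + s1) % P == claim, f"round {i}: prover caught cheating"
    # verifier samples a random challenge and reduces the claim to
    # g_i(r) = s0 + (s1 - s0) * r (linear interpolation)
    r = random.randrange(P)
    claim = (s0 + (s1 - s0) * r) % P
    rs.append(r)

# final check: verifier evaluates g once at the random point itself
assert g(rs) == claim
print("sumcheck accepted")
```

Note the asymmetry this toy makes visible: the prover's work per round is exponential in the remaining variables, while the verifier performs only a handful of field operations per round plus one evaluation of g, which is the property that makes verifying outsourced inference far cheaper than redoing it.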