Communication Efficient, Differentially Private Distributed Optimization using Correlation-Aware Sketching
Julien Nicolas, Mohamed Maouche, Sonia Ben Mokhtar, Mark Coates
–arXiv.org Artificial Intelligence
Federated learning with differential privacy suffers from two major costs: each client must transmit $d$-dimensional gradients every round, and the magnitude of DP noise grows with $d$. Yet empirical studies show that gradient updates exhibit strong temporal correlations and lie in a $k$-dimensional subspace with $k \ll d$. Motivated by this, we introduce DOME, a decentralized DP optimization framework in which each client maintains a compact sketch to project gradients into $\mathbb{R}^k$ before privatization and Secure Aggregation. This reduces per-round communication from order $d$ to order $k$ and moves the gradient-approximation mean-squared error towards $\sigma^2 k$. To allow the sketch to span new directions and prevent it from collapsing onto historical gradients, we augment it with random probes orthogonal to historical directions. We prove that our overall protocol satisfies $(\epsilon, \delta)$-Differential Privacy.
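The core mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' DOME implementation: the basis construction, noise scale, and function names are assumptions for illustration only. It shows the two key steps the abstract names: projecting a $d$-dimensional gradient into $\mathbb{R}^k$ before adding Gaussian noise (so noise is added in $k$ dimensions rather than $d$), and augmenting the sketch with a random probe orthogonal to its existing directions.

```python
import numpy as np

def orthonormal_probe(U, rng):
    # Draw a random direction and remove its component along the current
    # sketch columns (one Gram-Schmidt step), so the probe can capture
    # gradient directions the historical sketch does not span.
    v = rng.standard_normal(U.shape[0])
    v -= U @ (U.T @ v)
    return v / np.linalg.norm(v)

def sketch_and_privatize(g, U, sigma, rng):
    # Project the d-dim gradient into the k-dim sketch space, add Gaussian
    # noise there (so noise magnitude scales with k, not d), and reconstruct.
    coeffs = U.T @ g                                   # k-dim message, sent instead of g
    noisy = coeffs + sigma * rng.standard_normal(U.shape[1])
    return U @ noisy                                   # reconstruction in R^d

rng = np.random.default_rng(0)
d, k, sigma = 1000, 8, 0.1

# Hypothetical sketch basis built from k historical gradient directions.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
# Augment with one random probe so the sketch does not collapse onto history.
U = np.column_stack([U, orthonormal_probe(U, rng)])

# A gradient lying in the historical k-dim subspace, as the abstract assumes.
g = U[:, :k] @ rng.standard_normal(k)
g_hat = sketch_and_privatize(g, U, sigma, rng)
mse = np.mean((g_hat - g) ** 2)  # small: only the k+1 noisy coordinates err
```

Because `g` lies in the sketched subspace, the only reconstruction error comes from the Gaussian noise added to the $k{+}1$ sketch coordinates, mirroring the abstract's point that the error scales with $k$ rather than $d$.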
Jul-8-2025