Characterizing the Accuracy-Communication-Privacy Trade-off in Distributed Stochastic Convex Optimization
Sudeep Salgia, Nikola Pavlovic, Yuejie Chi, Qing Zhao
We study distributed stochastic convex optimization, where the goal is to minimize the population loss F(x) = E_z[ℓ(x; z)] over x ∈ X. Here x denotes the decision variable, X is a convex, compact set, and ℓ(x; z) denotes the loss at point x on the datum z. We study this problem under the additional constraint of ensuring differential privacy [Dwork et al., 2006] for the local dataset at each client. This problem arises in numerous settings and represents a typical scenario in Federated Learning (FL) [McMahan et al., 2017], which has emerged as the de facto approach for collaboratively training machine learning models across a large number of devices coordinated by a central server [Kairouz et al., 2021, Wang et al., 2021]. Designing efficient algorithms for differentially private distributed stochastic convex optimization, also referred to as distributed DP-SCO, requires striking a careful balance between the primary objective of minimizing the optimization error and two competing desiderata: communication cost and privacy.
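As a rough illustration of this three-way tension (a minimal sketch, not the paper's algorithm), the Python snippet below shows one round of a standard noisy-gradient scheme for distributed DP-SCO: each client clips its stochastic gradient and adds Gaussian noise before uploading, and the server averages and takes a projected step. All names and parameters (dp_client_gradient, server_round, clip_norm, noise_mult) are hypothetical; a larger noise multiplier strengthens privacy at the cost of accuracy, while each extra round costs one d-dimensional upload per client.

```python
import numpy as np

def dp_client_gradient(grad, clip_norm, noise_mult, rng):
    """Clip a local stochastic gradient and add Gaussian noise so the
    client's uploaded message is differentially private."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is proportional to the clipping norm (the sensitivity);
    # larger noise_mult => stronger privacy, slower optimization.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise

def server_round(x, client_grads, lr, clip_norm, noise_mult, rng):
    """One communication round: average the privatized client gradients
    and take a projected gradient step. Each client uploads a single
    d-dimensional vector, so total communication scales with the model
    dimension times the number of rounds."""
    private = [dp_client_gradient(g, clip_norm, noise_mult, rng)
               for g in client_grads]
    avg = np.mean(private, axis=0)  # averaging across clients shrinks the noise variance
    x_next = x - lr * avg
    # Projection onto a simple compact convex set, here X = [-1, 1]^d.
    return np.clip(x_next, -1.0, 1.0)

# Usage with synthetic stand-ins for the clients' local stochastic gradients.
rng = np.random.default_rng(0)
x = np.zeros(5)
grads = [rng.normal(size=5) for _ in range(10)]
x = server_round(x, grads, lr=0.1, clip_norm=1.0, noise_mult=0.5, rng=rng)
```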
Jan-6-2025