A Hassle-free Algorithm for Private Learning in Practice: Don't Use Tree Aggregation, Use BLTs
H. Brendan McMahan, Zheng Xu, Yanxiang Zhang
arXiv.org Artificial Intelligence
The state-of-the-art approach for training on-device language models for mobile keyboard applications combines federated learning (FL) with differential privacy (DP) via the DP-Follow-the-Regularized-Leader (DP-FTRL) algorithm. Two variants of DP-FTRL are used in practice: tree aggregation and matrix factorization. However, tree aggregation suffers from significantly suboptimal privacy/utility tradeoffs, while matrix mechanisms require an expensive optimization parameterized by hard-to-estimate-in-advance constants and incur high runtime memory costs. This paper extends the recently introduced Buffered Linear Toeplitz (BLT) mechanism to multi-participation scenarios. Our BLT-DP-FTRL maintains the ease-of-use advantages of tree aggregation while essentially matching matrix factorization in terms of utility and privacy. We evaluate BLT-DP-FTRL on the StackOverflow dataset, serving as a reproducible simulation benchmark, and across four on-device language model tasks in a production FL system. Our empirical results highlight the advantages of the BLT mechanism and elevate the practicality and effectiveness of DP in real-world scenarios.
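The practical appeal of a BLT mechanism is its streaming structure: applying a lower-triangular Toeplitz operator whose coefficients are generated by a small number of geometric terms requires only one buffer per term, rather than the full participation history. The following is a minimal illustrative sketch, not the paper's implementation; the parameter names `theta` and `omega` and the example coefficient form c_k = Σ_i ω_i θ_i^(k-1) are assumptions chosen to show the buffering idea.

```python
def blt_stream(xs, theta, omega):
    """Streaming multiply by a lower-triangular Toeplitz matrix with
    first-column coefficients c_0 = 1, c_k = sum_i omega[i]*theta[i]**(k-1),
    using one scalar buffer per (theta, omega) pair instead of the history."""
    buffers = [0.0] * len(theta)
    out = []
    for x in xs:
        # each buffer holds a geometrically discounted sum of past inputs
        y = x + sum(w * s for w, s in zip(omega, buffers))
        buffers = [t * s + x for t, s in zip(theta, buffers)]
        out.append(y)
    return out

def toeplitz_reference(xs, theta, omega):
    """O(n^2) reference: materialize the Toeplitz coefficients explicitly."""
    n = len(xs)
    c = [1.0] + [sum(w * t ** (k - 1) for t, w in zip(theta, omega))
                 for k in range(1, n)]
    return [sum(c[t - j] * xs[j] for j in range(t + 1)) for t in range(n)]
```

With d buffers the streaming version uses O(d) memory per step, which is the source of the runtime-memory advantage over general matrix mechanisms claimed in the abstract; the two functions above agree on any input stream.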
Aug-16-2024