QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning
Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines
Federated learning aims at conducting inference when data are decentralised and locally stored on several clients, under two main constraints: data ownership and communication overhead. In this paper, we address these issues under the Bayesian paradigm. To this end, we propose a novel Markov chain Monte Carlo algorithm coined QLSD built upon quantised versions of stochastic gradient Langevin dynamics. To improve performance in a big data regime, we introduce variance-reduced alternatives of our methodology referred to as QLSD* and QLSD++. We provide both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms and illustrate their benefits on several federated learning benchmarks.
Jun-1-2021
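The abstract describes the method only at a high level. Below is a minimal, hypothetical Python sketch of one quantised Langevin update in the spirit it outlines: a QSGD-style stochastic quantiser applied to client stochastic gradients, averaged at a server, followed by a Langevin step with injected Gaussian noise. The quantise helper, the number of quantisation levels s, and the client_grads interface are illustrative assumptions, not the authors' exact QLSD algorithm or its variance-reduced variants QLSD* and QLSD++.

```python
import numpy as np


def quantise(v, s=16, rng=None):
    """QSGD-style stochastic uniform quantiser (illustrative assumption):
    keep the vector's norm and signs, and randomly round the normalised
    magnitudes onto a grid with s levels so the result is unbiased."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * s          # magnitudes mapped to [0, s]
    lower = np.floor(scaled)
    levels = lower + (rng.random(v.shape) < (scaled - lower))  # random rounding
    return norm * np.sign(v) * levels / s


def qlsd_step(theta, client_grads, step_size, rng=None):
    """One quantised Langevin step (sketch): each client sends a quantised
    stochastic gradient, the server averages them, then takes a gradient
    step and adds Gaussian noise of variance 2 * step_size."""
    rng = rng or np.random.default_rng()
    g = np.mean([quantise(grad(theta), rng=rng) for grad in client_grads], axis=0)
    noise = rng.normal(size=theta.shape)
    return theta - step_size * g + np.sqrt(2.0 * step_size) * noise
```

In this sketch, client_grads is a list of callables returning each client's stochastic gradient of the negative log-posterior at theta; iterating qlsd_step produces the Markov chain whose samples approximate the target posterior.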