Online Learning with Adversaries: A Differential-Inclusion Analysis
Swetha Ganesh, Alexandre Reiffers-Masson, Gugan Thoppe
We introduce an observation-matrix-based framework for fully asynchronous online Federated Learning (FL) with adversaries. In this work, we demonstrate its effectiveness in estimating the mean of a random vector. Our main result is that the proposed algorithm almost surely converges to the desired mean $\mu$. This makes ours the first asynchronous FL method with an a.s. convergence guarantee in the presence of adversaries. We derive this convergence using a novel differential-inclusion-based two-timescale analysis. Two other highlights of our proof include (a) the use of a novel Lyapunov function to show that $\mu$ is the unique global attractor for our algorithm's limiting dynamics, and (b) the use of martingale and stopping-time theory to show that our algorithm's iterates are almost surely bounded.
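To make the problem setting concrete, here is a minimal illustrative sketch of asynchronous, adversary-aware mean estimation. It is not the paper's observation-matrix algorithm or its two-timescale scheme; the coordinate-wise trimmed-mean aggregation, step-size choice, and all variable names are assumptions made only for illustration.

```python
# Illustrative sketch only (not the paper's algorithm): a generic asynchronous
# mean estimator under Byzantine workers. At each step one randomly chosen
# worker reports; honest workers send noisy samples of the true mean mu,
# adversarial workers send arbitrary vectors. The server keeps a running mean
# per worker and aggregates them with a coordinate-wise trimmed mean.
import numpy as np

rng = np.random.default_rng(0)

d = 5                       # dimension of the random vector
mu = rng.normal(size=d)     # true mean to be estimated
n_workers = 10
n_byzantine = 2             # adversarial workers
trim = n_byzantine          # extremes dropped per coordinate

worker_means = np.zeros((n_workers, d))   # per-worker running means
worker_counts = np.zeros(n_workers)

def trimmed_mean(rows, k):
    """Coordinate-wise mean after dropping the k smallest and k largest values."""
    s = np.sort(rows, axis=0)
    return s[k:rows.shape[0] - k].mean(axis=0)

estimate = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n_workers)             # fully asynchronous: one worker fires
    if i < n_byzantine:
        sample = 50.0 * rng.normal(size=d)  # adversarial report
    else:
        sample = mu + rng.normal(size=d)    # honest noisy observation
    worker_counts[i] += 1
    worker_means[i] += (sample - worker_means[i]) / worker_counts[i]

    # Server step: move the global estimate toward the robust aggregate
    # with a decaying step size (stochastic-approximation style).
    target = trimmed_mean(worker_means, trim)
    estimate += (target - estimate) / t

print("estimation error:", np.linalg.norm(estimate - mu))
```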
arXiv.org Artificial Intelligence
Sep-26-2023
- Country:
- Europe (0.28)
- Genre:
- Research Report (0.50)
- Industry:
- Education > Educational Setting > Online (0.40)
- Information Technology > Security & Privacy (0.68)