Distributed Online Optimization with Stochastic Agent Availability

Juliette Achddou, Nicolò Cesa-Bianchi, Hao Qiu

arXiv.org Artificial Intelligence 

Motivated by practical federated learning settings where clients may not always be available, we investigate a variant of distributed online optimization where agents are active with a known probability p at each time step, and communication between neighboring agents can only take place if they are both active. We introduce a distributed variant of the FTRL algorithm and analyze its network regret, defined through the average of the instantaneous regret of the active agents. Our analysis shows that, for any connected communication graph G over N agents, the expected network regret of our

In this work we focus on distributed online optimization (DOO), an online learning variant of distributed convex optimization in which each agent faces an adversarial sequence of convex loss functions (Hosseini et al., 2013). The goal of an agent is to minimize its regret with respect to a sequence of global loss functions, each obtained by summing the corresponding local losses of all agents. In both batch and online distributed optimization settings, the presence of the communication network, which limits the exchange of information to adjacent nodes, implies that agents must use some information-propagation technique to collect information about the global loss function.
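To make the interaction protocol concrete, the following is a minimal one-dimensional simulation sketch of the setting described above: each agent is independently active with probability p, an edge carries information only when both endpoints are active, and the per-round network loss averages over the active agents. The update rule used here (FTRL with a quadratic regularizer on the accumulated gradients, combined with neighbor averaging) is an illustrative stand-in chosen for brevity, not the paper's exact algorithm, and all names below are hypothetical.

```python
import random

def simulate(adjacency, p, T, eta=0.1, seed=0):
    """Sketch of DOO with stochastic agent availability (1-d linear losses).

    adjacency: dict mapping agent index -> list of neighbor indices.
    p: probability that each agent is active at each time step.
    Returns the cumulative network loss (average loss of active agents).
    """
    rng = random.Random(seed)
    n = len(adjacency)
    grad_sum = [0.0] * n  # each agent's estimate of the cumulative gradient
    total_active_loss = 0.0
    for _ in range(T):
        # each agent is independently active with known probability p
        active = [rng.random() < p for _ in range(n)]
        actives = [i for i in range(n) if active[i]]
        # FTRL play with quadratic regularizer:
        #   x_i = argmin_x (eta * G_i * x + x^2 / 2)  =>  x_i = -eta * G_i
        plays = [-eta * g for g in grad_sum]
        # adversarial linear loss f_t(x) = g_t * x with g_t in [-1, 1]
        g_t = rng.uniform(-1.0, 1.0)
        if actives:
            # network loss for this round: average over the ACTIVE agents only
            total_active_loss += sum(g_t * plays[i] for i in actives) / len(actives)
        # communication step: an edge transmits only if BOTH endpoints are active
        new_sum = grad_sum[:]
        for i in actives:
            nbrs = [j for j in adjacency[i] if active[j]]
            if nbrs:
                # average cumulative gradients with currently reachable neighbors
                new_sum[i] = (grad_sum[i] + sum(grad_sum[j] for j in nbrs)) / (1 + len(nbrs))
            new_sum[i] += g_t  # active agents also observe the fresh gradient
        grad_sum = new_sum
    return total_active_loss
```

For example, on a 3-agent path graph, `simulate({0: [1], 1: [0, 2], 2: [1]}, p=0.7, T=100)` runs the protocol for 100 rounds; with p=0 no agent is ever active, so the cumulative network loss stays at zero.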