Distributed Online Learning for Joint Regret with Communication Constraints

Dirk van der Hoeven, Hédi Hadiji, Tim van Erven

arXiv.org Machine Learning

We consider a decentralized online convex optimization (OCO) setting with multiple agents that share information across a network to improve the prediction quality of the network as a whole. Our motivation comes from cases where local computation is cheap but communication is relatively expensive. This is the case, for instance, in sensor networks, where the energy cost of wireless communication is typically the main bottleneck, and long-distance communication requires much more energy than communication between nearby sensors (Rabbat and Nowak, 2004). It also applies to cases where communication is relatively slow compared to the volume of prediction requests that each agent must serve. For instance, in climate informatics communication may be slow because agents are geographically spread out (McQuade and Monteleoni, 2012, 2017), and in finance or online advertising the rate of prediction requests may be so high that communication is slow by comparison. To model such scenarios, we limit communication in two ways: first, agents can only communicate directly with their neighbors in a communication graph G and, second, the messages that the agents can send are limited to contain at most b bits. We further assume that learning is fully decentralized, so there is no central coordinating agent as in federated learning (Kairouz et al., 2019), and no single agent that dictates the predictions for all other agents as in distributed online optimization for consensus problems (Hosseini et al., 2013; Yan et al., 2013).
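To make the setting concrete, here is a minimal sketch of communication-limited decentralized online learning. It is not the paper's algorithm: each agent simply runs online gradient descent on its own losses and, in each round, sends a quantized copy of its iterate to its neighbors in G. For simplicity the sketch quantizes to b bits per coordinate, whereas the paper bounds the total message size by b bits; the graph, step sizes, and helper names are all illustrative assumptions.

```python
import numpy as np

def quantize(x, b, lo=-1.0, hi=1.0):
    """Uniformly quantize each coordinate of x to b bits in [lo, hi]."""
    levels = 2 ** b - 1
    clipped = np.clip(x, lo, hi)
    idx = np.round((clipped - lo) / (hi - lo) * levels)
    return lo + idx * (hi - lo) / levels

class Agent:
    def __init__(self, dim, eta):
        self.x = np.zeros(dim)   # current prediction point
        self.eta = eta           # step size

    def update(self, grad):
        # Local online gradient descent step (cheap local computation).
        self.x -= self.eta * grad

    def receive(self, messages):
        # Mix in the quantized iterates received from graph neighbors.
        if messages:
            self.x = 0.5 * self.x + 0.5 * np.mean(messages, axis=0)

# Example: 4 agents on a path graph, linear losses f_t(x) = <g_t, x>.
rng = np.random.default_rng(0)
dim, b = 5, 4
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # neighbor lists of G
agents = [Agent(dim, eta=0.1) for _ in graph]

for t in range(100):
    grads = [rng.standard_normal(dim) for _ in agents]  # adversarial losses
    for agent, g in zip(agents, grads):
        agent.update(g)
    # Communication round: each agent sends a few bits per coordinate to
    # its graph neighbors only; long-range links are never used.
    msgs = {i: quantize(agents[i].x, b) for i in graph}
    for i, nbrs in graph.items():
        agents[i].receive([msgs[j] for j in nbrs])
```

The local mixing step illustrates the two constraints from the abstract: information spreads only along edges of G, and each transmitted vector is coarsened before it is sent, so nearby agents can still help each other while total communication stays small.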
