Federated Gaussian Process: Convergence, Automatic Personalization and Multi-fidelity Modeling

Yue, Xubo, Kontar, Raed Al

arXiv.org Machine Learning 

The modern era of computing is gradually shifting from a centralized regime, where data is stored in a central location such as a cloud or central server, to a decentralized paradigm that allows clients to collaboratively learn models while keeping their data stored locally (Kontar et al., 2021). This paradigm shift was set in motion by the massive increase in compute resources at edge devices and rests on one simple idea: instead of learning models on a central server, edge devices execute small computations locally and share only the minimum information needed to learn a model. This modern paradigm is often termed federated learning (FL). Though the prototypical idea of FL dates back decades, to the early work of Mangasarian and Solodov (1994), it was brought to the forefront of deep learning only after the seminal paper by McMahan et al. (2017). In their work, McMahan et al. (2017) propose Federated Averaging (FedAvg) for decentralized learning of a deep learning model. In FedAvg, a central server broadcasts the network architecture and a global model (e.g., initial weights) to selected clients, clients perform local computations (using stochastic