Asynchronous Bayesian Learning over a Network

Kinjal Bhar, He Bai, Jemin George, Carl Busart

arXiv.org (Artificial Intelligence)

Often the data on which a model must be trained is distributed among multiple computing agents and cannot be gathered at a single server location because of logistical constraints such as memory limits, the lack of efficient data-sharing means, or confidentiality requirements stemming from the sensitive nature of the data. Nevertheless, the need arises to train the same model on the entire distributed dataset. Training in isolation, with each agent using only its local data, may lead to overfitted models because the training data available to each agent is limited. Moreover, such isolated training is redundant: the isolated models must perform more parameter updates to reach a given level of accuracy than would be needed if the agents shared information. Distributed learning aims to leverage the full distributed dataset through coordinated training among all the agents, in which the agents may share partial information (usually the learned model parameters or their gradients) without sharing any raw data.
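
To make the parameter-sharing idea concrete, below is a minimal sketch of decentralized training by consensus averaging. Everything in it is an assumption for illustration, not the paper's method: the paper concerns asynchronous Bayesian learning, whereas this toy uses a synchronous gossip scheme, a ring communication graph, and a linear-regression task. Each agent takes a gradient step on its private data shard and then averages parameters with its neighbors; raw data never leaves an agent.

```python
# Illustrative sketch only: synchronous gossip averaging on a ring,
# NOT the paper's asynchronous Bayesian algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Each agent holds a private shard of data; raw data is never exchanged.
n_agents, n_features, n_local = 4, 3, 50
w_true = rng.normal(size=n_features)
data = []
for _ in range(n_agents):
    X = rng.normal(size=(n_local, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    data.append((X, y))

# Assumed ring topology: each agent communicates with two neighbors.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}

params = [rng.normal(size=n_features) for _ in range(n_agents)]
lr = 0.1

for step in range(300):
    # Local step: gradient of the least-squares loss on each agent's shard.
    local = []
    for i, (X, y) in enumerate(data):
        grad = X.T @ (X @ params[i] - y) / n_local
        local.append(params[i] - lr * grad)
    # Consensus step: mix parameters with neighbors (weights sum to 1);
    # only parameters cross the network, never the raw (X, y) data.
    params = [
        0.5 * local[i] + 0.25 * sum(local[j] for j in neighbors[i])
        for i in range(n_agents)
    ]

print("max distance to true weights:",
      max(np.linalg.norm(w - w_true) for w in params))
```

After a few hundred rounds every agent's parameter vector is close to the weights that fit the pooled data, even though each agent only ever saw its own shard, which is the payoff of sharing parameters instead of data.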
