Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up

Richards, Dominic, Rebeschini, Patrick

Neural Information Processing Systems

We analyse the learning performance of Distributed Gradient Descent in the context of multi-agent decentralised non-parametric regression with the square loss function when i.i.d. samples are assigned to agents. We show that if agents hold sufficiently many samples with respect to the network size, then Distributed Gradient Descent achieves optimal statistical rates with a number of iterations that scales, up to a threshold, with the inverse of the spectral gap of the gossip matrix divided by the number of samples owned by each agent raised to a problem-dependent power. The threshold is statistical in origin: it encodes the existence of a "big data" regime in which the number of required iterations does not depend on the network topology. In this regime, Distributed Gradient Descent achieves optimal statistical rates with the same order of iterations as gradient descent run with all the samples in the network.
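To make the protocol concrete, below is a minimal sketch of Distributed Gradient Descent for the special case of linear least squares. The paper's setting is non-parametric (kernel) regression, so this finite-dimensional model, the ring gossip matrix P, and the step size and iteration count are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def distributed_gradient_descent(X_parts, y_parts, P, eta=0.1, T=200):
    """Sketch of Distributed Gradient Descent (DGD) for least squares.

    X_parts, y_parts : per-agent i.i.d. samples (lists of arrays)
    P                : doubly stochastic gossip matrix (n_agents x n_agents)
    eta, T           : step size and iteration count (illustrative values)
    """
    n_agents, d = len(X_parts), X_parts[0].shape[1]
    W = np.zeros((n_agents, d))  # one iterate per agent
    for _ in range(T):
        # Each agent computes the gradient of its own empirical square loss.
        grads = np.stack([
            X_parts[v].T @ (X_parts[v] @ W[v] - y_parts[v]) / len(y_parts[v])
            for v in range(n_agents)
        ])
        # DGD update: gossip-average neighbours' iterates, then step locally.
        W = P @ W - eta * grads
    return W

# Usage on a 4-agent ring with hypothetical synthetic data.
rng = np.random.default_rng(0)
w_star = rng.normal(size=3)
X_parts = [rng.normal(size=(50, 3)) for _ in range(4)]
y_parts = [X @ w_star + 0.1 * rng.normal(size=50) for X in X_parts]
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic ring gossip
W = distributed_gradient_descent(X_parts, y_parts, P)
print(np.linalg.norm(W.mean(axis=0) - w_star))  # network average nears w_star
```

Each iteration couples one round of gossip communication (the multiplication by P) with one local gradient step; the spectral gap of P, i.e. one minus the modulus of its second-largest eigenvalue, is the network quantity that enters the iteration threshold described in the abstract.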


Reviews: Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up

Neural Information Processing Systems

Update: I have read the author response and appreciate that they addressed some of my comments. The focus is on obtaining statistical guarantees on generalization. This is a direction highly relevant to the growing body of work on decentralized training. The paper is generally well written, contains very original ideas, and I was very excited to read it. The main reason I didn't give a higher rating was the limitations listed at the beginning of Sec. 5; I commend the authors for acknowledging them.


Reviews: Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up

Neural Information Processing Systems

This paper provides a nice and clean characterization of a decentralized learning problem. The result is perhaps unsurprising in its form, but the analysis is far from trivial. Some nontrivial assumptions are needed for the results to hold, which perhaps limits the scope of the result but suggests interesting avenues for future research in this increasingly important area. Overall, this is a solid contribution and should be of interest to NeurIPS attendees who work in optimization and distributed systems.

