Distributed Zero-Order Optimization under Adversarial Noise
Neural Information Processing Systems
We study the problem of distributed zero-order optimization for a class of strongly convex functions, formed as the average of local objectives, each associated with a node in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem, in which exchange of information is permitted only between neighbouring nodes. An important feature of our procedure is that it queries only function values, subject to a general noise model that requires neither zero-mean nor independent errors.
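To make the setting concrete, below is a minimal sketch, not the authors' exact method or analysis, of one plausible instantiation: each node holds a local iterate, averages it with its neighbours through a doubly stochastic mixing matrix, estimates a gradient from two noisy function queries along a random direction, and projects onto a Euclidean ball. The quadratic local objectives, the ring network, the specific noise perturbation, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch of distributed zero-order projected gradient descent, assuming:
#   - illustrative strongly convex local objectives f_i(x) = 0.5 ||x - b_i||^2,
#   - a ring network with a doubly stochastic mixing matrix W,
#   - queries return f_i(x) plus bounded noise that is neither zero-mean
#     nor independent (a deterministic perturbation stands in for it here),
#   - projection onto a Euclidean ball of radius R.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, R, h, T = 5, 10, 5.0, 1e-2, 2000

# Illustrative local objectives: minimiser of the average is mean(b_i).
b = rng.normal(size=(n_nodes, dim))

def noisy_query(i, x):
    """Return f_i(x) corrupted by bounded, non-zero-mean noise."""
    noise = 1e-3 * np.sin(np.sum(x))   # arbitrary bounded perturbation
    return 0.5 * np.sum((x - b[i]) ** 2) + noise

def project_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

# Ring topology: each node mixes only with its two neighbours.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

X = np.zeros((n_nodes, dim))           # one iterate per node
for t in range(1, T + 1):
    eta = 1.0 / t                      # decaying step size (strong convexity)
    X = W @ X                          # consensus step with neighbours only
    for i in range(n_nodes):
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)         # random direction on the unit sphere
        # Two-point zero-order gradient estimate from noisy queries.
        g = dim * (noisy_query(i, X[i] + h * u)
                   - noisy_query(i, X[i] - h * u)) / (2 * h) * u
        X[i] = project_ball(X[i] - eta * g, R)

x_bar = X.mean(axis=0)
print("distance to minimiser:", np.linalg.norm(x_bar - b.mean(axis=0)))
```

Running the sketch, the averaged iterate approaches the global minimiser despite the biased query noise, illustrating why the general noise model only needs the perturbation to be bounded rather than zero-mean or independent.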