High-Probability Analysis of Online and Federated Zero-Order Optimisation
Akhavan, Arya, Janz, David, El-Mhamdi, El-Mahdi
We study distributed learning in the setting of gradient-free zero-order optimization and introduce FedZero, a federated zero-order algorithm that delivers sharp theoretical guarantees. Specifically, FedZero: (1) achieves near-optimal optimization error bounds with high probability in the federated convex setting; and (2) in the single-worker regime, where the problem reduces to the standard zero-order framework, establishes the first high-probability convergence guarantees for convex zero-order optimization, thereby strengthening the classical expectation-based results. At its core, FedZero employs a gradient estimator based on randomization over the $\ell_1$-sphere. To analyze it, we develop new concentration inequalities for Lipschitz functions under the uniform measure on the $\ell_1$-sphere, with explicit constants. These concentration tools are not only central to our high-probability guarantees but may also be of independent interest.
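The abstract mentions a gradient estimator built from randomization over the $\ell_1$-sphere. The following is a minimal sketch of one standard construction of such an estimator; the two-point symmetric-difference form, the sampler (normalized Laplace draws are uniform on the $\ell_1$-sphere), and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def l1_sphere_sample(d, rng):
    """Draw a point uniformly from the l1-sphere {z : ||z||_1 = 1} in R^d.

    Laplace-distributed coordinates normalized by their l1 norm are
    uniformly distributed on the l1-sphere (a standard fact; assumed here).
    """
    eps = rng.laplace(size=d)
    return eps / np.abs(eps).sum()

def l1_two_point_grad(f, x, h, rng):
    """Two-point zero-order gradient estimate of f at x with step h.

    Uses only function evaluations: queries f at x + h*zeta and x - h*zeta
    for a random l1-sphere direction zeta, then rescales by d/(2h) and
    multiplies coordinatewise by sign(zeta). This is a sketch of a generic
    l1-randomized estimator, not necessarily the one used by FedZero.
    """
    d = x.size
    zeta = l1_sphere_sample(d, rng)
    return (d / (2.0 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)
```

For a quadratic such as $f(x) = \|x\|_2^2$, averaging many such estimates recovers the true gradient $2x$, since the symmetric difference cancels the even-order terms; each estimate costs exactly two function evaluations regardless of dimension.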
Sep-29-2025