High-Probability Analysis of Online and Federated Zero-Order Optimisation

2025-09-28 19:00 GMT

arXiv:2509.21484v1 Announce Type: new
Abstract: We study distributed learning in the setting of gradient-free zero-order optimization and introduce FedZero, a federated zero-order algorithm that delivers sharp theoretical guarantees. Specifically, FedZero: (1) achieves near-optimal optimization error bounds with high probability in the federated convex setting; and (2) in the single-worker regime, where the problem reduces to the standard zero-order framework, establishes the first high-probability convergence guarantees for convex zero-order optimization, thereby strengthening the classical expectation-based results. At its core, FedZero employs a gradient estimator based on randomization over the $\ell_1$-sphere. To analyze it, we develop new concentration inequalities for Lipschitz functions under the uniform measure on the $\ell_1$-sphere, with explicit constants. These concentration tools are not only central to our high-probability guarantees but may also be of independent interest.
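
The abstract does not spell out the exact form of the estimator, so the sketch below is an assumption based on the standard two-point $\ell_1$-randomization estimator from the zero-order literature: sample a direction uniformly from the unit $\ell_1$-sphere, query the function at two symmetric points, and scale the difference by the dimension and the smoothing radius. The function names (`sample_l1_sphere`, `l1_gradient_estimate`) and the constant in front are illustrative, not taken from the paper.

```python
import numpy as np

def sample_l1_sphere(d, rng):
    """Draw a point uniformly from the unit l1-sphere in R^d.

    Exponential draws normalized by their sum are uniform on the probability
    simplex; attaching independent random signs spreads that distribution
    over the whole l1-sphere (boundary of the cross-polytope).
    """
    e = rng.exponential(scale=1.0, size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    return signs * e / e.sum()

def l1_gradient_estimate(f, x, h, rng):
    """Two-point zero-order gradient estimate via l1-randomization (assumed form).

    Uses  (d / (2h)) * (f(x + h*zeta) - f(x - h*zeta)) * sign(zeta),
    where zeta is uniform on the l1-sphere and h > 0 is the smoothing radius.
    Only two function evaluations are needed per estimate.
    """
    d = x.shape[0]
    zeta = sample_l1_sphere(d, rng)
    return (d / (2.0 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)

# Minimal usage example on a quadratic, where the true gradient at x is 2x.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2)
x = np.ones(5)
g = l1_gradient_estimate(f, x, h=1e-3, rng=rng)
print(g)  # a noisy, unbiased-in-expectation estimate of 2*x
```

A single draw is noisy; in an optimization loop each worker would typically plug such an estimate into a (projected) gradient step and rely on averaging across iterations, which is where the paper's high-probability concentration results for Lipschitz functions on the $\ell_1$-sphere come into play.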