A Hybrid Stochastic Gradient Tracking Method for Distributed Online Optimization Over Time-Varying Directed Networks
Xinli Shi, Xingxing Yuan, Longkang Zhu, Guanghui Wen
–arXiv.org Artificial Intelligence
Distributed optimization aims to solve a large-scale optimization problem by decomposing it into smaller, more tractable subproblems that a network of interconnected agents solves iteratively and in parallel through local communication. Most traditional works on distributed optimization focus on static problems, making them unsuitable for the dynamic tasks arising in real-world applications such as networked autonomous vehicles, smart grids, and online machine learning [8]. Online optimization, which addresses time-varying cost functions, plays a vital role in solving dynamic problems in time-critical application fields [58, 29, 21, 3]. In many practical scenarios, such as machine learning over information streams [46], the objective functions change over time, making the problems inherently dynamic [49, 58]. Online learning has emerged as a powerful method for sequential decision-making in dynamic contexts, enabling real-time operation while ensuring bounded performance loss in terms of regret [12]. Regret is the gap between the cumulative objective value achieved by the online algorithm and that of the optimal offline solution [19, 44]. In the literature, two types of regret are commonly considered: static regret, which compares against the best fixed decision in hindsight, and dynamic regret, which compares against the sequence of per-round minimizers.
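The two regret notions can be made concrete with a minimal sketch. The following is not the paper's algorithm: it runs plain online gradient descent on illustrative time-varying scalar losses f_t(x) = (x - a_t)^2, where the drifting targets a_t, horizon, and step size are all assumptions chosen for illustration, and then computes both static and dynamic regret.

```python
# A minimal sketch (not the paper's method): online gradient descent on
# time-varying scalar losses f_t(x) = (x - a_t)^2, illustrating static
# vs. dynamic regret. Targets a_t, horizon T, and step size are illustrative.
import numpy as np

T = 200
targets = np.sin(np.linspace(0, 4 * np.pi, T))  # drifting minimizers a_t

def loss(x, a):
    return (x - a) ** 2

x, eta = 0.0, 0.1
xs = []
for a in targets:
    xs.append(x)
    grad = 2.0 * (x - a)   # gradient of f_t at the current iterate
    x -= eta * grad        # online gradient step
xs = np.array(xs)

cum_loss = loss(xs, targets).sum()

# Static regret: compare against the single best fixed decision in
# hindsight; for quadratic losses, argmin_x sum_t f_t(x) is the mean of a_t.
x_star = targets.mean()
static_regret = cum_loss - loss(x_star, targets).sum()

# Dynamic regret: compare against the per-round minimizers x_t* = a_t,
# whose per-round loss is zero here, so it equals the cumulative loss.
dynamic_regret = cum_loss

print(f"static regret:  {static_regret:.3f}")
print(f"dynamic regret: {dynamic_regret:.3f}")
```

Because the dynamic comparator is at least as good as any fixed point, dynamic regret always upper-bounds static regret, which is why dynamic regret is the more demanding benchmark for time-varying problems.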
Aug-29-2025