FedSplit: an algorithmic framework for fast federated optimization
Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication. We first study some past procedures for federated optimization, and show that their fixed points need not correspond to stationary points of the original optimization problem, even in simple convex settings with deterministic updates. In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure. We prove that these procedures have the correct fixed points, corresponding to optima of the original optimization problem, and we characterize their convergence rates under different settings. Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities. We complement our theory with some experiments that demonstrate the benefits of our methods in practice.
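To make the operator-splitting idea concrete, here is a minimal sketch of a splitting iteration of the kind the abstract describes, specialized to per-client least-squares losses so the proximal step has a closed form. The update shown (server averaging, a local prox step, then a local centering step) is my reconstruction of a Peaceman-Rachford-style consensus scheme, not the paper's exact pseudocode; the names (prox_least_squares, fedsplit_round, step size s) are illustrative.

import numpy as np

def prox_least_squares(A, b, v, s):
    # prox_{s f}(v) for f(x) = 0.5 * ||A x - b||^2, i.e. the minimizer of
    # 0.5 * ||A u - b||^2 + (1 / (2 s)) * ||u - v||^2, in closed form.
    d = A.shape[1]
    return np.linalg.solve(np.eye(d) + s * (A.T @ A), v + s * (A.T @ b))

def fedsplit_round(z, data, s):
    # One synchronous round: the server averages and broadcasts x_bar; each
    # client takes a local prox step followed by a local centering step.
    x_bar = z.mean(axis=0)
    z_next = np.empty_like(z)
    for j, (A, b) in enumerate(data):  # runs in parallel on real clients
        z_half = prox_least_squares(A, b, 2.0 * x_bar - z[j], s)
        z_next[j] = z[j] + 2.0 * (z_half - x_bar)
    return z_next

rng = np.random.default_rng(0)
m, n, d = 4, 20, 5  # clients, samples per client, dimension
data = [(rng.normal(size=(n, d)), rng.normal(size=n)) for _ in range(m)]
z = np.zeros((m, d))
for _ in range(200):
    z = fedsplit_round(z, data, s=0.1)
x_hat = z.mean(axis=0)

# Sanity check against the centralized least-squares solution.
A_all = np.vstack([A for A, _ in data])
b_all = np.concatenate([b for _, b in data])
x_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
print(np.linalg.norm(x_hat - x_star))

At a fixed point of this iteration, setting the per-client gradients of the prox conditions to zero and averaging shows that the mean of the z_j is a stationary point of the summed objective, which is the "correct fixed points" property the abstract emphasizes.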
Anytime Planning for End-Effector Trajectory Tracking
Yeping Wang, Michael Gleicher
End-effector trajectory tracking algorithms find joint motions that drive robot manipulators to track reference trajectories. In practical scenarios, anytime algorithms are preferred for their ability to quickly generate initial motions and continuously refine them over time. In this paper, we present an algorithmic framework that adapts common graph-based trajectory tracking algorithms to be anytime and enhances their efficiency and effectiveness. Our key insight is to identify guide paths that approximately track the reference trajectory and strategically bias sampling toward the guide paths. We demonstrate the effectiveness of the proposed framework by restructuring two existing graph-based trajectory tracking algorithms and evaluating the updated algorithms in three experiments.
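As a concrete illustration of the sampling bias described above, here is a minimal sketch under my own assumptions about the representation: the guide path is a list of joint configurations, and a tunable fraction of samples are drawn as Gaussian perturbations of guide-path waypoints rather than uniformly from the joint limits. The names (sample_config, bias, sigma) are hypothetical, not the paper's API.

import numpy as np

def sample_config(rng, joint_lo, joint_hi, guide_path, bias=0.7, sigma=0.1):
    # With probability `bias`, perturb a random waypoint of the guide path
    # (refinement near an approximate tracking solution); otherwise sample
    # uniformly within the joint limits (global exploration).
    if guide_path and rng.random() < bias:
        q = guide_path[rng.integers(len(guide_path))]
        candidate = q + rng.normal(scale=sigma, size=q.shape)
    else:
        candidate = rng.uniform(joint_lo, joint_hi)
    return np.clip(candidate, joint_lo, joint_hi)

rng = np.random.default_rng(0)
lo, hi = -np.pi * np.ones(6), np.pi * np.ones(6)
guide = [rng.uniform(lo, hi) for _ in range(10)]  # stand-in guide path
q_new = sample_config(rng, lo, hi, guide)

Annealing sigma or bias over time is one natural way to get the anytime behavior: early samples explore broadly, while later samples concentrate near the best guide paths found so far.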
Review for NeurIPS paper: FedSplit: an algorithmic framework for fast federated optimization
Weaknesses: Main criticism: 1) The paper claims two main contributions, one of which is: "The first contribution of this paper is to analyze some past procedures, and show that even in the favorable setting of deterministic updates (i.e., no stochastic approximation used), these methods typically fail to preserve solutions of the original optimization problem as fixed points." I believe this text is misleading. In fact, it was already well known that the "past procedures" do not have the correct fixed points; one alternative approach to deal with this issue is to correct for the client "drift"; see https://arxiv.org/abs/1910.06378 for example. Therefore, I believe it would be more appropriate not to claim showing the incorrect fixed points of the local algorithms as a contribution. 2) In the strongly convex case (the same arguments apply in the weakly convex case), the communication complexity of FedSplit is O(sqrt(kappa) log(1/epsilon)), which is identical to the communication complexity of AGD. In fact, AGD is favorable (in terms of the rate), as it requires only a single gradient evaluation per round instead of evaluating the prox with high enough precision that the inexactness does not dominate the rate.
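For concreteness, the rate comparison in the reviewer's second point can be written as follows; the notation (condition number kappa = L/mu, target accuracy epsilon) is mine, not the review's:

\[
T_{\mathrm{FedSplit}}(\epsilon)
  \;=\; \mathcal{O}\!\bigl(\sqrt{\kappa}\,\log(1/\epsilon)\bigr)
  \;=\; T_{\mathrm{AGD}}(\epsilon),
\]

with each FedSplit round requiring an inexact proximal evaluation per client, versus a single gradient evaluation per client per round for AGD.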