Anderson acceleration
- Asia > China > Beijing > Beijing (0.05)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > Canada > Quebec (0.04)
- North America > Canada > British Columbia (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Lyon > Lyon (0.04)
- Asia > China > Heilongjiang Province > Harbin (0.04)
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Massachusetts > Middlesex County > Belmont (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- Africa > Senegal > Kolda Region > Kolda (0.04)
- Information Technology > Mathematics of Computing (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
FedOSAA: Improving Federated Learning with One-Step Anderson Acceleration
Xue Feng, M. Paul Laiu, Thomas Strohmer
Federated learning (FL) is a distributed machine learning approach that enables multiple local clients and a central server to collaboratively train a model while keeping the data on the clients' own devices. First-order methods, particularly those incorporating variance reduction techniques, are the most widely used FL algorithms due to their simple implementation and stable performance. However, these methods tend to be slow and require a large number of communication rounds to reach the global minimizer. We propose FedOSAA, a novel approach that preserves the simplicity of first-order methods while achieving the rapid convergence typically associated with second-order methods. Our approach applies one Anderson acceleration (AA) step following classical local updates based on first-order methods with variance reduction, such as FedSVRG and SCAFFOLD, during local training. This AA step leverages curvature information from the history of iterates and produces a new update that approximates the Newton-GMRES direction, thereby significantly improving convergence. We establish a local linear convergence rate to the global minimizer of FedOSAA for smooth and strongly convex loss functions. Numerical comparisons show that FedOSAA substantially improves the communication and computation efficiency of the original first-order methods, achieving performance comparable to that of second-order methods such as GIANT.
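
The paper's own implementation is not reproduced here, but the abstract's core primitive, a single Anderson acceleration step applied to a short history of iterates, is simple enough to sketch. The following minimal NumPy sketch shows a Type-II AA step of this general kind; the function name anderson_step, the window size of 5, the toy quadratic, and the small Tikhonov safeguard reg are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def anderson_step(x_hist, g_hist, reg=1e-10):
    """One Anderson acceleration (Type II) step.

    x_hist, g_hist: lists of the last m iterates x_i and fixed-point
    evaluations g(x_i), oldest first. reg: small Tikhonov term that
    keeps the solve well posed as the residuals shrink (reg = 0
    recovers classical AA).
    """
    X = np.stack(x_hist)               # shape (m, d)
    G = np.stack(g_hist)               # shape (m, d)
    F = G - X                          # residuals f_i = g(x_i) - x_i
    m = len(x_hist)
    # Minimize ||F.T @ alpha||^2 + reg * ||alpha||^2 subject to
    # sum(alpha) = 1; the constrained solution is alpha ∝ A^{-1} 1.
    A = F @ F.T + reg * np.eye(m)
    alpha = np.linalg.solve(A, np.ones(m))
    alpha /= alpha.sum()
    # The accelerated iterate mixes the g-evaluations, not the x's.
    return alpha @ G

# Toy usage: accelerate gradient descent on a strongly convex
# quadratic, whose fixed-point map is g(x) = x - lr * (Q @ x - b).
Q = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, -2.0, 0.5])
g = lambda x, lr=1e-2: x - lr * (Q @ x - b)

x = np.zeros(3)
x_hist, g_hist = [], []
for _ in range(30):
    x_hist.append(x)
    g_hist.append(g(x))
    x_hist, g_hist = x_hist[-5:], g_hist[-5:]   # window m = 5
    x = anderson_step(x_hist, g_hist)
print(np.linalg.norm(Q @ x - b))   # gradient norm after 30 steps
```

Because the AA iterate is a residual-minimizing combination of past fixed-point evaluations, on a linear problem like this toy it behaves like a windowed GMRES, which is the Newton-GMRES connection the abstract invokes.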
- Government > Regional Government > North America Government > United States Government (1.00)
- Energy (0.68)
Reviews: Regularized Anderson Acceleration for Off-Policy Deep Reinforcement Learning
The main contribution of this paper is to apply Anderson acceleration to the setting of deep reinforcement learning. The authors first propose a regularized form of Anderson acceleration, and then show how it can be applied to two practical deep RL algorithms: DQN and TD3. Originality: This paper falls in the vein of applying existing techniques to a novel domain. While the idea of introducing Anderson acceleration to RL is not new, as the authors note, it had not previously been applied to deep RL methods. Although the originality is somewhat limited in this respect, developing a practical and functional improvement for deep RL algorithms is not trivial.
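
To make the review's object concrete: "regularized" Anderson acceleration can be read as adding a Tikhonov penalty to the least-squares solve that produces the AA mixing coefficients, which matters in deep RL because Bellman residuals from bootstrapped targets are noisy and often nearly collinear. A minimal sketch under that reading follows; the function name, the lam parameter, and the residual layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def regularized_aa_weights(residuals, lam=1e-3):
    """Mixing weights for a regularized Anderson acceleration step.

    residuals: (m, d) array whose i-th row is the Bellman residual
    T(Q_i) - Q_i of the i-th stored value estimate. lam > 0 is the
    Tikhonov strength: larger values pull the weights toward a
    uniform average, damping the aggressive extrapolation that
    noisy deep RL targets can destabilize.
    """
    m = residuals.shape[0]
    A = residuals @ residuals.T + lam * np.eye(m)
    alpha = np.linalg.solve(A, np.ones(m))
    return alpha / alpha.sum()           # weights sum to 1

# The accelerated target then mixes the stored estimates, e.g.
# Q_acc = sum(alpha[i] * TQ[i] for i in range(m)).
```

This also illustrates the review's point: the scheme itself is standard numerics, and the nontrivial work is making it stable enough to sit inside DQN- and TD3-style training loops.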