Directed-MAML: Meta Reinforcement Learning Algorithm with Task-directed Approximation
Yang Zhang, Huiwen Yan, Mushuang Liu
arXiv.org Artificial Intelligence
Model-Agnostic Meta-Learning (MAML) is a versatile meta-learning framework applicable to both supervised learning and reinforcement learning (RL). However, applying MAML to meta-reinforcement learning (meta-RL) presents notable challenges. First, MAML relies on second-order gradient computations, leading to significant computational and memory overhead. Second, the nested optimization structure increases the problem's complexity, making convergence to a global optimum more challenging. To overcome these limitations, we propose Directed-MAML, a novel task-directed meta-RL algorithm. Before the second-order gradient step, Directed-MAML applies an additional first-order task-directed approximation to estimate the effect of second-order gradients, thereby accelerating convergence to the optimum and reducing computational cost. Experimental results demonstrate that Directed-MAML surpasses MAML-based baselines in computational efficiency and convergence speed on CartPole-v1, LunarLander-v2, and a two-vehicle intersection-crossing scenario. Furthermore, we show that the task-directed approximation can be effectively integrated into other meta-learning algorithms, such as First-Order Model-Agnostic Meta-Learning (FOMAML) and Meta Stochastic Gradient Descent (Meta-SGD), yielding improved computational efficiency and convergence speed.
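To make the inner/outer-loop structure concrete, here is a minimal sketch of the first-order MAML-style meta-update (FOMAML) that the abstract mentions as a baseline, on a toy family of 1-D quadratic "tasks" f_c(θ) = (θ − c)². This illustrates only the generic two-level optimization that Directed-MAML builds on; the paper's task-directed approximation itself is not reproduced here, and the task family, step sizes, and function names are illustrative assumptions.

```python
# Toy task family: each task is a 1-D quadratic with optimum at c.
def loss(theta, c):
    return (theta - c) ** 2

def grad(theta, c):
    return 2.0 * (theta - c)

def fomaml_step(theta, tasks, alpha=0.1, beta=0.05):
    """One first-order meta-update (FOMAML-style sketch).

    Inner loop: adapt theta to each task with one gradient step.
    Outer loop: average the post-adaptation gradients, dropping the
    second-order term d(theta_adapted)/d(theta) -- this drop is the
    first-order approximation.
    """
    meta_grad = 0.0
    for c in tasks:
        theta_adapted = theta - alpha * grad(theta, c)   # inner adaptation
        meta_grad += grad(theta_adapted, c)              # first-order meta-gradient
    return theta - beta * meta_grad / len(tasks)

tasks = [-1.0, 0.5, 2.0]   # per-task optima
theta = 5.0                # meta-parameter initialization
for _ in range(200):
    theta = fomaml_step(theta, tasks)
# For this symmetric quadratic family, theta converges toward the mean
# of the task optima (0.5), a good initialization for one-step adaptation.
```

Full MAML would instead differentiate through the inner adaptation step, which for RL requires second-order gradients; the first-order variant above avoids that at the cost of a biased meta-gradient.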
Oct-2-2025