

Distributed Multitask Reinforcement Learning with Quadratic Convergence

Tutunov, Rasul, Kim, Dongho, Ammar, Haitham Bou

Neural Information Processing Systems

Multitask reinforcement learning (MTRL) suffers from scalability issues when the number of tasks or trajectories grows large. The main reason behind this drawback is the reliance on centralised solutions. Recent methods exploited the connection between MTRL and general consensus to propose scalable solutions. These methods, however, suffer from two drawbacks: first, they rely on predefined objectives, and second, they exhibit only linear convergence guarantees. In this paper, we improve over the state of the art by deriving multitask reinforcement learning from a variational inference perspective. We then propose a novel distributed solver for MTRL with quadratic convergence guarantees.
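For readers unfamiliar with the "general consensus" connection mentioned in the abstract, the usual reformulation gives each task a local copy of the shared policy parameters and enforces agreement across a communication graph; the sketch below shows that generic formulation and the textbook definition of quadratic convergence, not the paper's exact objective or solver.

$$\min_{\theta_1,\dots,\theta_T}\ \sum_{t=1}^{T} \ell_t(\theta_t) \quad \text{s.t.} \quad \theta_i = \theta_j \ \ \forall (i,j) \in \mathcal{E},$$

where $\ell_t$ is task $t$'s local loss and $\mathcal{E}$ is the edge set of the communication graph. A solver converges quadratically if, near the optimum $\theta^{*}$,

$$\|\theta^{k+1} - \theta^{*}\| \ \le\ C\,\|\theta^{k} - \theta^{*}\|^{2} \quad \text{for some constant } C > 0,$$

whereas linearly convergent methods only guarantee $\|\theta^{k+1} - \theta^{*}\| \le \rho\,\|\theta^{k} - \theta^{*}\|$ with $\rho < 1$.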



Reviews: Distributed Multitask Reinforcement Learning with Quadratic Convergence

Neural Information Processing Systems

In this paper, the authors study the problem of multitask reinforcement learning (MTRL) and propose several optimization techniques to alleviate the scalability issues observed in other methods, especially when the number of tasks or trajectories is large. Specifically, they rely on consensus algorithms to scale up MTRL and avoid the issues inherent in centralized solution methods. Furthermore, they show how MTRL can be improved over state-of-the-art benchmarks by considering the problem from a variational inference perspective, and they propose a novel distributed solver for MTRL with quadratic convergence guarantees. In general, this work tackles important problems in the increasingly popular domain of multi-task RL. Under the variational perspective of RL, the problem of MTRL can be cast as a variational inference problem, and policy search can be carried out through minimization of the ELBO loss. To alternate updates of the variational parameters and the policy parameters, the authors also propose EM-based approaches, which is a reasonable choice.
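As a rough illustration of the EM-style alternation the review describes (a hedged sketch only, not the authors' algorithm), one can alternate an E-step that reweights sampled trajectories, e.g. by exponentiated returns, with an M-step that takes a weighted maximum-likelihood step on the policy parameters. All names below (sample_trajectories, returns_of, grad_log_policy, the temperature eta, and the step size alpha) are hypothetical placeholders.

```python
import numpy as np

def em_policy_search(theta, sample_trajectories, returns_of, grad_log_policy,
                     eta=1.0, alpha=0.01, iters=50):
    """Illustrative EM-style alternation for variational policy search."""
    for _ in range(iters):
        # Sample trajectories under the current policy parameters.
        trajs = sample_trajectories(theta)

        # E-step: update the variational distribution over trajectories by
        # reweighting them with exponentiated returns (a softmax), which
        # tightens the bound for fixed policy parameters.
        returns = np.array([returns_of(tau) for tau in trajs])
        weights = np.exp(eta * (returns - returns.max()))
        weights /= weights.sum()

        # M-step: weighted maximum-likelihood update of the policy parameters,
        # here a single gradient-ascent step on the weighted log-likelihood.
        grad = sum(w * grad_log_policy(theta, tau)
                   for w, tau in zip(weights, trajs))
        theta = theta + alpha * grad
    return theta
```

In the paper's distributed setting, the policy-parameter update is presumably the step that must be agreed upon across tasks, which is where a consensus solver with quadratic convergence guarantees matters.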

