
Review for NeurIPS paper: High-Throughput Synchronous Deep RL

Neural Information Processing Systems

The baselines are somewhat weak. Though TorchBeast is a strong baseline, the PPO and A2C implementations from Kostrikov seem weak. As far as I know, fast training is not the goal of Kostrikov's implementation. For PPO, the implementation from OpenAI Baselines is stronger, as it features parallelization with MPI and all-reduce gradients. For A2C, one could consider rlpyt (rlpyt: A Research Code Base for Deep Reinforcement Learning in PyTorch), where various sampling schemes (including batch synchronization) and optimization schemes can be used.
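For context, the all-reduce gradient averaging mentioned above keeps every data-parallel worker's model replica identical after each update: each worker contributes its local gradient, and all workers receive the same average. A minimal sketch of the reduction being computed (the function name and the in-memory list of "workers" are hypothetical; a real implementation would call MPI's Allreduce or torch.distributed.all_reduce across processes):

```python
import numpy as np

def allreduce_mean(grads_per_worker):
    """Average gradients across workers, as an MPI all-reduce (SUM)
    divided by world size would. Every worker receives the same
    averaged gradient, so model replicas stay synchronized."""
    stacked = np.stack(grads_per_worker)     # shape: (world_size, *grad_shape)
    avg = stacked.mean(axis=0)               # element-wise mean over workers
    return [avg.copy() for _ in grads_per_worker]
```

In the real distributed setting, each process would then apply the identical averaged gradient to its local copy of the parameters, which is what makes MPI-parallel PPO both fast and synchronous.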


Review for NeurIPS paper: High-Throughput Synchronous Deep RL

Neural Information Processing Systems

This paper proposes a synchronous training scheme for reinforcement learning which addresses issues with existing synchronous methods (low throughput) and existing asynchronous methods (instability, non-reproducibility, etc.). The reviewers viewed this as more of an engineering paper, but the design, execution, and experiments are solid, so we are recommending acceptance. I saw that the paper mentions that code will be released, but I want to emphasize the importance of this, as a large part of the value here is in enabling others to build on and use the proposed method.


High-Throughput Synchronous Deep RL

Neural Information Processing Systems

Various parallel actor-learner methods reduce long training times for deep reinforcement learning. Synchronous methods enjoy training stability but have lower data throughput. In contrast, asynchronous methods achieve high throughput but suffer from stability issues and lower sample efficiency due to 'stale policies.' To combine the advantages of both methods, we propose High-Throughput Synchronous Deep Reinforcement Learning (HTS-RL). In HTS-RL, we perform learning and rollouts concurrently, devise a system design which avoids 'stale policies,' and ensure that actors interact with environment replicas in an asynchronous manner while maintaining full determinism.
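The pipelining the abstract describes can be illustrated with a small sequential simulation (all names here are hypothetical sketches, not the authors' implementation; the real system runs the two stages in parallel processes): at each step, actors collect a batch with the current policy while the learner updates on the previous, fully collected batch, so every batch is generated under a single, never-stale policy version.

```python
def hts_rl_pipeline(num_iters=5, num_envs=4):
    """Sequential simulation of concurrent rollouts + learning with
    double buffering. Returns, for each learner update, the policy
    version the trained batch was collected under. Each batch uses
    exactly one policy version, so the learner never mixes gradients
    from 'stale' policies."""
    policy_version = 0
    pending_batch = None          # batch collected during the previous step
    trained_versions = []
    for _ in range(num_iters):
        # Actors: every environment replica steps under the current policy.
        new_batch = [(env_id, policy_version) for env_id in range(num_envs)]
        # Learner (concurrent in the real system): update on the batch
        # collected last step, then publish the new policy version.
        if pending_batch is not None:
            trained_versions.append(pending_batch[0][1])
            policy_version += 1   # one synchronized, deterministic update
        pending_batch = new_batch
    return trained_versions
```

The one-step pipeline delay is visible in the output: `hts_rl_pipeline(5)` trains on policy versions `[0, 0, 1, 2]`. Determinism in the real system would additionally require fixed per-replica seeding, which this sketch does not model.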