Collaborating Authors: Booth, Joe


Realistic Physics Based Character Controller

arXiv.org Artificial Intelligence

Over the last several years there has been strong interest in applying modern optimal control techniques to character animation. This interest has been fueled by the introduction of efficient learning-based algorithms for policy optimization, growth in computational power, and improvements in game engines. It has been shown that natural-looking character control can be generated from two ingredients: first, the simulated agent must adhere to a motion capture dataset; second, the character must track the control input from the user. This paper aims to close the gap between researchers and users by introducing an open source implementation of physics-based character control in the Unity framework with a low barrier to entry and a gentle learning curve.
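
The "two ingredients" above are typically combined as a weighted reward: an imitation term that keeps the simulated character close to the motion capture reference, and a tracking term that rewards following the user's command. A minimal Python sketch, with illustrative function names, weights, and error scales (none taken from the paper):

```python
import numpy as np

def control_reward(pose, ref_pose, velocity, target_velocity,
                   w_imitate=0.7, w_track=0.3):
    """Weighted sum of an imitation term (match the motion capture
    reference pose) and a tracking term (follow the user's velocity
    command). Weights and scales here are illustrative only."""
    pose_error = float(np.sum((pose - ref_pose) ** 2))
    vel_error = float(np.sum((velocity - target_velocity) ** 2))
    r_imitate = np.exp(-2.0 * pose_error)  # 1 when the pose matches the clip
    r_track = np.exp(-1.0 * vel_error)     # 1 when the command is followed
    return w_imitate * r_imitate + w_track * r_track
```

Exponentiating the squared errors keeps each term bounded in (0, 1], so the two objectives can be traded off with the weights alone.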


PPO Dash: Improving Generalization in Deep Reinforcement Learning

arXiv.org Artificial Intelligence

Deep reinforcement learning is prone to overfitting, and traditional benchmarks such as the Atari 2600 suite can exacerbate this problem. The Obstacle Tower Challenge addresses this by using randomized environments and separate seeds for training, validation, and test runs. This paper examines a set of improvements and best practices for the PPO algorithm, using the Obstacle Tower Challenge to empirically study their impact on generalization. Our experiments show that the combination of these improvements provides state-of-the-art performance on the Obstacle Tower Challenge.
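
For reference, the clipped surrogate objective at the core of PPO, which the improvements studied here build on, can be sketched in a few lines of PyTorch. This is the standard loss only; the paper's specific modifications and hyperparameters are not reproduced, and clip_eps=0.2 is the common default rather than a value from the paper:

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate loss (negated for minimization)."""
    ratio = torch.exp(new_log_probs - old_log_probs)  # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic bound, then negate so an optimizer can minimize it.
    return -torch.min(unclipped, clipped).mean()
```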


Marathon Environments: Multi-Agent Continuous Control Benchmarks in a Modern Video Game Engine

arXiv.org Artificial Intelligence

Recent advances in deep reinforcement learning for locomotion via continuous control have raised game developers' interest in the potential of digital actors driven by active ragdolls. Currently, the available options for developing these ideas are either researchers' limited codebases or proprietary closed systems. We present Marathon Environments, a suite of open source continuous control benchmarks implemented in the Unity game engine using the Unity ML-Agents Toolkit. We demonstrate through these benchmarks that continuous control research transfers to a commercial game engine. Furthermore, we exhibit the robustness of these environments by reproducing advanced continuous control research, such as learning to walk, run, and backflip from motion capture data and learning to navigate complex terrains, and by implementing a video game input control system. We show further robustness by training with alternative algorithms from OpenAI Baselines. Finally, we share strategies for significantly reducing training time.
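
As an illustration of how such environments are driven from Python, the sketch below steps a built Unity binary with random actions through the ML-Agents low-level API. The file name is a placeholder, and the exact API surface varies across ML-Agents releases; this assumes a recent mlagents_envs with ActionTuple-style actions:

```python
from mlagents_envs.environment import UnityEnvironment

# "MarathonEnvs" is a placeholder for the path to a built Unity binary.
env = UnityEnvironment(file_name="MarathonEnvs")
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Random actions stand in for a trained policy in this sketch.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```

In practice one would train a policy with the ml-agents trainers or an external algorithm (the paper reproduces results with OpenAI Baselines) rather than sampling random actions.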