The Reinforcement Learning Competitions

AI Magazine

In these events, researchers from around the world developed reinforcement learning agents to compete in domains of varying complexity and difficulty. We focus on the 2008 competition, which employed fundamentally redesigned evaluation frameworks that aimed to systematically encourage the submission of robust learning methods. We describe the unique challenges of empirical evaluation in reinforcement learning and briefly review the history of the previous competitions and the evaluation frameworks they employed. We describe the novel frameworks developed for the 2008 competition as well as the software infrastructure on which they rely. Furthermore, we describe the six competition domains, present selected competition results, and discuss the implications of these results.


Report on the 2008 Reinforcement Learning Competition

AI Magazine

This article reports on the 2008 Reinforcement Learning Competition, which began in November 2007 and ended with a workshop at the International Conference on Machine Learning (ICML) in July 2008 in Helsinki, Finland. Researchers from around the world developed reinforcement learning agents to compete in six problems of varying complexity and difficulty. The competition employed fundamentally redesigned evaluation frameworks that, unlike those in previous competitions, aimed to systematically encourage the submission of robust learning methods. We describe the unique challenges of empirical evaluation in reinforcement learning and briefly review the history of the previous competitions and the evaluation frameworks they employed. We also describe the novel frameworks developed for the 2008 competition as well as the software infrastructure on which they rely. Furthermore, we describe the six competition domains and present a summary of selected competition results. Finally, we discuss the implications of these results and outline ideas for the future of the competition.


The Reinforcement Learning Competition 2014

AI Magazine

Reinforcement learning (RL) is one of the most general problems in artificial intelligence. It has been used to model problems in automated experiment design, control, economics, game playing, scheduling, and telecommunications. The aim of the reinforcement learning competition is to encourage the development of very general learning agents for arbitrary reinforcement learning problems and to provide a test bed for the unbiased evaluation of algorithms. An agent takes actions in an unknown environment, observes their effects, and obtains rewards. The agent's aim is to learn how the environment works in order to maximize the total reward obtained during its lifetime. RL problems are quite general.
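The agent–environment interaction described above can be sketched in a few lines. This is a minimal illustration, not any competition domain: it assumes a hypothetical two-armed bandit environment and a simple epsilon-greedy agent that estimates the value of each action from the rewards it observes.

```python
import random

# Hypothetical two-armed bandit: each arm pays reward 1 with a fixed
# (unknown to the agent) success probability, otherwise 0.
class Bandit:
    def __init__(self, probs):
        self.probs = probs

    def step(self, action):
        return 1.0 if random.random() < self.probs[action] else 0.0

def run(steps=1000, epsilon=0.1, seed=0):
    random.seed(seed)
    env = Bandit([0.3, 0.8])
    values = [0.0, 0.0]  # running estimate of each arm's expected reward
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        # Epsilon-greedy: explore a random arm occasionally,
        # otherwise exploit the arm currently estimated best.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: values[a])
        reward = env.step(action)
        # Incremental average update of the chosen arm's value estimate.
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
        total += reward
    return values, total

values, total = run()
```

After enough steps the agent's value estimates rank the better arm above the worse one, so the greedy choices concentrate on it and the accumulated reward approaches the best arm's expected payoff.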


Towards robust and domain agnostic reinforcement learning competitions

arXiv.org Machine Learning

Reinforcement learning competitions have formed the basis for standard research benchmarks, galvanized advances in the state of the art, and shaped the direction of the field. Despite this, a majority of challenges suffer from the same fundamental problems: participant solutions to the posed challenge are usually domain-specific, biased to maximally exploit compute resources, and not guaranteed to be reproducible. In this paper, we present a new framework of competition design that promotes the development of algorithms that overcome these barriers. We propose four central mechanisms for achieving this end: submission retraining, domain randomization, desemantization through domain obfuscation, and the limitation of competition compute and environment-sample budget. To demonstrate the efficacy of this design, we proposed, organized, and ran the MineRL 2020 Competition on Sample-Efficient Reinforcement Learning. In this work, we describe the organizational outcomes of the competition and show that the resulting participant submissions are reproducible, non-specific to the competition environment, and sample- and resource-efficient, despite the difficult competition task.
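One of the four mechanisms above, desemantization through domain obfuscation, can be sketched as an observation wrapper. This is a hedged illustration under assumed names (the field names and wrapper shape here are hypothetical, not the MineRL API): semantic observation keys are replaced with opaque indices under a seeded random permutation, so an agent cannot hard-code knowledge tied to a particular field's meaning.

```python
import random

# Sketch of observation obfuscation: map each semantic observation key
# to an anonymous integer index chosen by a seeded random permutation.
# Field names below are hypothetical examples, not a real environment's.
def make_obfuscator(field_names, seed):
    rng = random.Random(seed)
    shuffled = list(field_names)
    rng.shuffle(shuffled)
    # Each semantic key gets an opaque index the agent sees instead.
    mapping = {name: i for i, name in enumerate(shuffled)}

    def obfuscate(observation):
        # Replace every named field with its anonymous index.
        return {mapping[k]: v for k, v in observation.items()}

    return obfuscate

obf = make_obfuscator(["health", "position", "inventory"], seed=42)
anon = obf({"health": 20, "position": (1, 2), "inventory": []})
```

Because the permutation is fixed per seed, the same environment stays self-consistent across an episode while different seeds (e.g. between development and evaluation) prevent solutions from keying on specific field names.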