Collaborating Authors: Tanner, Brian


Evaluating Agents using Social Choice Theory

arXiv.org Artificial Intelligence

We argue that many general evaluation problems can be viewed through the lens of voting theory. Each task is interpreted as a separate voter, which requires only ordinal rankings or pairwise comparisons of agents to produce an overall evaluation. By viewing the aggregator as a social welfare function, we are able to leverage centuries of research in social choice theory to derive principled evaluation frameworks with axiomatic foundations. These evaluations are interpretable and flexible, while avoiding many of the problems currently facing cross-task evaluation. We apply this Voting-as-Evaluation (VasE) framework across multiple settings, including reinforcement learning, large language models, and humans. In practice, we observe that VasE can be more robust than popular evaluation frameworks (Elo and Nash averaging), discovers properties in the evaluation data not evident from scores alone, and can predict outcomes better than Elo in a complex seven-player game. We identify one particular approach, maximal lotteries, that satisfies important consistency properties relevant to evaluation, is computationally efficient (polynomial in the size of the evaluation data), and identifies game-theoretic cycles.
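
To make the aggregation step concrete, the following is a minimal sketch (assumptions of this write-up, not the authors' released code: the `maximal_lottery` helper, the toy win matrix, and the use of scipy's linear-programming solver are all illustrative) of how a maximal lottery can be computed from pairwise comparison counts, by solving the symmetric zero-sum game defined by the margin matrix:

```python
# Minimal sketch of a maximal-lottery computation (illustrative only).
# Pairwise win counts are turned into a skew-symmetric margin matrix, and a
# maximin strategy of the induced zero-sum game is found with a linear program.
import numpy as np
from scipy.optimize import linprog


def maximal_lottery(wins: np.ndarray) -> np.ndarray:
    """wins[i, j] = number of tasks/voters that rank agent i above agent j.
    Returns a probability distribution over agents (a maximal lottery)."""
    n = wins.shape[0]
    margins = wins - wins.T                      # skew-symmetric margin matrix M
    # LP: maximize t subject to (M^T p)_j >= t for all j, sum(p) = 1, p >= 0.
    # Variables x = (p_1, ..., p_n, t); linprog minimizes, so the objective is -t.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-margins.T, np.ones((n, 1))])   # t - (M^T p)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * n + [(None, None)]       # p >= 0, t unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]


# Toy cycle A > B > C > A: the lottery spreads probability over all three agents
# instead of forcing a single winner.
wins = np.array([[0, 2, 0],
                 [0, 0, 2],
                 [2, 0, 0]])
print(maximal_lottery(wins).round(3))   # approximately [0.333, 0.333, 0.333]
```

On a three-agent cycle the lottery spreads probability across the agents rather than declaring an arbitrary winner, which is the cycle-identifying behavior noted in the abstract.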


Reward-Respecting Subtasks for Model-Based Reinforcement Learning

arXiv.org Artificial Intelligence

To achieve the ambitious goals of artificial intelligence, reinforcement learning must include planning with a model of the world that is abstract in state and time. Deep learning has made progress in state abstraction, but, although the theory of time abstraction has been extensively developed based on the options framework, in practice options have rarely been used in planning. One reason for this is that the space of possible options is immense and the methods previously proposed for option discovery do not take into account how the option models will be used in planning. Options are typically discovered by posing subsidiary tasks such as reaching a bottleneck state or maximizing a sensory signal other than the reward. Each subtask is solved to produce an option, and then a model of the option is learned and made available to the planning process. The subtasks proposed in most previous work ignore the reward on the original problem, whereas we propose subtasks that use the original reward plus a bonus based on a feature of the state at the time the option stops. We show that options and option models obtained from such reward-respecting subtasks are much more likely to be useful in planning and can be learned online and off-policy using existing learning algorithms. Reward-respecting subtasks strongly constrain the space of options and thereby also provide a partial solution to the problem of option discovery. Finally, we show how the algorithms for learning values, policies, options, and models can be unified using general value functions.
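
To make the subtask construction concrete, here is a minimal sketch (an illustration under assumptions, not the paper's algorithm: the `subtask_return` helper, the `bonus_weight` parameter, and the exact form of the stopping bonus are hypothetical) of the return such a subtask optimizes: the original reward accumulated while the option runs, plus a bonus tied to a feature of the state in which the option stops:

```python
# Minimal sketch of the objective a reward-respecting subtask optimizes
# (illustrative assumptions only): the option accumulates the original
# discounted reward while it runs and, when it stops, receives a bonus
# proportional to a chosen feature of the stop state.
from typing import Callable, Sequence


def subtask_return(
    rewards: Sequence[float],                 # original rewards while the option runs
    stop_state: object,                       # state in which the option stops
    feature: Callable[[object], float],       # the state feature this subtask targets
    bonus_weight: float = 1.0,                # hypothetical weight on the stopping bonus
    gamma: float = 0.99,                      # discount factor
) -> float:
    g, discount = 0.0, 1.0
    for r in rewards:                         # the original reward is respected throughout
        g += discount * r
        discount *= gamma
    # Stopping bonus: pulls the option toward states where the feature is high,
    # without discarding the reward accumulated along the way.
    return g + discount * bonus_weight * feature(stop_state)


# Example: an option that stops in a state where the targeted feature equals 0.8.
print(subtask_return(rewards=[0.0, 0.0, 1.0],
                     stop_state={"feature": 0.8},
                     feature=lambda s: s["feature"]))
```

The weighting between reward and stopping bonus is what keeps the subtask "reward-respecting": the option is drawn toward high-feature stop states only to the extent that this does not sacrifice too much of the original reward.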


Report on the 2008 Reinforcement Learning Competition

AI Magazine

This article reports on the 2008 Reinforcement Learning Competition, which began in November 2007 and ended with a workshop at the International Conference on Machine Learning (ICML) in July 2008 in Helsinki, Finland. Researchers from around the world developed reinforcement learning agents to compete in six problems of varying complexity and difficulty. The competition employed fundamentally redesigned evaluation frameworks that, unlike those in previous competitions, aimed to systematically encourage the submission of robust learning methods. We describe the unique challenges of empirical evaluation in reinforcement learning and briefly review the history of the previous competitions and the evaluation frameworks they employed. We also describe the novel frameworks developed for the 2008 competition as well as the software infrastructure on which they rely. Furthermore, we describe the six competition domains and present a summary of selected competition results. Finally, we discuss the implications of these results and outline ideas for the future of the competition.