A Geometric Perspective on Optimal Representations for Reinforcement Learning
Marc G. Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taiga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, Clare Lyle
This paper proposes a new approach to representation learning based on geometric properties of the space of value functions. We study a two-part approximation of the value function: a nonlinear map from states to vectors, or representation, followed by a linear map from vectors to values. Our formulation considers adapting the representation to minimize the (linear) approximation error of the value functions of all stationary policies for a given environment. We show that this optimization reduces to making accurate predictions regarding a special class of value functions which we call adversarial value functions (AVFs). We argue that these AVFs make excellent auxiliary tasks, and use them to construct a loss which can be efficiently minimized to find a near-optimal representation for reinforcement learning. We highlight characteristics of the method in a series of experiments on the four-room domain.
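The two-part approximation described above can be sketched in a few lines: a nonlinear map from states to feature vectors (the representation), followed by a linear map fit to the value functions of several stationary policies. This is a minimal illustrative sketch, not the paper's method; the toy environment, the fixed random-feature map, and the random value targets standing in for V^π are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, d = 20, 4  # number of states, representation dimension


def representation(states):
    # Stand-in nonlinear map phi: states -> R^d (a fixed random feature
    # expansion here; in the paper this map is what gets learned).
    return np.tanh(states[:, None] * rng.standard_normal((1, d)))


phi = representation(np.arange(n_states, dtype=float))  # (n_states, d)

# Random targets standing in for the value functions V^pi of three
# stationary policies (one column per policy).
V = rng.standard_normal((n_states, 3))

# The linear part: least-squares weights mapping features to each V^pi,
# i.e. the best linear approximation of all three value functions
# under this fixed representation.
w, *_ = np.linalg.lstsq(phi, V, rcond=None)

# The quantity the paper's formulation would adapt phi to reduce.
approx_error = np.linalg.norm(phi @ w - V)
```

Adapting `representation` to shrink `approx_error` across many policies is the optimization the abstract reduces to predicting adversarial value functions.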
Jan-31-2019