O$^2$TD: (Near)-Optimal Off-Policy TD Learning
Bo Liu, Daoming Lyu, Wen Dong, Saad Biaz
Temporal Difference (TD) learning and Residual Gradient (RG) methods are the most widely used TD-based learning algorithms; however, it has been shown that neither of their objective functions is optimal with respect to approximating the true value function V. This paper proposes two novel algorithms to approximate the true value function V, making the following contributions:
- A batch algorithm that finds the approximately optimal off-policy prediction of the true value function V.
- A near-optimal algorithm with linear computational cost per step that can learn from a collection of off-policy samples.
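For context, the baseline these methods build on is semi-gradient TD(0) with linear value approximation, whose update also costs linear time per step in the feature dimension. The sketch below is the classic TD(0) update, not the O$^2$TD algorithm from the paper; the feature construction and step sizes are illustrative assumptions.

```python
import numpy as np

def td0_linear(transitions, d, alpha=0.1, gamma=0.9):
    """Semi-gradient TD(0) with linear approximation V(s) ~ w . phi(s).

    transitions: iterable of (phi_s, reward, phi_s_next) feature tuples.
    This is the standard baseline update, not the paper's O^2TD method.
    """
    w = np.zeros(d)
    for phi, r, phi_next in transitions:
        # TD error: bootstrapped target minus current estimate.
        td_error = r + gamma * np.dot(w, phi_next) - np.dot(w, phi)
        # Update costs O(d) per step: linear in the feature dimension.
        w += alpha * td_error * phi
    return w

# Tiny two-state chain with one-hot features (illustrative data).
phi_a, phi_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
transitions = [(phi_a, 1.0, phi_b), (phi_b, 0.0, phi_a)] * 200
w = td0_linear(transitions, d=2)
```

On this chain the fixed point is V(A) = 1/(1 - 0.81) and V(B) = 0.9 V(A), so the learned weights drift toward those values; the paper's point is that the objective this update implicitly optimizes is not the one minimizing error against the true V.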
Apr-19-2017