Finite-Time Analysis of Adaptive Temporal Difference Learning with Deep Neural Networks
Temporal difference (TD) learning with function approximation (linear functions or neural networks) has achieved remarkable empirical success, spurring the development of finite-time analyses. As an accelerated variant of TD, adaptive TD has been proposed and proven to enjoy finite-time convergence under linear function approximation, and existing numerical results have demonstrated the superiority of adaptive algorithms over their vanilla counterparts. Nevertheless, the performance guarantees of adaptive TD with neural network approximation remain largely unknown. This paper establishes a finite-time analysis of adaptive TD with multi-layer ReLU network approximation, with samples generated from a Markov decision process. Our theory shows that if the width of the deep neural network is large enough, adaptive TD with neural network approximation can find the (optimal) value function with high probability, with the same iteration complexity as vanilla TD in the general case. Furthermore, we show that adaptive TD with neural network approximation, with the same width and search region, achieves a theoretical acceleration when the stochastic semi-gradients decay quickly.
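The abstract does not spell out the update rule, so the following is a minimal sketch of what an "adaptive TD" step can look like, assuming an AdaGrad-style per-coordinate step size applied to the TD(0) semi-gradient of a ReLU value network. A single hidden layer stands in for the paper's multi-layer network; all sizes and hyperparameters are illustrative, not the paper's exact setup.

```python
# Minimal sketch of an adaptive TD(0) update, assuming an AdaGrad-style
# per-coordinate step size on the semi-gradient of a one-hidden-layer
# ReLU value network. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, width, gamma = 4, 64, 0.99                           # state dim, hidden width, discount
W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (width, d))      # hidden layer weights
w2 = rng.normal(0.0, 1.0 / np.sqrt(width), width)       # output layer weights
G = {"W1": np.zeros_like(W1), "w2": np.zeros_like(w2)}  # AdaGrad accumulators
eta, eps = 0.1, 1e-8                                    # base step size, stabilizer

def value(s):
    """Value estimate V(s) and the hidden ReLU activations."""
    h = np.maximum(W1 @ s, 0.0)
    return w2 @ h, h

def adaptive_td_step(s, r, s_next):
    """One adaptive TD(0) update on the transition (s, r, s_next)."""
    global W1, w2
    v, h = value(s)
    v_next, _ = value(s_next)
    delta = r + gamma * v_next - v                # TD error
    # Semi-gradient: the bootstrap target r + gamma * V(s_next) is frozen.
    g_w2 = -delta * h
    g_W1 = -delta * np.outer(w2 * (h > 0), s)
    # AdaGrad scaling: coordinates with larger accumulated gradients take smaller steps.
    G["w2"] += g_w2 ** 2
    G["W1"] += g_W1 ** 2
    w2 -= eta * g_w2 / (np.sqrt(G["w2"]) + eps)
    W1 -= eta * g_W1 / (np.sqrt(G["W1"]) + eps)
    return delta

# Exercise the update on one synthetic transition.
s, s_next = rng.normal(size=d), rng.normal(size=d)
print("TD error:", adaptive_td_step(s, r=1.0, s_next=s_next))
```

The semi-gradient treats the bootstrap target as a constant, which is what distinguishes TD from residual-gradient methods; the AdaGrad denominator is one plausible realization of the "adaptive" step size the abstract refers to.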
Reviews: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
The authors propose a novel method that adaptively chooses between the Monte Carlo (MC) method and the temporal-difference (TD) method for policy evaluation. The authors aim to balance bias and variance in the reinforcement learning setting and, to this end, propose the Adaptive TD algorithm. The algorithm takes as input a set of sample episodes, which it uses to bootstrap confidence intervals for the value function of each state. It then compares the TD estimate for each of these states with its confidence interval and keeps the TD estimate if it falls inside; otherwise, it picks the midpoint of the confidence interval, on the assumption that the TD estimate is biased and inaccurate. The process repeats for a number of epochs, since the TD estimates change as the value-function estimate for the successor state is updated by the adaptive-TD rule. I think this paper shows promise: the method is, to my knowledge, original, and the numerical experiments suggest it achieves the goal the authors set for it, namely dominating TD and MC in the worst case.
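To make the reviewed procedure concrete, here is a minimal tabular sketch based solely on the review's description: bootstrap confidence intervals from per-state MC returns, run TD, and replace any per-state TD estimate that leaves its interval with the interval's midpoint. The data layout (episodes as lists of (state, reward, next_state) tuples), function names, and hyperparameters are assumptions for illustration, not the paper's exact algorithm.

```python
# Tabular sketch of the Adaptive TD procedure as described in the review.
# All names, data layouts, and hyperparameters are illustrative assumptions.
from collections import defaultdict
import numpy as np

def bootstrap_intervals(returns_by_state, n_boot=200, level=0.9, rng=None):
    """Per-state (lo, hi) bootstrap confidence interval on the mean MC return."""
    rng = rng or np.random.default_rng(0)
    intervals = {}
    for s, rets in returns_by_state.items():
        rets = np.asarray(rets, dtype=float)
        means = [rng.choice(rets, size=len(rets), replace=True).mean()
                 for _ in range(n_boot)]
        lo, hi = np.quantile(means, [(1 - level) / 2, (1 + level) / 2])
        intervals[s] = (lo, hi)
    return intervals

def adaptive_td(episodes, intervals, gamma=0.99, lr=0.1, n_epochs=10):
    """Tabular TD(0) whose per-state estimates are vetted against the intervals."""
    V = defaultdict(float)
    for _ in range(n_epochs):            # repeat: targets move as V is updated
        for episode in episodes:         # episode: [(state, reward, next_state), ...]
            for s, r, s_next in episode:
                V[s] += lr * (r + gamma * V[s_next] - V[s])
        for s, (lo, hi) in intervals.items():
            if not lo <= V[s] <= hi:     # TD estimate deemed biased here:
                V[s] = 0.5 * (lo + hi)   # fall back to the MC interval midpoint
    return dict(V)
```

Here returns_by_state would hold the discounted returns observed from each state across the sample episodes. Replacing out-of-interval estimates with the midpoint is the fallback the review describes, and the confidence level controls how aggressively the method distrusts TD in favor of MC.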