
Collaborating Authors

td-learning



Finite-Sample Analysis of Off-Policy TD-Learning via Generalized Bellman Operators

Neural Information Processing Systems

In TD-learning, off-policy sampling is more practical than on-policy sampling: by decoupling learning from data collection, it enables data reuse. It is known that policy evaluation has the interpretation of solving a generalized Bellman equation. In this paper, we derive finite-sample bounds for any general off-policy TD-like stochastic approximation algorithm that solves for the fixed point of this generalized Bellman operator. Our key step is to show that the generalized Bellman operator is simultaneously a contraction mapping with respect to a weighted $\ell_p$-norm for each $p$ in $[1,\infty)$, with a common contraction factor. Off-policy TD-learning is known to suffer from high variance due to the product of importance sampling ratios.
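To make the off-policy setting concrete, here is a minimal tabular sketch of TD(0) with per-step importance-sampling corrections, where each update is reweighted by the ratio of target-policy to behavior-policy probabilities. This is an illustrative textbook variant, not the generalized-operator algorithm analyzed in the paper; all names below are hypothetical.

```python
import numpy as np

def off_policy_td0(transitions, pi, mu, num_states, alpha=0.1, gamma=0.9):
    """Tabular off-policy TD(0) with per-step importance sampling.

    transitions: iterable of (s, a, r, s_next) collected under behavior policy mu.
    pi, mu: arrays of shape (num_states, num_actions) of action probabilities.
    """
    V = np.zeros(num_states)
    for s, a, r, s_next in transitions:
        rho = pi[s, a] / mu[s, a]            # importance sampling ratio
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * rho * td_error       # IS-corrected TD update
    return V
```

Products of such ratios over multiple steps (as in multi-step methods) are the variance source the abstract refers to: a single large `rho` is already noisy, and their product can blow up.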



Review 1

Neural Information Processing Systems

See the detailed reasons below. In RL, it is widely known that the average-reward setting is more challenging to study. On top of that, coupling it with the multi-agent setting brings additional challenges. Specifically, as shown in Appendix A.2 of the paper, our average-reward problem captures certain NP-hard instances. Similar complexity results can be found in [Blondel and Tsitsiklis 2000].




Compute-Optimal Scaling for Value-Based Deep RL

Fu, Preston, Rybkin, Oleh, Zhou, Zhiyuan, Nauman, Michal, Abbeel, Pieter, Levine, Sergey, Kumar, Aviral

arXiv.org Artificial Intelligence

As models grow larger and training them becomes expensive, it becomes increasingly important to scale training recipes not just to larger models and more data, but to do so in a compute-optimal manner that extracts maximal performance per unit of compute. While such scaling has been well studied for language modeling, reinforcement learning (RL) has received less attention in this regard. In this paper, we investigate compute scaling for online, value-based deep RL. These methods present two primary axes for compute allocation: model capacity and the update-to-data (UTD) ratio. Given a fixed compute budget, we ask: how should resources be partitioned across these axes to maximize sample efficiency? Our analysis reveals a nuanced interplay between model size, batch size, and UTD. In particular, we identify a phenomenon we call TD-overfitting: increasing the batch size quickly harms Q-function accuracy for small models, but this effect is absent in large models, enabling effective use of large batch sizes at scale. We provide a mental model for understanding this phenomenon and build guidelines for choosing batch size and UTD to optimize compute usage. Our findings provide a grounded starting point for compute-optimal scaling in deep RL, mirroring studies in supervised learning but adapted to TD learning.
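As a rough illustration of the UTD axis: in online value-based RL, for each environment step collected, the learner performs `utd` gradient updates on sampled minibatches, so total compute scales with both the number of environment steps and the UTD ratio. The skeleton below is a generic sketch under that assumption; the function names (`env_step`, `td_update`) are hypothetical placeholders, not from the paper.

```python
import random
from collections import deque

def train_loop(env_step, td_update, num_env_steps,
               utd=4, batch_size=32, buffer_size=10_000):
    """Generic value-based loop: one env step, then `utd` TD updates.

    env_step() -> a transition tuple; td_update(batch) performs one
    gradient step. Total updates ~= num_env_steps * utd.
    """
    buffer = deque(maxlen=buffer_size)
    updates = 0
    for _ in range(num_env_steps):
        buffer.append(env_step())               # data collection
        for _ in range(utd):                    # UTD gradient updates
            if len(buffer) >= batch_size:
                batch = random.sample(list(buffer), batch_size)
                td_update(batch)
                updates += 1
    return updates
```

Under a fixed compute budget, raising `utd` or `batch_size` both increase per-step compute; the paper's question is how to trade these off against model capacity.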


