- North America > United States > California (0.14)
- Asia > Middle East > Jordan (0.04)
- Asia > China (0.04)
- North America > Canada (0.04)
No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions
Tiancheng Jin, Junyan Liu, Chloé Rouyer, William Chang, Chen-Yu Wei, Haipeng Luo
Existing online learning algorithms for adversarial Markov Decision Processes achieve $O(\sqrt{T})$ regret after $T$ rounds of interaction even if the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because adversarial transition functions have been shown to make no-regret learning impossible. Despite such impossibility results, in this work we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret increasing smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys $\widetilde{O}(\sqrt{T} + C^{\textsf{P}})$ regret, where $C^{\textsf{P}}$ measures how adversarial the transition functions are and can be at most $O(T)$. While this algorithm itself requires knowledge of $C^{\textsf{P}}$, we further develop a black-box reduction approach that removes this requirement. Moreover, we show that further refinements of the algorithm not only maintain the same regret bound but also simultaneously adapt to easier environments (where losses are generated in a certain stochastically constrained manner, as in Jin et al. [2021]), achieving $\widetilde{O}(U + \sqrt{UC^{\textsf{L}}} + C^{\textsf{P}})$ regret, where $U$ is a standard gap-dependent coefficient and $C^{\textsf{L}}$ is the amount of corruption on losses.
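The abstract only characterizes $C^{\textsf{P}}$ loosely as a measure of how adversarial the transitions are. A minimal sketch of one natural way to quantify such a corruption budget is below: the total-variation deviation of each episode's transition kernel from a fixed nominal kernel, summed over episodes. The function name, array interface, and max-over-state-action aggregation are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def transition_corruption(nominal_P, episode_Ps):
    """Illustrative corruption budget in the spirit of C^P.

    nominal_P:  array of shape (S, A, S), the fixed nominal transition kernel.
    episode_Ps: iterable of (S, A, S) arrays, the kernel actually used in
                each episode.

    Returns the sum over episodes of the worst-case (over state-action
    pairs) total-variation distance to the nominal kernel. With T episodes
    this quantity lies in [0, T], matching the abstract's O(T) ceiling.
    """
    c_p = 0.0
    for P_t in episode_Ps:
        # TV distance per (state, action): half the L1 distance over next states.
        tv = 0.5 * np.abs(P_t - nominal_P).sum(axis=-1)  # shape (S, A)
        c_p += tv.max()
    return c_p
```

Under this reading, a fixed transition function gives $C^{\textsf{P}} = 0$ and recovers the classical $\widetilde{O}(\sqrt{T})$ regime, while a fully adversarial one pushes the budget, and hence the regret bound, up to $O(T)$.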
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- Research Report (0.64)
- Instructional Material > Online (0.40)
Variance Reduced Advantage Estimation with $\delta$ Hindsight Credit Assignment
Hindsight Credit Assignment (HCA) refers to a recently proposed family of methods for producing more efficient credit assignment in reinforcement learning. These methods work by explicitly estimating the probability that certain actions were taken in the past given present information. Prior work has studied the properties of such methods and demonstrated their behaviour empirically. We extend this work by introducing a particular HCA algorithm that has provably lower variance than the conventional Monte-Carlo estimator when the necessary functions can be estimated exactly. This result provides a strong theoretical basis for the broader usefulness of HCA.
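To give the flavor of the approach, a minimal sketch of the return-conditioned HCA estimator from the original HCA paper (Harutyunyan et al., 2019) follows, assuming the hindsight distribution $h(a \mid x, Z)$ has already been estimated. The function name and array interface are illustrative, and the $\delta$-HCA estimator analyzed in this paper may differ in its exact form.

```python
import numpy as np

def hca_advantage(pi_a, h_a, returns):
    """Return-conditioned HCA advantage estimate (Harutyunyan et al., 2019):

        A(x, a) ~= (1 - pi(a|x) / h(a|x, Z)) * Z,

    where Z is the observed return and h(a|x, Z) is the hindsight probability
    of having taken action a given that Z was observed. When h is estimated
    exactly, reweighting by pi/h can yield lower variance than the plain
    Monte-Carlo estimate A(x, a) ~= Z - V(x).

    pi_a, h_a, returns: arrays of per-sample policy probabilities, hindsight
    probabilities, and returns, all of the same shape.
    """
    pi_a, h_a, returns = map(np.asarray, (pi_a, h_a, returns))
    return (1.0 - pi_a / h_a) * returns
```

Intuitively, when the return carries no information about the action ($h = \pi$), the estimate collapses to zero, so no spurious credit is assigned; the more the return distinguishes the action, the more credit flows to it.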
- Asia > Middle East > Jordan (0.04)
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)