TD error
- North America > Canada > Ontario > Toronto (0.14)
- Africa > Mali (0.04)
- Asia > Middle East > Jordan (0.04)
- Research Report > New Finding (0.94)
- Research Report > Experimental Study (0.67)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > Canada (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
Prioritizing Samples in Reinforcement Learning with Reducible Loss
Most reinforcement learning algorithms take advantage of an experience replay buffer to repeatedly train on samples the agent has observed in the past. Not all samples are equally significant, and simply assigning equal importance to each of them is a naive strategy. In this paper, we propose a method to prioritize samples based on how much can be learned from each one.
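The "reducible loss" idea lends itself to a short sketch: prioritize each transition by how much of its loss the learner can still remove, approximated here as the gap between the online model's loss and a slower reference model's loss on the same sample. This is a minimal reading of the abstract, not the paper's exact method; the ReducibleLossBuffer name, the clipping at zero and the eps floor are all illustrative assumptions.

```python
import numpy as np

class ReducibleLossBuffer:
    """Toy replay buffer that prioritizes by reducible loss: the gap
    between the online model's loss and a reference model's loss.
    Illustrative sketch only, not the paper's implementation."""

    def __init__(self, capacity, eps=1e-3):
        self.capacity = capacity
        self.eps = eps            # floor keeps every priority positive
        self.samples, self.priorities = [], []

    def add(self, transition, online_loss, reference_loss):
        # Reducible part of the loss: error the online model still makes
        # that the reference does not; noise-dominated samples clip to ~eps.
        priority = max(online_loss - reference_loss, 0.0) + self.eps
        if len(self.samples) >= self.capacity:   # FIFO eviction when full
            self.samples.pop(0)
            self.priorities.pop(0)
        self.samples.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Sample transitions in proportion to their reducible-loss priority.
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = np.random.choice(len(self.samples), size=batch_size, p=p)
        return [self.samples[i] for i in idx], idx
```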
- North America > Canada > Quebec > Montreal (0.04)
- North America > Puerto Rico > San Juan > San Juan (0.04)
- South America > Brazil > Pernambuco (0.04)
R-learning in actor-critic model offers a biologically relevant mechanism for sequential decision-making
A few studies have explored sequential stay-or-leave decisions in humans or rodents - the model organism used to access neuronal activity at high resolution. In both cases, decision patterns were collected in foraging tasks - the experimental settings where subjects decide when to leave depleting resources (2).
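The abstract's central quantity, an average-reward (R-learning style) TD error driving an actor-critic, can be written out concretely. The step below is a generic average-reward actor-critic update, not the paper's specific model; the tabular V/H representation and the learning rates are assumptions for illustration.

```python
import numpy as np

def average_reward_actor_critic_step(V, H, rho, s, a, r, s_next,
                                     alpha_v=0.1, alpha_pi=0.05,
                                     alpha_rho=0.01):
    """One step of an average-reward (R-learning) actor-critic.
    V: state values; H: action preferences (policy = softmax of H[s]);
    rho: running estimate of the average reward per step."""
    # The TD error is relative to the average reward; no discount factor.
    delta = r - rho + V[s_next] - V[s]
    V[s] += alpha_v * delta        # critic update
    H[s, a] += alpha_pi * delta    # actor update (preference ascent)
    rho += alpha_rho * delta       # reward-rate estimate update
    return rho
```

In a foraging setting, delta turning persistently negative as a patch depletes is one way such a model could signal that it is time to leave.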
- Europe > Sweden (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
Regret Minimization Experience Replay in Off-Policy Reinforcement Learning
In reinforcement learning, experience replay stores past samples for further reuse. Prioritized sampling is a promising technique for better utilizing these samples. Previous prioritization criteria include TD error, recency and corrective feedback, and are mostly heuristically designed. In this work, we start from the regret minimization objective and obtain an optimal prioritization strategy for the Bellman update that directly maximizes the return of the policy. The theory suggests that data with higher hindsight TD error, better on-policiness and more accurate Q values should be assigned higher weights during sampling; most previous criteria implement this strategy only partially. We not only provide theoretical justifications for previous criteria, but also propose two new methods to compute the prioritization weight, namely ReMERN and ReMERT. ReMERN learns an error network, while ReMERT exploits the temporal ordering of states. Both methods outperform previous prioritized sampling algorithms on challenging RL benchmarks, including MuJoCo, Atari and Meta-World.
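The three signals the abstract names - hindsight TD error, on-policiness and Q-value accuracy - can be sketched as one sampling weight. The combination below (an importance ratio times the absolute TD error, discounted by estimated Q error) is purely schematic: ReMERN and ReMERT derive their actual weights from the regret-minimization objective, and every coefficient and function name here is an illustrative assumption.

```python
import numpy as np

def illustrative_priority(td_error, log_pi_current, log_pi_behavior,
                          q_error, c=1.0):
    """Schematic combination of the three signals; the exact functional
    form in ReMERN/ReMERT differs and comes from the paper's derivation."""
    hindsight = np.abs(td_error)                              # larger error -> heavier
    on_policiness = np.exp(log_pi_current - log_pi_behavior)  # importance ratio
    q_accuracy = np.exp(-c * np.abs(q_error))                 # accurate Q -> heavier
    return hindsight * on_policiness * q_accuracy

# Turning priorities into sampling probabilities over a small buffer:
priorities = illustrative_priority(
    td_error=np.array([0.5, 2.0, 0.1]),
    log_pi_current=np.array([-0.2, -1.0, -0.1]),
    log_pi_behavior=np.array([-0.5, -0.5, -0.5]),
    q_error=np.array([0.05, 0.4, 0.01]),
)
probs = priorities / priorities.sum()
```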
- North America > United States (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.93)
- Information Technology > Data Science > Data Mining (0.93)
- North America > Canada > Alberta (0.14)
- Asia > China > Jiangsu Province > Nanjing (0.04)