Making RL tractable by learning more informative reward functions: example-based control, meta-learning, and normalized maximum likelihood

AIHub 

[Figure: After the user provides a few examples of desired outcomes, MURAL automatically infers a reward function that takes these examples and the agent's uncertainty about each state into account.]

Although reinforcement learning (RL) has shown success in domains such as robotics, chip placement, and video games, it is usually intractable in its most general form. In particular, deciding when and how to visit new states in the hope of learning more about the environment can be challenging, especially when the reward signal is uninformative. These questions of reward specification and exploration are closely connected: the more directed and "well-shaped" a reward function is, the easier the problem of exploration becomes. How to explore most effectively is therefore closely informed by the particular choice of how we specify rewards.
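To make the idea of inferring a reward from a few success examples concrete, here is a minimal sketch (hypothetical, not the MURAL implementation): a classifier is trained to distinguish user-provided success states from states the agent has visited, and its predicted probability of success is used as the reward. All names and the toy 2-D task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_success_classifier(success_states, visited_states, lr=0.5, steps=500):
    """Fit logistic regression (success=1, visited=0) by gradient descent
    and return a reward function based on the predicted success probability."""
    X = np.vstack([success_states, visited_states])
    y = np.concatenate([np.ones(len(success_states)),
                        np.zeros(len(visited_states))])
    X = np.hstack([X, np.ones((len(X), 1))])  # append a bias feature
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # mean cross-entropy gradient

    def reward(state):
        s = np.append(state, 1.0)
        return sigmoid(s @ w)  # reward = estimated P(success | state)

    return reward

# Toy 2-D task: success examples cluster near a goal at (1, 1);
# the agent's visited states are spread around the origin.
success = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(50, 2))
visited = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
reward_fn = train_success_classifier(success, visited)

print(reward_fn(np.array([1.0, 1.0])))    # higher reward near the goal
print(reward_fn(np.array([-1.0, -1.0])))  # lower reward far from it
```

A plain classifier like this yields a shaped, dense reward without manual reward engineering, but it says nothing about the agent's uncertainty; the article's point is that accounting for that uncertainty per state is what makes the inferred reward useful for exploration.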
