FoMo Rewards: Can we cast foundation models as reward functions?
Lubana, Ekdeep Singh; Brehmer, Johann; de Haan, Pim; Cohen, Taco
–arXiv.org Artificial Intelligence
We explore the viability of casting foundation models as generic reward functions for reinforcement learning. To this end, we propose a simple pipeline that interfaces an off-the-shelf vision model with a large language model. Specifically, given a trajectory of observations, we infer the likelihood of an instruction describing the task that the user wants an agent to perform. We show that this generic likelihood function exhibits the characteristics ideally expected of a reward function: it associates high values with the desired behaviour and lower values with several similar but incorrect policies. Overall, our work opens the possibility of designing open-ended agents for interactive tasks via foundation models.
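The pipeline the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the real system would use an off-the-shelf vision model to describe observations and a large language model to score the instruction's likelihood; here both are replaced by toy stand-ins (string observations and a unigram-overlap score) so the structure of "likelihood of instruction given trajectory = reward" is visible.

```python
# Hedged sketch of a likelihood-as-reward pipeline.
# The vision model and LLM are replaced by toy placeholders.

import math
from typing import List


def caption_frames(trajectory: List[str]) -> List[str]:
    # Placeholder for an off-the-shelf vision model: in practice each
    # observation (an image) would be captioned; here observations are
    # already text, so this is the identity.
    return trajectory


def instruction_log_likelihood(instruction: str, captions: List[str]) -> float:
    # Placeholder for an LLM scoring log p(instruction | captions).
    # Toy model: mean unigram overlap between instruction and captions,
    # mapped to a log-score (higher overlap -> less negative).
    instr_words = set(instruction.lower().split())
    overlaps = []
    for cap in captions:
        cap_words = set(cap.lower().split())
        overlaps.append(len(instr_words & cap_words) / max(len(instr_words), 1))
    mean_overlap = sum(overlaps) / max(len(overlaps), 1)
    return math.log(max(mean_overlap, 1e-6))  # floor avoids log(0)


def reward(instruction: str, trajectory: List[str]) -> float:
    # The generic reward: likelihood of the instruction given the trajectory.
    return instruction_log_likelihood(instruction, caption_frames(trajectory))


# A trajectory matching the instruction should score higher than a mismatched one.
good = ["robot picks up the red block", "robot places red block on shelf"]
bad = ["robot wanders near the wall", "robot bumps into the table"]
assert reward("pick up the red block", good) > reward("pick up the red block", bad)
```

The comparison at the end mirrors the paper's claim: the likelihood acts like a reward because trajectories consistent with the instruction receive higher values than similar but incorrect ones.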
Dec-6-2023
- Country:
- North America > United States (0.28)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks > Deep Learning (0.93)
- Reinforcement Learning (1.00)
- Natural Language > Large Language Model (1.00)
- Representation & Reasoning (1.00)
- Robots (1.00)
- Vision (1.00)