Optimistic Task Inference for Behavior Foundation Models
Thomas Rupf, Marco Bagatella, Marin Vlastelica, Andreas Krause
Behavior Foundation Models (BFMs) can retrieve a high-performing policy for any reward function specified directly at test time, a capability commonly referred to as zero-shot reinforcement learning (RL). While this process is very efficient in terms of compute, it can be less so in terms of data: as a standard assumption, BFMs require computing rewards over a non-negligible inference dataset, assuming either access to a functional form of the reward or significant labeling effort. To alleviate these limitations, we tackle the problem of task inference purely through interaction with the environment at test time. We propose OpTI-BFM, an optimistic decision criterion that directly models uncertainty over reward functions and guides BFMs in collecting data for task inference. Formally, we provide a regret bound for well-trained BFMs through a direct connection to upper-confidence algorithms for linear bandits. Empirically, we evaluate OpTI-BFM on established zero-shot benchmarks and observe that it enables successor-features-based BFMs to identify and optimize an unseen reward function in a handful of episodes with minimal compute overhead. Code is available at https://github.com/ThomasRupf/opti-bfm.
arXiv.org Artificial Intelligence
Oct-24-2025
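
Since the abstract frames test-time task inference as a linear bandit over reward weights, the following is a minimal sketch of one plausible instantiation for a successor-features-based BFM, not the authors' released implementation (see the repository above). The interfaces bfm.successor_features, bfm.policy, bfm.features, the gym-like env API, and the finite candidate set of task vectors are all hypothetical stand-ins introduced for illustration.

```python
import numpy as np

# Sketch of optimistic task inference, assuming rewards are linear in known
# features, r(s) ~= phi(s) @ z*, so uncertainty over the task vector z* can
# be tracked with regularized least squares, as in UCB algorithms for
# linear bandits. All bfm/env interfaces below are hypothetical.

def optimistic_task_inference(bfm, env, candidates, d, episodes=10,
                              lam=1.0, beta=1.0):
    A = lam * np.eye(d)  # regularized design matrix of visited features
    b = np.zeros(d)      # accumulated reward-weighted features
    z_hat = np.zeros(d)
    for _ in range(episodes):
        A_inv = np.linalg.inv(A)
        z_hat = A_inv @ b  # ridge estimate of the unknown task vector z*

        def ucb(z):
            # Optimistic value of the policy conditioned on z: mean estimate
            # plus an exploration bonus, as in LinUCB, with the policy's
            # expected discounted feature sum psi(z) playing the role of the
            # action's feature vector.
            psi = bfm.successor_features(z)
            return psi @ z_hat + beta * np.sqrt(psi @ A_inv @ psi)

        z = max(candidates, key=ucb)  # act with the most optimistic task
        obs, done = env.reset(), False
        while not done:
            obs, reward, done = env.step(bfm.policy(z, obs))
            phi = bfm.features(obs)  # features of the visited state
            A += np.outer(phi, phi)  # update the belief over z*
            b += reward * phi
    return z_hat
```

Acting on the most optimistic candidate steers rollouts toward states whose features are informative about the unknown reward, which is consistent with the abstract's claim that an unseen reward function can be identified and optimized within a handful of episodes.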