You Only Live Once: Single-Life Reinforcement Learning
Annie S. Chen, Chelsea Finn
– Neural Information Processing Systems
Reinforcement learning algorithms are typically designed to learn a performant policy that can repeatedly and autonomously complete a task, usually starting from scratch. However, in many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial. For example, imagine a disaster relief robot tasked with retrieving an item from a fallen building, where it cannot get direct supervision from humans. It must retrieve this object within one test-time trial, and must do so while tackling unknown obstacles, though it may leverage knowledge it has of the building before the disaster. We formalize this problem setting, which we call single-life reinforcement learning (SLRL), where an agent must complete a task within a single episode without interventions, utilizing its prior experience while contending with some form of novelty. SLRL provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations, and we find that algorithms designed for standard episodic reinforcement learning often struggle to recover from out-of-distribution states in this setting.
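The setting sketched in the abstract suggests a concrete evaluation protocol: the agent faces a shifted environment for one uninterrupted trial, adapting online from its prior experience with no resets or human interventions. The sketch below illustrates that protocol with a toy 1-D grid-world, a hypothetical prior policy, and a stand-in adaptation rule; all of the names, the environment, and the adaptation step are illustrative assumptions, not the paper's method or benchmarks.

```python
# Minimal sketch of a single-life evaluation protocol, assuming a toy 1-D
# grid-world with a novel obstacle. The environment, policy, and adaptation
# rule are illustrative stand-ins, not the paper's method or benchmarks.

class ShiftedGridWorld:
    """1-D corridor containing an obstacle the prior policy never saw."""

    def __init__(self, length=20, obstacle=12):
        self.length = length
        self.obstacle = obstacle  # the novelty: blocks plain forward motion
        self.pos = 0

    def step(self, action):
        # Actions: 1 = step forward, 2 = jump forward two cells.
        nxt = self.pos + action
        if nxt == self.obstacle and action != 2:
            nxt = self.pos                      # blocked unless jumping
        self.pos = min(nxt, self.length)
        done = self.pos >= self.length          # task succeeds exactly once
        reward = 1.0 if done else -0.01         # sparse success signal
        return self.pos, reward, done


def prior_policy(pos):
    """Stand-in for a policy trained on prior (pre-shift) experience."""
    return 1  # always walk forward; never needed to jump before


def naive_adapt(policy, stuck_pos):
    """Toy online adaptation: after getting stuck, try jumping there."""
    return lambda p: 2 if p == stuck_pos else policy(p)


def single_life_trial(env, policy, adapt, max_steps=500):
    """One uninterrupted trial: no resets, no interventions, online updates."""
    total_reward, pos = 0.0, env.pos
    for t in range(max_steps):
        new_pos, reward, done = env.step(policy(pos))
        total_reward += reward
        if done:
            return True, t + 1, total_reward
        if new_pos == pos:                      # out-of-distribution: stuck
            policy = adapt(policy, pos)         # adapt within the same life
        pos = new_pos
    return False, max_steps, total_reward


if __name__ == "__main__":
    success, steps, ret = single_life_trial(
        ShiftedGridWorld(), prior_policy, naive_adapt
    )
    print(f"success={success} steps={steps} return={ret:.2f}")
```

An episodic baseline would reset the environment between attempts; here the single call to `single_life_trial` is the entire evaluation, which is what makes recovery from out-of-distribution states the central difficulty.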