Task-unaware Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation

Pengzhi Yang, Xinyu Wang, Ruipeng Zhang, Cong Wang, Frans Oliehoek, Jens Kober

arXiv.org Artificial Intelligence 

Real-world environments require robots to continuously acquire new skills while retaining previously learned abilities, all without clearly defined task boundaries. Storing all past data to prevent forgetting is impractical due to storage and privacy concerns. To address this, we propose a method that efficiently restores a robot's proficiency in previously learned tasks over its lifespan. Using an Episodic Memory (EM), our approach enables experience replay during training and retrieval during testing for local fine-tuning, allowing rapid adaptation to previously encountered problems. In addition, we introduce a selective weighting mechanism that emphasizes the most challenging segments of retrieved demonstrations, focusing local adaptation where it is most needed. This framework offers a scalable solution for lifelong learning without explicit task identifiers or implicit task boundaries, combining retrieval-based adaptation with selective weighting to enhance robot performance in open-ended scenarios.

Our approach thus tackles lifelong learning without distinct task boundaries. To emulate human learning patterns, the method consists of three phases: learning, reviewing, and testing. In the learning phase, the robot is exposed to various demonstrations and stores a subset of this data as episodic memory M. Maintaining a balance between stability (retaining previously learned skills) and plasticity (acquiring new ones) is crucial as models face long sequences of tasks over time.
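To make the retrieve-then-adapt idea above concrete, the sketch below pairs an episodic memory with weighted local fine-tuning at test time. This is a minimal illustration under stated assumptions, not the authors' implementation: the PyTorch policy, the cosine-similarity retrieval, and the per-sample-error weighting heuristic (standing in for the paper's selective weighting mechanism) are all introduced here for illustration.

```python
# Minimal sketch: episodic memory + retrieval-based weighted local adaptation.
# Assumptions: observations double as retrieval keys, retrieval is cosine
# similarity, and "challenging" segments are approximated by the current
# behaviour-cloning error of the policy.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EpisodicMemory:
    """Stores a subset of past demonstrations as (key, observation, action) triples."""

    def __init__(self):
        self.keys, self.obs, self.acts = [], [], []

    def add(self, key, observation, action):
        self.keys.append(key)
        self.obs.append(observation)
        self.acts.append(action)

    def retrieve(self, query, k=5):
        # Rank stored keys by cosine similarity to the query embedding.
        keys = torch.stack(self.keys)
        sims = F.cosine_similarity(keys, query.unsqueeze(0), dim=-1)
        idx = sims.topk(min(k, len(self.keys))).indices.tolist()
        return (torch.stack([self.obs[i] for i in idx]),
                torch.stack([self.acts[i] for i in idx]))


def weighted_local_adaptation(policy, memory, query, k=5, steps=10, lr=1e-3):
    """Fine-tune the policy on retrieved demonstrations, up-weighting hard samples."""
    obs, acts = memory.retrieve(query, k)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(steps):
        pred = policy(obs)
        per_sample = F.mse_loss(pred, acts, reduction="none").mean(dim=-1)
        # Emphasize the retrieved segments the policy currently reproduces worst.
        weights = torch.softmax(per_sample.detach(), dim=0)
        loss = (weights * per_sample).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy


if __name__ == "__main__":
    policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
    memory = EpisodicMemory()
    # Learning phase: store a subset of demonstration data in episodic memory.
    for _ in range(32):
        o, a = torch.randn(8), torch.randn(2)
        memory.add(o, o, a)  # the observation itself serves as the retrieval key
    # Testing phase: retrieve similar experiences and locally adapt before acting.
    weighted_local_adaptation(policy, memory, query=torch.randn(8))
```

The key design choice this sketch highlights is that adaptation is local: only the retrieved neighbours of the current situation are replayed, and their contribution is weighted by how poorly the current policy handles them, so the fine-tuning budget is spent where it is most needed.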