FAST: Similarity-based Knowledge Transfer for Efficient Policy Learning

Capurso, Alessandro, Piccoli, Elia, Bacciu, Davide

arXiv.org Artificial Intelligence 

Transfer Learning (TL) offers the potential to accelerate learning by transferring knowledge across tasks. However, it faces critical challenges such as negative transfer, domain adaptation, and inefficiency in selecting solid source policies. These issues are especially pressing in evolving domains, e.g. game development, where scenarios change and agents must adapt; continuously releasing new agents is costly and inefficient. In this work we address these key issues in TL to improve knowledge transfer and agent performance across tasks while reducing computational costs. The proposed methodology, FAST - Framework for Adaptive Similarity-based Transfer, leverages visual frames and textual descriptions to build a latent representation of task dynamics, which is exploited to estimate similarity between environments. The similarity scores guide our method in choosing candidate policies from which abilities are transferred to simplify learning of novel tasks. Experimental results, over multiple racing tracks, demonstrate that FAST achieves competitive final performance compared to learning-from-scratch methods while requiring significantly fewer training steps.

Learning is often thought of as a process rooted in interaction with the environment. Reinforcement Learning (RL) expands on this core concept by viewing learning as a trial-and-error process, in which agents engage with the environment, make choices, and receive feedback in the form of rewards or penalties. Traditionally, agents are trained from scratch to accomplish a single task, requiring extensive interactions with the environment to achieve proficiency, far more than a human would need for comparable tasks. One primary challenge in RL is the substantial computational demand imposed by simulation, where training time and data requirements scale up for complex tasks. In game development and other evolving environments, it is expensive and sub-optimal to start from zero at each iteration.
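The core selection step described above, scoring similarity between a target task's latent representation and those of previously solved source tasks, then picking the best-matching source policy, can be sketched as follows. This is a minimal illustration assuming cosine similarity over fixed-size embedding vectors; the function names and the embedding dimensionality are hypothetical, not taken from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two task embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_source_policy(target_emb: np.ndarray,
                         source_embs: list[np.ndarray]) -> tuple[int, list[float]]:
    """Score each source task against the target and return the index of
    the most similar one along with all similarity scores."""
    scores = [cosine_similarity(target_emb, e) for e in source_embs]
    return int(np.argmax(scores)), scores

# Hypothetical usage: three source tasks with 4-dim latent embeddings.
sources = [np.array([1.0, 0.0, 0.2, 0.1]),
           np.array([0.0, 1.0, 0.3, 0.9]),
           np.array([0.5, 0.5, 0.5, 0.5])]
target = np.array([0.9, 0.1, 0.2, 0.1])
best, scores = select_source_policy(target, sources)
```

In a full pipeline, `best` would index the pretrained policy used to initialize or guide learning on the novel task; the threshold below which transfer is skipped (to avoid negative transfer) would be a design choice of the method.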