Cross-Task Experiential Learning on LLM-based Multi-Agent Collaboration
Yilong Li, Chen Qian, Yu Xia, Ruijie Shi, Yufan Dang, Zihao Xie, Ziming You, Weize Chen, Cheng Yang, Weichuan Liu, Ye Tian, Xuantang Xiong, Lei Han, Zhiyuan Liu, Maosong Sun
arXiv.org Artificial Intelligence
Large Language Model-based multi-agent systems (MAS) have shown remarkable progress in solving complex tasks through collaborative reasoning and inter-agent critique. However, existing approaches typically treat each task in isolation, resulting in redundant computation and limited generalization across structurally similar tasks. To address this, we introduce multi-agent cross-task experiential learning (MAEL), a novel framework that endows LLM-driven agents with explicit cross-task learning and experience accumulation. We model the task-solving workflow on a graph-structured multi-agent collaboration network, where agents propagate information and coordinate via explicit connectivity. During the experiential learning phase, we quantify the quality of each step in the task-solving workflow and store the resulting rewards, along with the corresponding inputs and outputs, in each agent's individual experience pool. During inference, agents retrieve high-reward, task-relevant experiences as few-shot examples to enhance the effectiveness of each reasoning step, thereby enabling more accurate and efficient multi-agent collaboration. Experimental results on diverse datasets demonstrate that MAEL empowers agents to learn effectively from prior task experiences, achieving faster convergence and producing higher-quality solutions on current tasks.
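The per-agent experience pool described above can be sketched as a store of (task, input, output, reward) records with reward-filtered, relevance-ranked retrieval. This is a minimal illustrative sketch, not the paper's implementation: the class and method names (`Experience`, `ExperiencePool`, `retrieve`, `min_reward`) are hypothetical, and the word-overlap relevance score stands in for whatever task-similarity measure the framework actually uses (e.g. embedding similarity).

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    task: str         # textual description of the solved task
    step_input: str   # input the agent received at this workflow step
    step_output: str  # output the agent produced
    reward: float     # quality score assigned during experiential learning

@dataclass
class ExperiencePool:
    entries: list = field(default_factory=list)

    def add(self, exp: Experience) -> None:
        self.entries.append(exp)

    def retrieve(self, query: str, k: int = 2, min_reward: float = 0.5):
        """Return up to k high-reward experiences most relevant to `query`.

        Relevance here is naive word overlap, a placeholder for a real
        task-similarity measure; entries below min_reward are excluded.
        """
        query_words = set(query.lower().split())
        scored = []
        for exp in self.entries:
            if exp.reward < min_reward:
                continue
            overlap = len(query_words & set(exp.task.lower().split()))
            scored.append((overlap, exp.reward, exp))
        # Rank by relevance first, then by reward.
        scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
        return [exp for _, _, exp in scored[:k]]
```

At inference time, the retrieved records would be formatted as few-shot examples and prepended to the agent's prompt for the current reasoning step.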
May-30-2025