Towards Continuous Intelligence Growth: Self-Training, Continual Learning, and Dual-Scale Memory in SuperIntelliAgent

Lin, Jianzhe, Pan, Zeyu, Zhu, Yun, Song, Ruiqi, Yang, Jining

arXiv.org Artificial Intelligence 

We introduce SuperIntelliAgent, an agentic learning framework that couples a trainable small diffusion model (the learner) with a frozen large language model (the verifier), enabling continual intelligence growth through self-supervised interaction. Unlike conventional supervised fine-tuning with annotated data, SuperIntelliAgent learns autonomously in an annotation-free manner: the learner generates candidate outputs, the verifier evaluates them via step-by-step reasoning, and the learner-verifier interaction loop produces chosen/rejected pairs for Direct Preference Optimization (DPO), transforming every input into a pseudo-training signal for continual self-improvement. The framework integrates a dual-scale memory mechanism--short-term, in-context memory that preserves reasoning traces across iterative refinement cycles, and long-term memory that consolidates acquired knowledge into model parameters through on-the-fly fine-tuning. To enhance optimization, a replay buffer selectively retains samples showing verifiable progress from failed to satisfied conditions and replays them as auxiliary supervision, reinforcing recent learning while bootstrapping adaptive curricula that accelerate intelligence acquisition. Designed to be infrastructure-agnostic, SuperIntelliAgent can be seamlessly integrated into existing agentic frameworks (e.g., AutoGen, Semantic Kernel), while simultaneously transforming ordinary inference cycles into lifelong optimization. We posit that agentic pairing constitutes the minimal reliable unit of growing intelligence, as paired feedback, augmented with partial-history replay, yields richer learning curricula, tighter preference alignment, and stronger generalization. With only a few DPO pairs generated automatically by SuperIntelliAgent and used for lightweight fine-tuning, the learner's performance improves across all benchmarks.
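The core loop the abstract describes, where the learner proposes candidates, the verifier ranks them, the best/worst pair becomes a chosen/rejected DPO example, and a replay buffer re-serves earlier pairs as auxiliary supervision, can be sketched as follows. This is a minimal illustration under our own assumptions: the names `self_training_step`, `ReplayBuffer`, and `DPOPair`, and the toy scoring function, are hypothetical and not the paper's actual implementation (in particular, the paper's buffer retains only samples showing verified failed-to-satisfied progress, which this sketch simplifies).

```python
import random
from dataclasses import dataclass


@dataclass
class DPOPair:
    """A preference pair consumed by a DPO fine-tuning step."""
    prompt: str
    chosen: str
    rejected: str


class ReplayBuffer:
    """Bounded FIFO store of past preference pairs replayed as auxiliary supervision."""

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self.pairs: list[DPOPair] = []

    def add(self, pair: DPOPair) -> None:
        self.pairs.append(pair)
        if len(self.pairs) > self.capacity:
            self.pairs.pop(0)  # drop the oldest pair

    def sample(self, k: int) -> list[DPOPair]:
        return random.sample(self.pairs, min(k, len(self.pairs)))


def self_training_step(prompt, learner_generate, verifier_score, buffer, n_candidates=4):
    """One annotation-free iteration: generate, verify, build a DPO pair, replay.

    learner_generate: callable producing one candidate output for a prompt.
    verifier_score: callable scoring a candidate (stands in for the frozen
    LLM verifier's step-by-step judgment; here just a number to rank by).
    """
    candidates = [learner_generate(prompt) for _ in range(n_candidates)]
    ranked = sorted(candidates, key=verifier_score, reverse=True)
    best, worst = ranked[0], ranked[-1]

    pair = None
    if verifier_score(best) > verifier_score(worst):
        # Only a strict preference yields a usable chosen/rejected pair.
        pair = DPOPair(prompt, chosen=best, rejected=worst)
        buffer.add(pair)

    # Mix the fresh pair with replayed ones; this batch would feed a
    # lightweight DPO update of the learner's parameters.
    return ([pair] if pair else []) + buffer.sample(2)
```

A usage example with a deterministic toy learner and a length-based "verifier" shows how a single inference cycle already yields a training batch; in the real framework the verifier's reasoning trace would also enter short-term memory.

```python
buf = ReplayBuffer()
outs = iter(["good answer", "bad", "ok", "meh"])
batch = self_training_step("q:", lambda p: next(outs), verifier_score=len, buffer=buf)
```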