Exploring Open-world Continual Learning with Knowns-Unknowns Knowledge Transfer

Yujie Li, Guannan Lai, Xin Yang, Yonghao Li, Marcello Bonsangue, Tianrui Li

arXiv.org Artificial Intelligence 

Open-World Continual Learning (OWCL) is a challenging paradigm in which models must incrementally learn new knowledge without forgetting while operating under an open-world assumption, handling incomplete training data and recognizing unknown samples at inference time. However, existing OWCL methods often treat open-set detection and continual learning as separate tasks, limiting their ability to integrate open-set detection with incremental classification. Moreover, current approaches focus primarily on transferring knowledge from known samples, neglecting the insights offered by unknown/open samples. To address these limitations, we formalize four distinct OWCL scenarios and conduct comprehensive empirical experiments to explore the potential challenges of OWCL. Our findings reveal a significant interplay between the open detection of unknowns and the incremental classification of knowns, challenging the widely held assumption that unknown detection and known classification are orthogonal processes. Building on these insights, we propose HoliTrans (Holistic Knowns-Unknowns Knowledge Transfer), a novel OWCL framework that integrates nonlinear random projection (NRP) to create a more linearly separable embedding space and distribution-aware prototypes (DAPs) to construct an adaptive knowledge space. In particular, HoliTrans effectively supports knowledge transfer for both known and unknown samples while dynamically updating representations of open samples during OWCL. Extensive experiments across various OWCL scenarios demonstrate that HoliTrans outperforms 22 competitive baselines, bridging the gap between OWCL theory and practice and providing a robust, scalable framework for advancing open-world learning paradigms.

Open-World Continual Learning (OWCL) [1], [2] represents a highly practical yet profoundly challenging machine learning paradigm. In OWCL, a model must continually adapt to an unbounded sequence of tasks in a dynamic open environment [3], [4], where novelties may emerge unpredictably at test time [5]-[7].

Xin Yang is the corresponding author (yangxin@swufe.edu.cn). Yujie Li, Guannan Lai, Xin Yang, and Yonghao Li are with the Southwestern University of Finance and Economics, China (E-mail: liyj1201@gmail.com). Yujie Li and Marcello Bonsangue are with the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, Netherlands (E-mail: liyj1201@gmail.com). Tianrui Li is with the School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China (e-mail: trli@swjtu.edu.cn). Manuscript received XX XX, 2025; revised XX XX, 2025.
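The abstract above only names the two main components of HoliTrans. As a rough, non-authoritative illustration of how a nonlinear random projection (NRP) followed by distribution-aware prototypes (DAPs) could be combined, the following minimal Python sketch is provided; the function names, dimensions, ReLU nonlinearity, and diagonal-Gaussian prototype form are assumptions of this sketch, not the paper's implementation.

    # Minimal sketch (not the authors' code): a nonlinear random projection
    # followed by simple distribution-aware class prototypes. All names,
    # dimensions, and the diagonal-Gaussian prototype choice are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def nrp(features, proj):
        """Project backbone features into a higher-dimensional space and apply
        a ReLU nonlinearity, which tends to make classes more linearly separable."""
        return np.maximum(features @ proj, 0.0)

    def fit_daps(embeddings, labels):
        """Distribution-aware prototypes: per class, keep the mean and a diagonal
        variance of the projected embeddings rather than a single point."""
        protos = {}
        for c in np.unique(labels):
            e = embeddings[labels == c]
            protos[c] = (e.mean(axis=0), e.var(axis=0) + 1e-6)
        return protos

    def score(embedding, protos):
        """Mahalanobis-style distance to each prototype; a large minimum distance
        can be read as evidence that the sample is unknown/open."""
        return {c: float(np.sum((embedding - mu) ** 2 / var))
                for c, (mu, var) in protos.items()}

    # Toy usage: 512-d backbone features projected to 2048 dimensions.
    proj = rng.standard_normal((512, 2048)) / np.sqrt(512)
    feats = rng.standard_normal((100, 512))
    labels = rng.integers(0, 5, size=100)
    emb = nrp(feats, proj)
    protos = fit_daps(emb, labels)
    dists = score(emb[0], protos)
    pred = min(dists, key=dists.get)  # nearest prototype = predicted known class

In such a setup, a test sample that is far from every prototype could be flagged as unknown, while the nearest prototype gives the incremental known-class prediction; how HoliTrans actually transfers knowledge to and from open samples is detailed in the paper itself.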