Collaborating Authors

 Lai, Guannan


Order-Robust Class Incremental Learning: Graph-Driven Dynamic Similarity Grouping

arXiv.org Artificial Intelligence

Class Incremental Learning (CIL) aims to enable models to learn new classes sequentially while retaining knowledge of previous ones. Although current methods have alleviated catastrophic forgetting (CF), recent studies highlight that the performance of CIL models is highly sensitive to the order of class arrival, particularly when sequentially introduced classes exhibit high inter-class similarity. To address this critical yet understudied challenge of class order sensitivity, we first extend existing CIL frameworks through theoretical analysis, proving that grouping classes with lower pairwise similarity during incremental phases significantly improves model robustness to order variations. Building on this insight, we propose Graph-Driven Dynamic Similarity Grouping (GDDSG), a novel method that employs graph coloring algorithms to dynamically partition classes into similarity-constrained groups. Each group trains an isolated CIL sub-model and constructs meta-features for class group identification. Experimental results demonstrate that our method effectively addresses the issue of class order sensitivity while achieving optimal performance in both model accuracy and anti-forgetting capability. Our code is available at https://github.com/AIGNLAI/GDDSG.
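
The grouping step lends itself to a short illustration. The sketch below is an assumption-laden illustration, not the released GDDSG code: it builds a similarity graph over class prototypes, links classes whose cosine similarity exceeds a threshold, and greedily colors the graph so that linked (highly similar) classes land in different groups; each resulting group would then train its own isolated sub-model. The prototype-based similarity measure, the threshold, and the greedy coloring order are illustrative assumptions.

```python
# Minimal sketch of similarity-constrained grouping via greedy graph coloring.
# Classes whose pairwise similarity exceeds a threshold are linked by an edge,
# and coloring ensures linked classes never share a group, so each group holds
# mutually dissimilar classes. Not the authors' implementation.
import numpy as np

def similarity_groups(class_prototypes: np.ndarray, threshold: float = 0.5):
    """class_prototypes: (num_classes, dim) mean features, one row per class."""
    # Cosine similarity between class prototypes (assumed similarity measure).
    normed = class_prototypes / np.linalg.norm(class_prototypes, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = sim.shape[0]

    # Edge between two classes iff their similarity is above the threshold.
    adjacent = (sim > threshold) & ~np.eye(n, dtype=bool)

    # Greedy coloring: each class takes the smallest color unused by its
    # neighbors, so classes sharing an edge (high similarity) never share a group.
    colors = -np.ones(n, dtype=int)
    for c in np.argsort(-adjacent.sum(axis=1)):   # highest-degree vertices first
        taken = {colors[j] for j in np.flatnonzero(adjacent[c]) if colors[j] >= 0}
        color = 0
        while color in taken:
            color += 1
        colors[c] = color

    # Group classes by color; each group would train its own CIL sub-model.
    return [np.flatnonzero(colors == g).tolist() for g in range(colors.max() + 1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(similarity_groups(rng.normal(size=(10, 32)), threshold=0.3))
```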


Exploring Open-world Continual Learning with Knowns-Unknowns Knowledge Transfer

arXiv.org Artificial Intelligence

Open-World Continual Learning (OWCL) is a challenging paradigm in which models must incrementally learn new knowledge without forgetting while operating under an open-world assumption. This requires handling incomplete training data and recognizing unknown samples during inference. However, existing OWCL methods often treat open-set detection and continual learning as separate tasks, limiting their ability to integrate open-set detection and incremental classification. Moreover, current approaches primarily focus on transferring knowledge from known samples, neglecting the insights derived from unknown/open samples. To address these limitations, we formalize four distinct OWCL scenarios and conduct comprehensive empirical experiments to explore potential challenges in OWCL. Our findings reveal a significant interplay between the open-set detection of unknowns and the incremental classification of knowns, challenging the widely held assumption that unknown detection and known classification are orthogonal processes. Building on these insights, we propose HoliTrans (Holistic Knowns-Unknowns Knowledge Transfer), a novel OWCL framework that integrates nonlinear random projection (NRP) to create a more linearly separable embedding space and distribution-aware prototypes (DAPs) to construct an adaptive knowledge space. In particular, HoliTrans effectively supports knowledge transfer for both known and unknown samples while dynamically updating representations of open samples during OWCL. Extensive experiments across various OWCL scenarios demonstrate that HoliTrans outperforms 22 competitive baselines, bridging the gap between OWCL theory and practice and providing a robust, scalable framework for advancing open-world learning paradigms. Open-World Continual Learning (OWCL) [1], [2] represents a highly practical yet profoundly challenging machine learning paradigm: a model must continually adapt to an unbounded sequence of tasks in a dynamic open environment [3], [4], where novelties may emerge unpredictably during testing over time [5]-[7].
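
To make the two named components more concrete, the following sketch pairs a fixed nonlinear random projection (here assumed to be a ReLU applied after a Gaussian random matrix) with per-class prototypes and a distance-based rejection rule for unknowns. Everything in it, including the class structure, the 95th-percentile radius, and the rejection rule, is a hypothetical illustration rather than the HoliTrans implementation.

```python
# Hypothetical sketch: nonlinear random projection (NRP) plus prototypes with
# a distribution-aware rejection radius. The projection form, the radius
# definition, and the rejection rule are assumptions, not HoliTrans code.
import numpy as np

class NRPPrototypeClassifier:
    def __init__(self, in_dim: int, proj_dim: int = 2048, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Fixed (non-trained) random projection; the nonlinearity is meant to
        # make the projected features easier to separate linearly.
        self.W = rng.normal(scale=1.0 / np.sqrt(in_dim), size=(in_dim, proj_dim))
        self.prototypes = {}   # class id -> mean projected feature
        self.radii = {}        # class id -> distance threshold

    def _project(self, x: np.ndarray) -> np.ndarray:
        return np.maximum(x @ self.W, 0.0)   # ReLU(x W)

    def add_class(self, class_id: int, features: np.ndarray):
        """features: (num_samples, in_dim) training features of one known class."""
        z = self._project(features)
        proto = z.mean(axis=0)
        self.prototypes[class_id] = proto
        # Distribution-aware radius: keep ~95% of training samples inside.
        self.radii[class_id] = np.quantile(np.linalg.norm(z - proto, axis=1), 0.95)

    def predict(self, x: np.ndarray):
        z = self._project(x[None, :])[0]
        best, best_d = None, np.inf
        for cid, proto in self.prototypes.items():
            d = np.linalg.norm(z - proto)
            if d < best_d:
                best, best_d = cid, d
        # Reject as "unknown" when even the nearest prototype is too far away.
        return best if best is not None and best_d <= self.radii[best] else "unknown"
```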


A New Perspective on Privacy Protection in Federated Learning with Granular-Ball Computing

arXiv.org Artificial Intelligence

Federated Learning (FL) facilitates collaborative model training while prioritizing privacy by avoiding direct data sharing. However, most existing work addresses privacy challenges at the level of the model's internal parameters and corresponding outputs, while neglecting the input level. To address this gap, we propose a novel framework called Granular-Ball Federated Learning (GrBFL) for image classification. Rather than relying on the finest-grained input data as traditional methods do, GrBFL segments images into multiple regions of optimal coarse granularity, which are then reconstructed into a graph structure. We design a two-dimensional binary search segmentation algorithm based on variance constraints for GrBFL, which effectively removes redundant information while preserving key representative features. Extensive theoretical analysis and experiments demonstrate that GrBFL not only safeguards privacy and enhances efficiency but also maintains robust utility, consistently outperforming other state-of-the-art FL methods. The code is available at https://github.com/AIGNLAI/GrBFL.
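
As a rough illustration of variance-constrained segmentation followed by graph reconstruction, the sketch below recursively halves a region along its longer axis until its pixel variance falls below a threshold, then treats the resulting coarse regions as graph nodes (mean-intensity features) connected when their bounding boxes touch. The split rule, the variance threshold, and the node features are assumptions for illustration, not the GrBFL algorithm as released.

```python
# Illustrative sketch of variance-constrained region splitting and graph
# construction. Split rule, threshold, and features are assumptions.
import numpy as np

def split_regions(img: np.ndarray, var_threshold: float = 0.01, min_size: int = 4):
    """img: (H, W) grayscale array in [0, 1]. Returns (y0, y1, x0, x1) boxes."""
    regions = []

    def recurse(y0, y1, x0, x1):
        patch = img[y0:y1, x0:x1]
        h, w = y1 - y0, x1 - x0
        # Stop when the region is homogeneous enough or too small to split.
        if patch.var() <= var_threshold or max(h, w) <= min_size:
            regions.append((y0, y1, x0, x1))
            return
        if h >= w:                       # binary split along the longer axis
            mid = y0 + h // 2
            recurse(y0, mid, x0, x1)
            recurse(mid, y1, x0, x1)
        else:
            mid = x0 + w // 2
            recurse(y0, y1, x0, mid)
            recurse(y0, y1, mid, x1)

    recurse(0, img.shape[0], 0, img.shape[1])
    return regions

def region_graph(img: np.ndarray, regions):
    """Nodes = region mean intensities; edges link regions whose boxes touch."""
    feats = np.array([img[y0:y1, x0:x1].mean() for (y0, y1, x0, x1) in regions])
    edges = []
    for i, (ay0, ay1, ax0, ax1) in enumerate(regions):
        for j, (by0, by1, bx0, bx1) in enumerate(regions[i + 1:], start=i + 1):
            if ay0 <= by1 and by0 <= ay1 and ax0 <= bx1 and bx0 <= ax1:
                edges.append((i, j))
    return feats, edges
```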