Class-Incremental Learning


FeDMRA: Federated Incremental Learning with Dynamic Memory Replay Allocation

Wang, Tiantian, Xiang, Xiang, Du, Simon S.

arXiv.org Machine Learning

In federated healthcare systems, Federated Class-Incremental Learning (FCIL) has emerged as a key paradigm, enabling continuous adaptive model learning among distributed clients while safeguarding data privacy. However, in practical applications, data across the nodes of the distributed framework often exhibits non-independent and identically distributed (non-IID) characteristics, rendering traditional continual learning methods inapplicable. To address these challenges, this paper considers a broader range of incremental task scenarios and proposes a dynamic memory allocation strategy for exemplar storage based on the data replay mechanism. This strategy exploits the inherent data heterogeneity while accounting for performance fairness across all participating clients, thereby establishing a balanced and adaptive solution to mitigate catastrophic forgetting. Unlike fixed allocation of client exemplar memory, the proposed scheme rationally distributes limited storage resources among clients to improve model performance. Furthermore, extensive experiments are conducted on three medical image datasets, and the results demonstrate significant performance improvements compared to existing baseline models.
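The abstract does not specify the allocation rule. As a hypothetical sketch, one way to implement dynamic (rather than fixed) exemplar memory allocation is to split a shared exemplar budget across clients in proportion to a per-client heterogeneity score (e.g., divergence of each client's label distribution from the global one); the function name and the floor-guarantee detail below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def allocate_exemplar_slots(total_budget, heterogeneity, floor=1):
    """Split a global exemplar budget across clients (illustrative sketch).

    Each client is guaranteed `floor` slots; the remainder is divided in
    proportion to its heterogeneity score, so more heterogeneous (harder)
    clients receive more replay memory.
    """
    heterogeneity = np.asarray(heterogeneity, dtype=float)
    n = len(heterogeneity)
    remaining = total_budget - floor * n
    shares = heterogeneity / heterogeneity.sum()
    slots = floor + np.floor(shares * remaining).astype(int)
    # Hand out slots lost to rounding, most-heterogeneous clients first.
    for i in np.argsort(-heterogeneity)[: total_budget - slots.sum()]:
        slots[i] += 1
    return slots

print(allocate_exemplar_slots(100, [0.5, 0.3, 0.2]))  # → [50 30 20]
```

A fixed scheme would give each of the three clients 33 or 34 slots; here the client with the most heterogeneous data receives 50, at the expense of the easier clients.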





Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning

Neural Information Processing Systems

Recent incremental learning for action recognition usually stores representative videos to mitigate catastrophic forgetting. However, only a few bulky videos can be stored due to the limited memory.
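The paper learns the condensed frame; as a much simpler stand-in that illustrates the memory argument, one can replace each stored video with a single aggregate frame (the temporal mean here is an assumption for the sketch, not the paper's learned condensation):

```python
import numpy as np

def condense_video(frames):
    """Condense a (T, H, W, C) video into one representative frame.

    The paper learns this frame end-to-end; as a naive stand-in we take
    the temporal mean, cutting exemplar memory by a factor of T.
    """
    frames = np.asarray(frames, dtype=np.float32)
    return frames.mean(axis=0)

video = np.random.rand(16, 8, 8, 3)  # 16 frames
frame = condense_video(video)
print(frame.shape)  # (8, 8, 3): one frame stored instead of 16
```

Under a fixed replay budget, storing one condensed frame per video lets the buffer hold T times as many exemplar videos as storing them whole.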




Supplementary Materials for FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning

Neural Information Processing Systems

Since the ResNet-18 feature extractor uses a ReLU activation function, the feature representation values are all non-negative, so the inputs to Tukey's ladder of powers transformation are all valid. As expected, the performance of both methods drops slightly when the pre-training is not done on similar classes. Still, FeCAM outperforms NCM by about 10% on the final accuracy. In Algorithm 1, we present the pseudo-code for the FeCAM classifier.

Algorithm 1 FeCAM. Require: Training data (D
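Tukey's ladder of powers maps x to x^λ (and to log x at λ = 0), which is well defined here precisely because ReLU features are non-negative. A minimal sketch of that transform together with a Mahalanobis-style class-mean classifier in the spirit of FeCAM follows; the covariance regularization (`eps` ridge term) is a simplifying assumption, standing in for the shrinkage details the paper uses:

```python
import numpy as np

def tukey_transform(x, lam=0.5):
    """Tukey's ladder of powers; valid for non-negative x (ReLU features).

    log1p is used at lam == 0 so that exact zeros do not produce -inf
    (the textbook transform uses log x).
    """
    x = np.asarray(x, dtype=float)
    return np.power(x, lam) if lam != 0 else np.log1p(x)

def mahalanobis_classify(feat, class_means, class_covs, eps=1e-3):
    """Assign feat to the class with the smallest Mahalanobis distance,
    using one (mean, covariance) pair per class."""
    dists = []
    for mu, cov in zip(class_means, class_covs):
        # Ridge term keeps the per-class covariance invertible.
        inv = np.linalg.inv(cov + eps * np.eye(cov.shape[0]))
        d = np.asarray(feat, dtype=float) - mu
        dists.append(d @ inv @ d)
    return int(np.argmin(dists))
```

Unlike NCM, which uses plain Euclidean distance to class means, this scores each class with its own covariance, which is exactly the heterogeneity of class distributions that FeCAM exploits.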