FeDMRA: Federated Incremental Learning with Dynamic Memory Replay Allocation
Tiantian Wang, Xiang Xiang, Simon S. Du
In federated healthcare systems, Federated Class-Incremental Learning (FCIL) has emerged as a key paradigm, enabling continuous, adaptive model learning across distributed clients while safeguarding data privacy. In practice, however, data across the nodes of the distributed framework is often non-independent and identically distributed (non-IID), rendering traditional continual learning methods inapplicable. To address these challenges, this paper considers a more comprehensive set of incremental task scenarios and proposes a dynamic memory allocation strategy for exemplar storage built on the data replay mechanism. The strategy exploits the inherent structure of data heterogeneity while accounting for performance fairness across all participating clients, yielding a balanced and adaptive solution for mitigating catastrophic forgetting. Unlike fixed allocation of client exemplar memory, the proposed scheme allocates the limited storage budget rationally among clients to improve model performance. Extensive experiments on three medical image datasets demonstrate significant performance improvements over existing baselines.
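The dynamic allocation idea can be illustrated with a minimal sketch. The abstract does not specify the actual allocation rule, so the entropy-based heterogeneity score, the uniform floor, and the function name `allocate_exemplar_memory` below are all assumptions for illustration: clients with more skewed (heterogeneous) label distributions receive a larger share of a fixed global exemplar budget, while a floor term keeps the split from starving any client.

```python
import numpy as np

def allocate_exemplar_memory(client_label_counts, total_budget):
    """Hypothetical dynamic exemplar allocation (illustration only).

    Clients whose label distributions are more skewed (lower entropy)
    are assumed to need more replay slots; a floor of 1.0 in the score
    keeps the allocation from collapsing to zero for uniform clients.
    """
    scores = []
    for counts in client_label_counts:
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        # Entropy-based heterogeneity: low entropy => skewed => higher need.
        entropy = -(p * np.log(p + 1e-12)).sum()
        max_entropy = np.log(len(p))
        scores.append(1.0 + (max_entropy - entropy))
    scores = np.asarray(scores)
    shares = scores / scores.sum()
    alloc = np.floor(shares * total_budget).astype(int)
    # Hand leftover slots to the largest fractional remainders.
    leftover = total_budget - alloc.sum()
    order = np.argsort(-(shares * total_budget - alloc))
    alloc[order[:leftover]] += 1
    return alloc

# A uniform client vs. a heavily skewed client sharing a 100-slot budget:
print(allocate_exemplar_memory([[10, 10, 10], [28, 1, 1]], 100))
```

Under this toy rule the skewed client receives the larger share; the actual FeDMRA criterion may differ.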
- North America > United States (0.04)
- Asia > China > Hubei Province > Wuhan (0.04)
- Information Technology > Security & Privacy (0.68)
- Health & Medicine > Diagnostic Medicine > Imaging (0.48)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Poland > Masovia Province > Warsaw (0.04)
- Europe > Poland > Pomerania Province > Gdańsk (0.04)
- Education (0.46)
- Information Technology (0.46)
- North America > Canada > Ontario > Kingston (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Education > Educational Setting (0.46)
- Asia > China > Jiangsu Province > Nanjing (0.04)
- North America > United States > California (0.04)
Supplementary Materials for FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning
Since the ResNet-18 feature extractor uses a ReLU activation function, the feature representation values are all non-negative, so the inputs to Tukey's ladder of powers transformation are all valid. As expected, the performance of both methods drops slightly when the pre-training is not done on similar classes. Still, FeCAM outperforms NCM by about 10% in final accuracy. In Algorithm 1, we present the pseudocode for the FeCAM classifier. Algorithm 1 FeCAM Require: Training data (D
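The transformation referenced above is standard: Tukey's ladder of powers maps each non-negative value x to x^λ (or log x when λ = 0), which is why ReLU features are valid inputs. A minimal sketch follows; the helper name and the default λ = 0.5 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def tukey_transform(x, lam=0.5):
    """Tukey's ladder of powers for non-negative inputs.

    lam > 0: element-wise x**lam (lam = 0.5 gives the square root);
    lam == 0: log(x), with a small epsilon to guard zero entries.
    """
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x + 1e-8)
    return np.power(x, lam)

# ReLU features are non-negative, so the transform is always defined:
features = np.array([0.0, 0.25, 1.0, 4.0])
print(tukey_transform(features, lam=0.5))  # -> [0.  0.5 1.  2. ]
```

Smaller λ compresses large feature values more aggressively, which is the usual motivation for applying this transform before distance-based classification.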
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Asia > Vietnam (0.04)
- Asia > China > Sichuan Province (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)