RMM: Reinforced Memory Management for Class-Incremental Learning Supplementary Materials

Neural Information Processing Systems 

This is supplementary to Sections 5.1 and 5.2. To evaluate the performance of our RMM in unknown scenarios, we supplement the experiments by using policy functions trained "in distinct numbers of phases" and "on different datasets", and show the testing results on CIFAR-100 in Table S5. For example, Row 2 uses the policy learned on ImageNet-Subset. In Table S6, we can see the clear improvements. Further results (e.g., on ImageNet-Subset [4]) are available in Table S7.

Table S5. Testing results on CIFAR-100 using a policy learned on a different dataset.

No.  Method         Policy learned on   N=5    N=10   N=25
1    Baseline       -                   49.02  44.59  38.23
2    w/ RMM         ImageNet-Subset     53.15  50.05  42.89

We run our experiments using GPU workstations as follows:

The following table compares methods under a fixed memory budget of exemplars. Row 1 (baseline) is from the state-of-the-art method POD-AANets [10].

No.  Method         Memory budget of exemplars   N=5    N=10   N=25
1    Baseline       1000                         64.31  60.97  58.77
2    w/ RMM (ours)  1000                         68.20  65.57
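To make the experimental setup concrete, the sketch below illustrates the general idea of applying a memory-management policy at each incremental phase under a fixed exemplar budget (e.g., 1000 slots, N phases over 100 classes). This is a hypothetical illustration, not the authors' code: `uniform_policy` stands in for a simple herding-style baseline allocation, and `apply_policy_over_phases` shows where a learned RMM policy (trained on another dataset such as ImageNet-Subset) would be plugged in.

```python
# Hypothetical sketch (not the authors' implementation): a memory-management
# policy decides, at every incremental phase, how the fixed exemplar budget
# is allocated across the classes seen so far.

def uniform_policy(num_seen_classes, memory_budget):
    """Baseline allocation: an equal number of exemplar slots per seen class."""
    per_class = memory_budget // num_seen_classes
    return [per_class] * num_seen_classes

def apply_policy_over_phases(policy, total_classes=100, num_phases=5,
                             memory_budget=1000):
    """Run `policy` at every phase; return the per-phase slot allocations.

    A learned RMM-style policy (possibly trained on a different dataset)
    could be passed in place of `uniform_policy`.
    """
    classes_per_phase = total_classes // num_phases
    allocations = []
    for phase in range(1, num_phases + 1):
        seen = classes_per_phase * phase          # classes observed so far
        alloc = policy(seen, memory_budget)
        assert sum(alloc) <= memory_budget        # never exceed the budget
        allocations.append(alloc)
    return allocations

allocs = apply_policy_over_phases(uniform_policy)
print([sum(a) for a in allocs])  # slots actually used per phase
```

Swapping `uniform_policy` for a state-dependent learned policy is what distinguishes RMM from the fixed-allocation baseline in the tables above.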