Appendix: Variational Continual Bayesian Meta-Learning
In variational continual learning, the posterior distribution of interest is frequently intractable, so approximation is required. We summarize the meta-training process of our VC-BML in Algorithm 1.

Moreover, we evaluate FTML on unseen tasks (i.e., tasks sampled from the meta-test set) instead of the training tasks that the original FTML used. It would be unfair to adopt the original initialization procedure in OSML.

BOMVI [10]: In our experiments, we use variational inference to approximate the posterior of the meta-parameters.

E.3.2 Settings

As the latent variables in this paper are the meta-parameters and the task-specific parameters, the dimensionality of the latent space is determined by the number of parameters in the deep neural network. In particular, we define a CNN architecture and present its details in Table 1.
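Since the posterior over the meta-parameters is approximated with variational inference, a minimal sketch may help make the idea concrete. The snippet below fits a mean-field Gaussian posterior over a single parameter by maximizing the ELBO with the reparameterization trick. This is not the authors' Algorithm 1: the toy regression task, the standard-normal prior, the fixed likelihood noise, and the name `elbo_step` are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of mean-field variational
# inference: fit q(theta) = N(mu, sigma^2) by maximizing the ELBO.
import torch

torch.manual_seed(0)

# Toy data y = 2x + noise, standing in for one task's support set (assumed).
x = torch.linspace(-1.0, 1.0, 32).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)

# Variational parameters of q(theta): mean and log-std.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
prior = torch.distributions.Normal(0.0, 1.0)  # assumed standard-normal prior
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def elbo_step():
    """One stochastic ELBO ascent step using the reparameterization trick."""
    opt.zero_grad()
    q = torch.distributions.Normal(mu, log_sigma.exp())
    theta = q.rsample()                       # reparameterized sample of theta
    pred = x * theta                          # linear model as a stand-in network
    log_lik = torch.distributions.Normal(pred, 0.1).log_prob(y).sum()
    kl = torch.distributions.kl_divergence(q, prior).sum()
    loss = kl - log_lik                       # negative ELBO
    loss.backward()
    opt.step()
    return loss.item()

for step in range(500):
    elbo_step()

# The posterior mean should approach the true slope (2.0).
print(f"posterior mean {mu.item():.2f}, std {log_sigma.exp().item():.2f}")
```

In the paper's setting the same construction would be applied per-parameter across the network weights, which is why the dimensionality of the latent space is determined by the number of network parameters.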