Review for NeurIPS paper: Gradient-EM Bayesian Meta-Learning
Neural Information Processing Systems
Additional Feedback: I think the technical novelty of the proposed algorithm is somewhat limited, since the resulting algorithm is essentially a Bayesian version of Reptile (GEM-BML using the L1 loss). Nevertheless, I like the reinterpretation given in the paper, especially the coordinate-descent view of the meta-update, which decouples the inner-level and outer-level updates. Any argument highlighting the technical novelty of the proposed method would be appreciated. The important aspect of the proposed method is the robustness it gains from being Bayesian; the paper could be strengthened by additional experiments probing this aspect, for example by testing predictive performance or calibration under distributional shift. The performance of GEM-BML and its variants is not that impressive in typical few-shot learning settings (the differences from the baselines are not statistically significant in some settings).
Feb-8-2025, 00:03:21 GMT