Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis

Neural Information Processing Systems 

We derive a novel information-theoretic analysis of the generalization properties of meta-learning algorithms. Empirical validations on both simulated data and a well-known few-shot benchmark show that, compared to previous bounds that depend on the squared norms of gradients, our bound is orders of magnitude tighter in most conditions.
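For context, information-theoretic analyses of this kind control the expected generalization gap by the mutual information between the learned hypothesis and the training data. A minimal sketch of the classical single-task bound of Xu and Raginsky (2017), on which such meta-learning analyses typically build (assuming the loss is $\sigma$-subgaussian under the data distribution), is:
\[
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]
where $W$ is the algorithm's output, $S = (Z_1, \dots, Z_n)$ is the training sample of size $n$, $L_S$ and $L_\mu$ are the empirical and population risks, and $I(W; S)$ is the mutual information between the hypothesis and the data. This is the standard single-task form, not the paper's meta-learning bound, which additionally accounts for the task environment and per-task adaptation.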