Meta Learning as Bayes Risk Minimization

Shin-ichi Maeda, Toshiki Nakanishi, Masanori Koyama

arXiv.org Machine Learning 

Meta-learning is a family of methods that use a set of interrelated tasks to learn a model that can quickly learn a new query task from a possibly small contextual dataset. In this study, we use a probabilistic framework to formalize what it means for two tasks to be related and reframe the meta-learning problem into the problem of Bayesian risk minimization (BRM). In our formulation, the BRM optimal solution is given by the predictive distribution computed from the posterior distribution of the task-specific latent variable conditioned on the contextual dataset, and this justifies the philosophy of Neural Process.

We show that, when we cast the meta-learning problem as BRM, the optimal solution is given by the predictive distribution computed from the posterior distribution of the latent variable conditioned on the contextual dataset. This result justifies the use of the predictive distribution in many previous studies of meta-learning, such as (Edwards & Storkey, 2017; Gordon et al., 2018; Garnelo et al., 2018). However, the optimality of the predictive distribution cannot be guaranteed if one uses an approximation of the posterior distribution that violates the way the posterior distribution changes with the contextual dataset, and this is unfortunately the case for most of the aforementioned works. For example, the variance of the posterior in these works does not converge to 0 as we take the size of the contextual dataset to infinity.
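The abstract states the BRM-optimal solution in words; the following is a hedged sketch of what that claim looks like as a formula, in assumed notation (z for the task-specific latent variable, D_c for the contextual dataset, (x, y) for a query pair; these symbols are illustrative, not taken verbatim from the paper):

```latex
% Sketch of a Bayes risk minimization objective under log loss,
% using assumed notation rather than the paper's own.
\[
  q^{*} \;=\; \arg\min_{q}\;
  \mathbb{E}_{p(z)\,p(D_c \mid z)\,p(x, y \mid z)}
  \bigl[\, -\log q(y \mid x, D_c) \,\bigr],
\]
% Under log loss, the minimizer is the posterior predictive distribution:
\[
  q^{*}(y \mid x, D_c) \;=\; \int p(y \mid x, z)\, p(z \mid D_c)\, \mathrm{d}z .
\]
```

The second identity is the standard fact that log loss is minimized by the posterior predictive, which is the optimality claim the abstract attributes to Neural Process-style methods.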
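To make the final criticism concrete, here is a small self-contained Python sketch (an illustration under assumed settings, not code from the paper): in a conjugate linear-Gaussian model, the exact posterior variance provably shrinks to zero as the contextual dataset grows, which is the behavior the authors argue amortized posterior approximations in prior work fail to reproduce.

```python
# Minimal sketch (hypothetical model, not from the paper): the task
# latent z ~ N(0, s0^2) generates contextual points y_i = z + eps_i,
# eps_i ~ N(0, s^2). The exact posterior over z given n such points is
# Gaussian with variance (1/s0^2 + n/s^2)^(-1), which tends to 0 as
# n grows -- unlike an approximate posterior with a fixed variance floor.

def exact_posterior_variance(n: int, s0: float = 1.0, s: float = 0.5) -> float:
    """Posterior variance of z after observing n contextual points."""
    return 1.0 / (1.0 / s0 ** 2 + n / s ** 2)

for n in (1, 10, 100, 1000):
    print(f"n={n:4d}  posterior variance={exact_posterior_variance(n):.6f}")
```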
