
Collaborating Authors

 Nakanishi, Toshiki


Meta Learning as Bayes Risk Minimization

arXiv.org Machine Learning

Meta-Learning is a family of methods that use a set of interrelated tasks to learn a model that can quickly learn a new query task from a possibly small contextual dataset. In this study, we use a probabilistic framework to formalize what it means for two tasks to be related and reframe the meta-learning problem into the problem of Bayesian risk minimization (BRM). In our formulation, the BRM optimal solution is given by the predictive distribution computed from the posterior distribution of the task-specific latent variable conditioned on the contextual dataset, and this justifies the philosophy of Neural Process.

We show that, when we cast the meta-learning problem as BRM, the optimal solution is given by the predictive distribution computed from the posterior distribution of the latent variable conditioned on the contextual dataset. This result justifies the use of the predictive distribution in many previous studies of meta-learning, such as (Edwards & Storkey, 2017; Gordon et al., 2018; Garnelo et al., 2018). However, the optimality of the predictive distribution cannot be guaranteed if one uses an approximation of the posterior distribution that violates the way the posterior distribution changes with the contextual dataset, and this is unfortunately the case for most of the aforementioned works. For example, the variance of the posterior in these works does not converge to 0 as we take ...
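The optimality claim admits a short worked derivation. The sketch below uses assumed notation (z for the task-specific latent variable, D_c for the contextual dataset, (x, y) for a query pair) and standard Bayes-risk reasoning under the log loss; it illustrates the idea rather than reproducing the paper's exact formulation:

```latex
% Assumed notation: z = task-specific latent variable, D_c = contextual
% dataset, (x, y) = query input/output pair, q = the learned predictor.
% Bayes risk under the log loss, averaged over tasks and data:
\begin{align*}
  R(q) &= \mathbb{E}_{z}\,\mathbb{E}_{D_c,\,(x,y)\mid z}
          \bigl[-\log q(y \mid x, D_c)\bigr], \\
  \arg\min_{q} R(q) &= p(y \mid x, D_c)
        = \int p(y \mid x, z)\, p(z \mid D_c)\, dz,
\end{align*}
% i.e. the likelihood averaged over the posterior of z given the context.
% The shrinking-variance requirement, shown for a conjugate linear-Gaussian
% model with n = |D_c| observations of mean z and noise variance \sigma^2:
\begin{equation*}
  \sigma_n^{2}
  = \Bigl(\tfrac{1}{\sigma_0^{2}} + \tfrac{n}{\sigma^{2}}\Bigr)^{-1}
  \longrightarrow 0 \quad \text{as } n \to \infty,
\end{equation*}
% so an approximate posterior whose variance does not vanish with n cannot
% track the true posterior -- the failure mode the abstract points to.
```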


Graph Residual Flow for Molecular Graph Generation

arXiv.org Machine Learning

Statistical generative models for molecular graphs attract attention from many researchers in the fields of bio- and chemo-informatics. Among these models, invertible flow-based approaches have not been fully explored yet. In this paper, we propose a powerful invertible flow for molecular graphs, called the graph residual flow (GRF). The GRF is based on residual flows, which are known to provide more flexible and complex non-linear mappings than traditional coupling flows. We theoretically derive non-trivial conditions under which the GRF is invertible, and present a way of keeping the entire flow invertible throughout training and sampling. Experimental results show that a generative model based on the proposed GRF achieves comparable generation performance with a much smaller number of trainable parameters than the existing flow-based model.
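The invertibility mechanism the abstract refers to can be illustrated with the generic residual-flow construction that the GRF builds on (cf. Behrmann et al., 2019): f(x) = x + g(x) is invertible whenever g is a contraction, and the inverse is computed by fixed-point iteration. The NumPy sketch below is a minimal illustration of that mechanism, not the authors' graph-specific architecture; the two-layer network, the 0.9 spectral-norm budget, and all names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # feature dimension for this toy example

def constrain(W, c=0.9):
    """Rescale W so its spectral norm is at most c (keeps g contractive)."""
    s = np.linalg.norm(W, 2)  # largest singular value of W
    return W * min(1.0, c / s)

W1 = constrain(rng.normal(size=(D, D)))
W2 = constrain(rng.normal(size=(D, D)))

def g(x):
    # tanh is 1-Lipschitz, so Lip(g) <= ||W2||_2 * ||W1||_2 <= 0.81 < 1
    return W2 @ np.tanh(W1 @ x)

def forward(x):
    # Residual mapping f(x) = x + g(x); invertible because g is a contraction
    return x + g(x)

def inverse(y, n_iters=100):
    # Banach fixed-point iteration x <- y - g(x); converges since Lip(g) < 1
    x = y.copy()
    for _ in range(n_iters):
        x = y - g(x)
    return x

x = rng.normal(size=D)
y = forward(x)
print(np.max(np.abs(inverse(y) - x)))  # ~0 up to float precision
```

The sketch only covers the dense-vector case; the "non-trivial conditions" the abstract mentions concern keeping such a residual map contractive, and hence invertible, when g operates on graph-structured inputs.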