Supplementary Material: Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes
We first review the notation introduced in the main body for convenience. Let $D_c, D_t \in \mathcal{S}$ denote a context and target set respectively, where $\mathcal{S}$ denotes the collection of all finite data sets. Later, as is common in recent meta-learning approaches (Garnelo et al. [3, 4]), we will consider predicting the target set from the context set. The measurable sets of $\Sigma$ are those which can be specified by the values of the function at a countable subset $I \subseteq \mathcal{X}$ of its input locations. Since in practice we only ever observe data at a finite number of points, this is sufficient for our purposes. Hence we may think of these stochastic processes as being defined by their finite-dimensional marginals. We now define what it means to condition on observations of the stochastic process $P$. Let $p(\mathbf{y} \mid X)$ denote the density with respect to Lebesgue measure of the finite-dimensional marginal of $P$ with index set $X$ (we assume these densities always exist). Strictly speaking, this is non-standard terminology, since $P$ is the law of a stochastic process.
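To make this definition concrete, conditioning $P$ on context observations can be written as a ratio of finite-dimensional marginal densities. The following is a minimal sketch in the notation above; the subscripts $c$ and $t$, marking context and target quantities, are introduced here for illustration rather than fixed by the text:

```latex
% Predictive density of target outputs y_t at inputs X_t, given
% context observations (X_c, y_c), expressed via the finite
% marginals of P. [X_c, X_t] denotes the concatenated index set.
\[
  p(\mathbf{y}_t \mid X_t, X_c, \mathbf{y}_c)
  = \frac{p(\mathbf{y}_c, \mathbf{y}_t \mid [X_c, X_t])}
         {p(\mathbf{y}_c \mid X_c)} .
\]
```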
Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes
Andrew Y. K. Foong, Wessel P. Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, Richard E. Turner
Stationary stochastic processes (SPs) are a key component of many probabilistic models, such as those for off-the-grid spatio-temporal data. They enable the statistical symmetry of underlying physical phenomena to be leveraged, thereby aiding generalization. Prediction in such models can be viewed as a translation equivariant map from observed data sets to predictive SPs, emphasizing the intimate relationship between stationarity and equivariance. Building on this, we propose the Convolutional Neural Process (ConvNP), which endows Neural Processes (NPs) with translation equivariance and extends convolutional conditional NPs to allow for dependencies in the predictive distribution. The latter enables ConvNPs to be deployed in settings which require coherent samples, such as Thompson sampling or conditional image completion. Moreover, we propose a new maximum-likelihood objective to replace the standard ELBO objective in NPs, which conceptually simplifies the framework and empirically improves performance. We demonstrate the strong performance and generalization capabilities of ConvNPs on 1D regression, image completion, and various tasks with real-world spatio-temporal data.
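The maximum-likelihood objective mentioned above can be estimated by Monte Carlo. The sketch below (PyTorch; the `encoder`/`decoder` interfaces, names, and tensor shapes are our assumptions for illustration, not the authors' released code) shows the log-mean-exp estimator over latent samples that such an objective typically uses:

```python
import math
import torch

def convnp_ml_objective(encoder, decoder, x_c, y_c, x_t, y_t, num_samples=16):
    """Monte Carlo estimate of a ConvNP-style maximum-likelihood objective.

    Estimates log E_{z ~ q(z | context)}[ p(y_t | z, x_t) ] with a
    log-mean-exp over latent samples: a biased (lower-bounding) but
    consistent estimator that improves as num_samples grows.
    """
    # q(z | context): distribution over a latent function, e.g. produced by
    # a translation-equivariant (ConvCNP-style) encoder. Assumed to return
    # a torch.distributions.Distribution supporting rsample().
    qz = encoder(x_c, y_c)
    z = qz.rsample((num_samples,))          # leading dim: K latent samples

    # p(y_t | z, x_t): decode each latent sample into a predictive
    # distribution over the target outputs (e.g. a heteroscedastic Gaussian).
    pred = decoder(z, x_t)
    log_p = pred.log_prob(y_t).sum(dim=-1)  # joint log-density per sample, shape [K]

    # log (1/K) sum_k p(y_t | z_k, x_t), computed stably.
    return torch.logsumexp(log_p, dim=0) - math.log(num_samples)
```

Training would maximize this quantity averaged over sampled tasks. By contrast, the standard NP ELBO additionally requires an approximate-posterior encoder applied to the target set during training, which is the extra machinery the maximum-likelihood objective dispenses with.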