Improving Gradient-guided Nested Sampling for Posterior Inference

Lemos, Pablo, Malkin, Nikolay, Handley, Will, Bengio, Yoshua, Hezaveh, Yashar, Perreault-Levasseur, Laurence

arXiv.org Machine Learning 

Gaussian noise was then added to produce noisy simulated data. Given these data, the log-posterior of a model (a pixelated image of the undistorted background source) can be computed by adding the log-likelihood and log-prior terms. Furthermore, since the forward model is perfectly linear (and known) and both the noise and the prior are Gaussian, the posterior is a high-dimensional Gaussian that can be calculated analytically, allowing us to compare samples drawn with GGNS against the analytic solution. Figure 2 compares the true image and its noise with those recovered by GGNS. We see that we recover both the correct image and the noise distribution. We emphasize that this is a uni-modal problem: the experiment's goal is to demonstrate that GGNS can sample in high dimensions (here, 256), such as images, and to test the agreement between its samples and an analytic baseline.
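The analytic baseline described above follows from the standard linear-Gaussian conjugacy result: for data y = A x + n with Gaussian noise n and a Gaussian prior on x, the posterior over x is Gaussian with a closed-form mean and covariance. The sketch below illustrates this with NumPy; the dimensions, the standard deviations `sigma` and `tau`, and the zero-mean isotropic prior are illustrative assumptions, not the paper's exact setup (the paper's source image is 256-dimensional).

```python
import numpy as np

# Hypothetical toy dimensions (the paper's experiment is 256-dimensional).
d_data, d_model = 32, 16
rng = np.random.default_rng(0)

A = rng.normal(size=(d_data, d_model))       # known linear forward operator
x_true = rng.normal(size=d_model)            # true (undistorted) source
sigma, tau = 0.1, 1.0                        # assumed noise / prior std devs
y = A @ x_true + sigma * rng.normal(size=d_data)  # noisy simulated data

# Analytic Gaussian posterior for a linear model with Gaussian noise and prior:
#   posterior precision = A^T A / sigma^2 + I / tau^2
#   posterior mean      = cov @ A^T y / sigma^2
precision = A.T @ A / sigma**2 + np.eye(d_model) / tau**2
cov = np.linalg.inv(precision)
mean = cov @ (A.T @ y) / sigma**2

# Samples from this analytic posterior are the baseline against which
# samples drawn by a nested sampler such as GGNS can be compared.
samples = rng.multivariate_normal(mean, cov, size=1000)
```

In the paper's experiment the same comparison is done in 256 dimensions, where the analytic posterior makes the agreement check exact rather than approximate.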
