

Sample Adaptive MCMC

Zhu, Michael

Neural Information Processing Systems

For MCMC methods like Metropolis-Hastings, tuning the proposal distribution is important in practice for effective sampling from the target distribution \pi. In this paper, we present Sample Adaptive MCMC (SA-MCMC), an MCMC method based on a reversible Markov chain for \pi^{\otimes N} that uses an adaptive proposal distribution based on the current state of N points, together with a sequential substitution procedure that requires one new likelihood evaluation and updates at most one point per iteration. The SA-MCMC proposal distribution automatically adapts within its parametric family to best approximate the target distribution, so, in contrast to many existing MCMC methods, SA-MCMC requires no tuning of the proposal distribution. Instead, SA-MCMC only requires specifying the initial state of N points, which can often be chosen a priori, thereby automating the entire sampling procedure. Experimental results demonstrate the fast adaptation and effective sampling of SA-MCMC.
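The mechanism the abstract describes can be illustrated with a minimal, simplified sketch. Note the hedges: this is not the paper's exact algorithm. It fits a Gaussian proposal to the current N points, draws one candidate per iteration (one new likelihood evaluation), and removes one of the resulting N+1 points with probability proportional to q/\pi, so at most one point changes per iteration. The exact SA-MCMC substitution rule uses leave-one-out proposal fits, which are omitted here for brevity; the function names, the example target, and the Gaussian proposal family are illustrative choices, not from the paper.

```python
import numpy as np

def log_target(x):
    # Example target: unnormalized log-density of a standard 2-D Gaussian.
    return -0.5 * np.sum(x**2)

def sa_mcmc_sketch(log_pi, x0, iters=5000, rng=None):
    """Simplified illustration of the Sample Adaptive MCMC idea.

    x0: (N, d) array giving the initial state of N points.
    CAVEAT: the true SA-MCMC substitution rule refits the proposal
    leaving out each candidate point; here a single fit to all N
    points is reused for every removal probability, for brevity.
    """
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    N, d = x.shape
    log_pi_x = np.array([log_pi(xi) for xi in x])
    means = []
    for _ in range(iters):
        # Fit the parametric proposal (here: Gaussian) to the current N points.
        mu = x.mean(axis=0)
        cov = np.cov(x.T) + 1e-6 * np.eye(d)
        y = rng.multivariate_normal(mu, cov)   # one proposed point
        log_pi_y = log_pi(y)                   # one new likelihood evaluation
        pts = np.vstack([x, y])
        log_pis = np.append(log_pi_x, log_pi_y)
        # Proposal log-density at all N+1 points (shared normalizer cancels).
        diffs = pts - mu
        inv = np.linalg.inv(cov)
        log_q = -0.5 * np.einsum('ij,jk,ik->i', diffs, inv, diffs)
        # Remove one of the N+1 points with probability proportional to q/pi;
        # if the removed point is y itself (j == N), the state is unchanged.
        w = log_q - log_pis
        w = np.exp(w - w.max())
        j = rng.choice(N + 1, p=w / w.sum())
        if j < N:                              # y replaces point j
            x[j], log_pi_x[j] = y, log_pi_y
        means.append(x.mean(axis=0).copy())
    return x, np.array(means)
```

Because the proposal is refit to the point cloud at every iteration, it drifts toward the target's location and scale without any user-specified step size, which is the "no tuning of the proposal distribution" property the abstract emphasizes.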


Reviews: Sample Adaptive MCMC

Neural Information Processing Systems

EDIT: After reading the author's rebuttal, I changed my assessment of the paper to an accept. The paper is well written and does a good job of explaining the intuition behind the proposed algorithm. I appreciated the inclusion of the low-dimensional toy example, as it illustrates the adaptability of the algorithm in a simple and clear manner. My main concern with the proposed algorithm is that, in my opinion, it is suitable only for low-dimensional problems. The provided examples reinforce this impression, given that the posterior distributions being sampled are of reduced dimension. Consequently, I have a hard time seeing broad interest from the ML community in the proposed sampling algorithm, given its seemingly limited scope.


Reviews: Sample Adaptive MCMC

Neural Information Processing Systems

The authors should be commended for writing and submitting an interesting paper on an important topic: adaptive MCMC with an "active" set of samples used to estimate a proposal distribution online. The theoretical contribution on ergodicity outweighs the deficiencies noted by the reviewers. It would be great if the authors, when preparing the camera-ready version, heeded the reviewers' advice, particularly as it relates to covering related work.



