Toward Unlimited Self-Learning MCMC with Parallel Adaptive Annealing
Ichikawa, Yuma; Nakagawa, Akira; Hiromoto, Masayuki; Umeda, Yuhei
The efficiency of Markov chain Monte Carlo (MCMC) methods strongly depends on the choice of the proposal distribution. Self-learning Monte Carlo (SLMC) methods have recently been proposed to accelerate MCMC using a machine learning model. With latent generative models, SLMC methods realize efficient Monte Carlo updates with less autocorrelation. However, SLMC methods are difficult to apply directly to multimodal distributions, for which training data are difficult to obtain. To overcome this limitation, we propose "parallel adaptive annealing," which enables SLMC methods to be applied directly to multimodal distributions by gradually training the proposal while annealing the target distribution. Parallel adaptive annealing is based on (i) sequential learning with annealing to inherit and update the model parameters, (ii) adaptive annealing to automatically detect under-learning, and (iii) parallel annealing to generate training data covering multiple modes by mixing between multiple chains.

Among recent advances in machine learning, a general method called the self-learning Monte Carlo (SLMC) method (Liu et al., 2017) was introduced to accelerate MCMC simulations via an automated proposal built with a machine learning model, and it has been applied to various problems (Xu et al., 2017; Shen et al., 2018). In particular, a latent generative model realizes efficient global updates through its information-rich latent representation (Huang & Wang, 2017; Albergo et al., 2019; Monroe & Shen, 2022; Tanaka & Tomiya, 2017). Although powerful, the performance of an SLMC simulation strongly depends on the machine-learning-based proposal and on the quality of the training data used to train it. For example, it is challenging to use SLMC directly for multimodal distributions because obtaining accurate training data covering all modes is difficult.
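To make the three ingredients concrete, the following is a minimal, self-contained sketch of an annealed self-learning sampler on a toy double-well target. Everything in it is an illustrative assumption rather than the paper's implementation: the learned latent generative model (a VAE in the paper) is replaced by a Gaussian independence proposal refitted to recent samples, and the beta ladder, walker count, retraining budget, and the 0.3 acceptance threshold used to flag under-learning are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Toy double-well energy with modes at x = +/-1; p_beta(x) ~ exp(-beta * E(x))."""
    return (x ** 2 - 1.0) ** 2

def log_q(x, mu, sigma):
    """Log-density (up to a constant) of the Gaussian stand-in for the learned proposal."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

def mh_sweep(x, beta, mu, sigma, steps=25):
    """Independence Metropolis-Hastings with proposal q = N(mu, sigma^2).
    Acceptance ratio: p_beta(y) q(x) / (p_beta(x) q(y)). Returns walkers + acc. rate."""
    acc = 0.0
    for _ in range(steps):
        y = rng.normal(mu, sigma, size=x.shape)
        log_a = beta * (energy(x) - energy(y)) + log_q(x, mu, sigma) - log_q(y, mu, sigma)
        keep = np.log(rng.random(size=x.shape)) < log_a
        x = np.where(keep, y, x)
        acc += keep.mean()
    return x, acc / steps

betas = np.linspace(0.05, 1.0, 8)      # assumed annealing ladder (flat -> target)
n_walkers = 64                         # parallel chains kept at each beta level
xs = rng.normal(scale=2.0, size=(len(betas), n_walkers))
mu, sigma = 0.0, 2.0                   # initial proposal fit at the flattest level

for k, beta in enumerate(betas):
    if k > 0:
        xs[k] = xs[k - 1]              # (i) sequential learning: inherit both the
                                       #     samples and (mu, sigma) from the easier level
    for _ in range(30):                # bounded retraining budget at this level
        xs[k], acc = mh_sweep(xs[k], beta, mu, sigma)
        mu, sigma = xs[k].mean(), xs[k].std() + 1e-3   # refit the "model" on new samples
        if acc > 0.3:                  # (ii) adaptive annealing: low acceptance flags
            break                      #      under-learning, so retrain before advancing
    if k > 0:                          # (iii) parallel annealing: replica-exchange swap
        log_s = (beta - betas[k - 1]) * (energy(xs[k]) - energy(xs[k - 1]))
        swap = np.log(rng.random(n_walkers)) < log_s
        xs[k - 1][swap], xs[k][swap] = xs[k][swap], xs[k - 1][swap]

print(f"fitted proposal at beta=1: mu={mu:.2f}, sigma={sigma:.2f}")
print(f"walkers near each mode: {np.mean(xs[-1] > 0):.2f} / {np.mean(xs[-1] < 0):.2f}")
```

The intuition the sketch tries to capture is that chains at small beta hop between modes easily, and the exchange moves between adjacent beta levels feed those mode-covering states upward, so the proposal refitted at each level sees training data from all modes rather than from whichever mode the chain happened to start in.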
Sep-19-2023