Coullon, Jeremie
BlackJAX: Composable Bayesian inference in JAX
Cabezas, Alberto, Corenflos, Adrien, Lao, Junpeng, Louf, Rémi, Carnec, Antoine, Chaudhari, Kaustubh, Cohn-Gordon, Reuben, Coullon, Jeremie, Deng, Wei, Duffield, Sam, Durán-Martín, Gerardo, Elantkowski, Marcin, Foreman-Mackey, Dan, Gregori, Michele, Iguaran, Carlos, Kumar, Ravin, Lysy, Martin, Murphy, Kevin, Orduz, Juan Camilo, Patel, Karm, Wang, Xi, Zinkov, Rob
BlackJAX is a library implementing sampling and variational inference algorithms commonly used in Bayesian computation. It is designed for ease of use, speed, and modularity by taking a functional approach to the algorithms' implementation. BlackJAX is written in Python, using JAX to compile and run NumPy-like samplers and variational methods on CPUs, GPUs, and TPUs. The library integrates well with probabilistic programming languages by working directly with the (un-normalized) target log density function. BlackJAX is intended as a collection of low-level, composable implementations of basic statistical 'atoms' that can be combined to perform well-defined Bayesian inference, but it also provides high-level routines for ease of use. It is designed for users who need cutting-edge methods, researchers who want to create complex sampling methods, and people who want to learn how these methods work.
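As an illustration of this log-density-based interface, here is a minimal sketch of sampling a standard normal with the library's NUTS kernel. Keyword names follow recent BlackJAX releases (older versions used logprob_fn instead of logdensity_fn), so treat the exact signatures as an assumption rather than a fixed API.

```python
import jax
import jax.numpy as jnp
import blackjax

# The user supplies only the (un-normalized) target log density.
def logdensity_fn(x):
    return -0.5 * jnp.sum(x ** 2)

rng_key = jax.random.PRNGKey(0)

# Build a NUTS sampler; hyperparameters are set explicitly here,
# though BlackJAX also ships adaptation routines to tune them.
nuts = blackjax.nuts(logdensity_fn, step_size=0.1, inverse_mass_matrix=jnp.ones(2))

state = nuts.init(jnp.zeros(2))
step = jax.jit(nuts.step)  # the transition kernel is a pure function, so it JIT-compiles

for _ in range(1_000):
    rng_key, subkey = jax.random.split(rng_key)
    state, info = step(subkey, state)
```

Because init and step are pure functions of their inputs, the same kernel can be vmapped over chains or swapped into a custom sampling loop, which is the composability the abstract describes.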
Stochastic Gradient MCMC with Multi-Armed Bandit Tuning
Coullon, Jeremie, South, Leah, Nemeth, Christopher
Most MCMC algorithms contain user-controlled hyperparameters that need to be carefully selected to ensure that the MCMC algorithm explores the posterior distribution efficiently. Optimal tuning rules for many popular MCMC algorithms, such as the random-walk (Gelman et al., 1997) or Metropolis-adjusted Langevin (Roberts and Rosenthal, 1998) algorithms, rely on setting the tuning parameters according to the Metropolis-Hastings acceptance rate. Using metrics such as the acceptance rate, hyperparameters can be optimized on-the-fly within the MCMC algorithm using adaptive MCMC (Andrieu and Thoms, 2008; Vihola, 2012). However, in the context of stochastic gradient MCMC (SGMCMC) there is no acceptance rate to tune against, and the trade-off between bias and variance under a fixed computational budget means that tuning approaches designed for target-invariant MCMC algorithms are not applicable.
Related work: Previous adaptive SGMCMC algorithms have focused on embedding ideas from the optimization literature within the SGMCMC framework, e.g.
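For context, a minimal sketch (not from the paper) of the standard SGLD update makes the tuning problem concrete: the step size enters both the drift and the injected noise, and with no Metropolis-Hastings correction there is no acceptance rate against which to adapt it. The function name and signature below are illustrative only.

```python
import jax
import jax.numpy as jnp

def sgld_step(rng_key, position, grad_logpost_estimate, step_size):
    """One stochastic gradient Langevin dynamics (SGLD) update.

    grad_logpost_estimate: an unbiased minibatch estimate of the
    gradient of the log-posterior at `position`.
    step_size: the hyperparameter at issue; too large inflates the
    discretization bias, too small inflates the variance for a fixed
    computational budget.
    """
    noise = jax.random.normal(rng_key, shape=position.shape)
    return (position
            + 0.5 * step_size * grad_logpost_estimate
            + jnp.sqrt(step_size) * noise)
```

Since every proposal is accepted unconditionally, any adaptation scheme must score step sizes by some other criterion, which is the gap the multi-armed bandit approach in the title addresses.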