Variance reduction for Random Coordinate Descent-Langevin Monte Carlo
Neural Information Processing Systems
Sampling from a log-concave distribution is a core problem with wide applications in Bayesian statistics and machine learning. While most gradient-free methods have slow convergence rates, Langevin Monte Carlo (LMC) converges fast but requires the computation of gradients. In practice one uses finite-difference approximations as surrogates, which makes the method expensive in high dimensions. A natural strategy to reduce the per-iteration computational cost is to use random gradient approximations, such as random coordinate descent (RCD) or simultaneous perturbation stochastic approximation (SPSA). We show by a counterexample that blindly applying RCD does not achieve this goal in the most general setting.
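For intuition, here is a minimal sketch of the naive RCD-LMC update the abstract alludes to: at each iteration a single coordinate of the gradient is estimated by finite differences and, rescaled by the dimension, used as an unbiased surrogate for the full gradient in the overdamped Langevin step. The function and parameter names (`rcd_lmc`, `f`, `fd_eps`) are illustrative, not from the paper; the paper's point is precisely that this vanilla variant need not be cheaper than full-gradient LMC in general.

```python
import numpy as np

def rcd_lmc(f, x0, step, n_iters, fd_eps=1e-5, rng=None):
    """Sketch of vanilla RCD-LMC (illustrative, not the paper's method).

    f : potential (negative log-density), evaluated gradient-free.
    At each step, one randomly chosen partial derivative is estimated
    by central finite differences; scaling it by the dimension d gives
    an unbiased estimator of the full gradient for the Langevin update.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(n_iters):
        i = rng.integers(d)                  # pick a coordinate uniformly
        e = np.zeros(d)
        e[i] = 1.0
        # central finite difference along coordinate i (2 function calls)
        partial = (f(x + fd_eps * e) - f(x - fd_eps * e)) / (2 * fd_eps)
        grad_est = d * partial * e           # unbiased gradient surrogate
        # overdamped Langevin step with the surrogate gradient
        x = x - step * grad_est + np.sqrt(2.0 * step) * rng.standard_normal(d)
    return x

# Example: sample (approximately) from a standard Gaussian, f(x) = ||x||^2 / 2.
if __name__ == "__main__":
    sample = rcd_lmc(lambda x: 0.5 * np.dot(x, x), np.ones(10), step=1e-2, n_iters=5000)
    print(sample)
```

Note that each iteration costs only two function evaluations instead of the 2d needed for a full finite-difference gradient, but the rescaling by d inflates the variance of the gradient estimate, which is the effect the counterexample exploits.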