On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case

Barkhagen, M., Chau, N. H., Moulines, É., Rásonyi, M., Sabanis, S., Zhang, Y.

arXiv.org Machine Learning 

Stochastic Gradient Langevin Dynamics (SGLD) combines a Robbins-Monro-type stochastic approximation algorithm with Langevin dynamics to perform data-driven stochastic optimization. In this paper, the SGLD method with a fixed step size $\lambda$ is considered for sampling from a logconcave target distribution $\pi$, known up to a normalisation factor. We assume that unbiased estimates of the gradient, computed from possibly dependent observations, are available. It is shown that, for all $\varepsilon>0$, the Wasserstein-$2$ distance of the $n$th iterate of the SGLD algorithm from $\pi$ is dominated by $c_1(\varepsilon)[\lambda^{1/2 - \varepsilon}+e^{-a\lambda n}]$ for appropriate constants $c_1(\varepsilon), a>0$.
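For intuition, here is a minimal Python sketch of the fixed-step SGLD recursion $\theta_{k+1} = \theta_k - \lambda H(\theta_k, X_k) + \sqrt{2\lambda}\,\xi_k$ with $\xi_k \sim N(0, I)$, where $H(\theta, x)$ is an unbiased estimate of $\nabla U(\theta)$ and $\pi(\theta) \propto e^{-U(\theta)}$. The function names (`sgld`, `grad_estimate`) and the toy Gaussian target are illustrative assumptions, not taken from the paper; the paper's setting additionally allows the observation stream to be dependent, whereas the toy stream below is i.i.d. for simplicity.

```python
import numpy as np

def sgld(grad_estimate, theta0, lam, data, rng=None):
    """Fixed-step SGLD over a stream of observations.

    Recursion: theta_{k+1} = theta_k - lam * H(theta_k, X_k)
                             + sqrt(2 * lam) * xi_k,   xi_k ~ N(0, I),
    where H(theta, x) is an unbiased estimate of grad U(theta) and the
    target is pi(theta) proportional to exp(-U(theta)).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.array(theta0, dtype=float)
    for x in data:
        xi = rng.standard_normal(theta.shape)  # injected Gaussian noise
        theta = theta - lam * grad_estimate(theta, x) + np.sqrt(2.0 * lam) * xi
    return theta

# Toy example (hypothetical): standard Gaussian target, U(theta) = |theta|^2 / 2,
# so grad U(theta) = theta; the observations x perturb the gradient with mean zero,
# giving an unbiased gradient estimate H(theta, x) = theta + 0.1 * x.
if __name__ == "__main__":
    lam, n = 1e-2, 20_000
    rng = np.random.default_rng(1)
    stream = (rng.standard_normal(2) for _ in range(n))  # i.i.d. here for simplicity
    sample = sgld(lambda th, x: th + 0.1 * x, np.zeros(2), lam, stream, rng)
    print(sample)  # approximately a draw from N(0, I) for small lam
```

Consistent with the bound quoted above, the smaller the step size $\lambda$, the closer the law of the iterate is to $\pi$ (the $\lambda^{1/2-\varepsilon}$ term), at the cost of a slower exponential forgetting of the initial condition (the $e^{-a\lambda n}$ term).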
