Convergence of stochastic gradient descent on parameterized sphere with applications to variational Monte Carlo simulation

Nilin Abrahamsen, Zhiyan Ding, Gil Goldshlager, Lin Lin

arXiv.org (Artificial Intelligence)

We analyze stochastic gradient descent (SGD)-type algorithms on a high-dimensional sphere that is parameterized by a neural network up to a normalization constant. We propose a new algorithm for the supervised learning setting and demonstrate its convergence both theoretically and numerically. We also provide the first proof of convergence for the unsupervised setting, which corresponds to the widely used variational Monte Carlo (VMC) method in quantum physics.
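Since the abstract only sketches the VMC setting, the following is a minimal, self-contained illustration of the kind of SGD loop it refers to: minimizing an expected energy over a density |psi_theta|^2 that is known only up to normalization, with samples drawn by Markov chain Monte Carlo and the gradient estimated via the centered score-function formula. Everything here is a hypothetical stand-in, not the authors' method or code: the linear-in-parameters log-amplitude, the placeholder local energy, and the names `log_amp`, `local_energy`, and `metropolis_samples` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8                       # toy parameter dimension
theta = rng.normal(size=dim)  # network parameters (here: linear model)

def features(x):
    # Hypothetical feature map standing in for a neural-network ansatz.
    return np.array([np.cos((k + 1) * x) for k in range(dim)])

def log_amp(theta, x):
    # log |psi_theta(x)|; the normalization constant is never needed.
    return features(x) @ theta

def local_energy(x):
    # Placeholder energy. In true VMC this would be (H psi)(x) / psi(x);
    # for a theta-independent f(x) the same centered gradient formula
    # below still gives the exact gradient of E_{|psi|^2}[f].
    return x ** 2

def metropolis_samples(theta, n, step=0.5, burn=100):
    # Random-walk Metropolis targeting |psi_theta|^2 = exp(2 * log_amp),
    # which requires only log-amplitude differences, not normalization.
    x, out = 0.0, []
    for t in range(burn + n):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < 2 * (log_amp(theta, prop) - log_amp(theta, x)):
            x = prop
        if t >= burn:
            out.append(x)
    return np.array(out)

lr = 0.01
for _ in range(200):
    xs = metropolis_samples(theta, n=256)
    e_loc = np.array([local_energy(x) for x in xs])
    scores = np.array([features(x) for x in xs])  # grad_theta log|psi|
    # Centered VMC-style estimator: grad E = 2 E[(E_loc - mean) grad log|psi|].
    grad = 2 * ((e_loc - e_loc.mean())[:, None] * scores).mean(axis=0)
    theta -= lr * grad  # plain SGD step on the unnormalized parameterization
```

Centering the local energy by its sample mean leaves the estimator unbiased while reducing its variance; this is the standard control-variate trick in VMC and is the natural baseline against which convergence analyses of such SGD loops are stated.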
