
Sparse Weak-Form Discovery of Stochastic Generators

A, Eshwar R., Honnavar, Gajanan V.

arXiv.org Machine Learning

The proposed algorithm provides a novel data-driven framework for the discovery of stochastic differential equations (SDEs) by applying the weak formulation to stochastic SINDy. The weak formulation yields a noise-robust methodology that avoids the traditional computation of noisy derivatives via finite differences. A further novelty is the adoption of spatial Gaussian test functions in place of temporal test functions: the kernel weight $K_j(X_{t_n})$ guarantees unbiasedness in expectation and prevents the structural regression bias that otherwise arises with temporal test functions. The framework converts the SDE identification problem into two SINDy-based linear sparse identification problems. We validate the algorithm on three SDEs, recovering all active non-linear terms with coefficient errors below 4%, stationary-density total-variation distances below 0.01, and autocorrelation functions that faithfully reproduce the true relaxation timescales across all three benchmarks.
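The general recipe the abstract describes (kernel-weighted moment equations instead of pointwise derivatives, followed by sparse regression) can be sketched on a 1-D Ornstein-Uhlenbeck benchmark. Everything below is an illustrative assumption rather than the paper's implementation: the OU test problem, the grid of Gaussian test-function centers, the candidate library, and the STLSQ threshold are all choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
# via Euler-Maruyama; theta, sigma, dt, n are illustrative choices.
theta, sigma, dt, n = 1.0, 1.0, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
noise = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + noise[i]

dx = np.diff(x) / dt            # raw, noisy drift samples
xs = x[:-1]

# Spatial Gaussian test functions K_j centered on a grid (hypothetical choice)
centers = np.linspace(-1.5, 1.5, 25)
h = 0.3
K = np.exp(-0.5 * ((xs[None, :] - centers[:, None]) / h) ** 2)

# Candidate library of drift terms: [1, x, x^2, x^3]
lib = np.vstack([np.ones_like(xs), xs, xs**2, xs**3])

# Kernel-weighted moment equations:
#   <K_j, dX/dt> = sum_k c_k <K_j, phi_k(X)>
# so the pointwise (noisy) derivative never enters individually.
b = K @ dx
A = K @ lib.T
coef = np.linalg.lstsq(A, b, rcond=None)[0]

# Sequentially thresholded least squares (STLSQ-style) for sparsity
for _ in range(5):
    small = np.abs(coef) < 0.2
    coef[small] = 0.0
    active = ~small
    if active.any():
        coef[active] = np.linalg.lstsq(A[:, active], b, rcond=None)[0]

print(coef)   # the linear term should dominate, with coef[1] close to -theta
```

Averaging the increments against each spatial kernel before regressing is what makes the estimate tolerant of the large pointwise noise in `dx`; a second, analogous regression on squared increments (not shown) would identify the diffusion term.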





Neural Information Processing Systems

The problem (1) with $\mu_y > 0$ is called a weakly convex-strongly concave (WCSC) saddle-point problem, whereas for $\mu_y = 0$ it is called a weakly convex-merely concave (WCMC) saddle-point problem.
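As a concrete illustration (not taken from the excerpt, and assuming the standard definitions where $f(\cdot, y)$ is weakly convex and $f(x, \cdot)$ is $\mu_y$-strongly concave), a canonical instance of such a problem is
$$\min_x \max_y \; f(x, y), \qquad f(x, y) = \psi(x) + \langle y, Ax - b \rangle - \frac{\mu_y}{2}\|y\|^2,$$
with $\psi$ weakly convex (e.g., a smooth non-convex loss); taking $\mu_y > 0$ gives the WCSC case, and setting $\mu_y = 0$ reduces it to the WCMC case.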


Is There an Analog of Nesterov Acceleration for MCMC?

Ma, Yi-An, Chatterji, Niladri, Cheng, Xiang, Flammarion, Nicolas, Bartlett, Peter, Jordan, Michael I.

arXiv.org Machine Learning

While optimization methodology has provided much of the underlying algorithmic machinery that has driven the theory and practice of machine learning in recent years, sampling-based methodology, in particular Markov chain Monte Carlo (MCMC), remains of critical importance, given its role in linking algorithms to statistical inference and, in particular, its ability to provide notions of confidence that are lacking in optimization-based methodology. However, the classical theory of MCMC is largely asymptotic and has not developed as rapidly in recent years as the theory of optimization. Recently, however, a literature has emerged that derives nonasymptotic rates for MCMC algorithms [see, e.g., 9, 12, 10, 8, 6, 14, 21, 22, 2, 5]. This work has explicitly aimed at making use of ideas from optimization; in particular, whereas the classical literature on MCMC focused on reversible Markov chains, the recent literature has focused on nonreversible stochastic processes that are built on gradients [see, e.g., 18, 20, 3, 1]. In particular, the gradient-based Langevin algorithm [33, 32, 13] has been shown to be a form of gradient descent on the space of probability measures [see, e.g., 36]. What has not yet emerged is an analog of acceleration. Recall that the notion of acceleration has played a key role in gradient-based optimization methods [26]. In particular, the Nesterov accelerated gradient descent (AGD) method, an instance of the general family of "momentum methods," achieves a provably faster convergence rate than gradient descent (GD) in a variety of settings [25]. Moreover, it achieves the optimal convergence rate under an oracle model of optimization complexity in the convex setting [24].
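The acceleration phenomenon the abstract refers to is easy to see on an ill-conditioned quadratic, where Nesterov's strongly convex variant of AGD converges at rate $(1 - 1/\sqrt{\kappa})^k$ versus $(1 - 1/\kappa)^k$ for plain gradient descent. The problem instance, constants, and iteration count below are illustrative choices for this sketch, not from the paper:

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x @ A @ x, minimized at the origin.
A = np.diag([1.0, 100.0])
L, mu = 100.0, 1.0              # smoothness and strong-convexity constants
kappa = L / mu                  # condition number = 100

def grad(x):
    return A @ x

x0 = np.array([1.0, 1.0])

# Plain gradient descent with the standard step size 1/L
x = x0.copy()
for _ in range(200):
    x = x - grad(x) / L
gd_err = np.linalg.norm(x)

# Nesterov AGD for strongly convex functions:
# momentum coefficient (sqrt(kappa) - 1) / (sqrt(kappa) + 1)
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
x, x_prev = x0.copy(), x0.copy()
for _ in range(200):
    y = x + beta * (x - x_prev)   # extrapolation ("look-ahead") step
    x_prev = x
    x = y - grad(y) / L           # gradient step at the extrapolated point
agd_err = np.linalg.norm(x)

print(gd_err, agd_err)   # AGD reaches a far smaller error in the same budget
```

In the same 200-iteration budget, the momentum term lets AGD shrink the slow eigendirection at roughly $0.9^k$ instead of GD's $0.99^k$; the open question posed by the paper is whether a comparable mechanism exists for Langevin-type samplers.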