Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space

Nikolas Nüsken, Lorenz Richter

arXiv.org Machine Learning 

Hamilton-Jacobi-Bellman partial differential equations (HJB-PDEs) are of central importance in applied mathematics. Rooted in reformulations of classical mechanics [45] in the nineteenth century, they nowadays form the backbone of (stochastic) optimal control theory [81, 115], having a profound impact on neighbouring fields such as optimal transportation [109, 110], mean field games [20], backward stochastic differential equations (BSDEs) [19] and large deviations [39]. Applications in science and engineering abound; examples include stochastic filtering and data assimilation [79, 95], the simulation of rare events in molecular dynamics [51, 54, 119], and nonconvex optimisation [24]. Many of these applications involve HJB-PDEs in high-dimensional or even infinite-dimensional state spaces, posing a formidable challenge for their numerical treatment and in particular rendering grid-based schemes infeasible. In recent years, approaches to approximating the solutions of high-dimensional elliptic and parabolic PDEs have been developed that combine well-known Feynman-Kac formulae with machine learning methodologies, seeking scalability and robustness in high-dimensional and complex scenarios [50, 111]. Crucially, the use of artificial neural networks offers the promise of accurate and efficient function approximation which, in conjunction with Monte Carlo methods, can beat the curse of dimensionality, as investigated in [5, 25, 49, 60].
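To make the role of Feynman-Kac formulae concrete, the following Python sketch (not taken from the paper; the function name `feynman_kac_estimate`, the Ornstein-Uhlenbeck drift, the quadratic terminal cost and the Euler-Maruyama discretisation are assumptions chosen for illustration) estimates the solution of a linear parabolic PDE at a single point by averaging over simulated diffusion paths. The learning-based approaches referenced above replace such pointwise Monte Carlo estimates with a neural network trained to represent the solution over the whole domain.

```python
import numpy as np

# Illustrative sketch: the Feynman-Kac formula links the linear parabolic PDE
#     d_t V + b . grad V + (sigma^2 / 2) Laplacian V = 0,   V(T, x) = g(x),
# to the diffusion dX = b(X) dt + sigma dW via V(t, x) = E[ g(X_T) | X_t = x ].
# A plain Monte Carlo estimator averages g over Euler-Maruyama paths started at x.

def feynman_kac_estimate(x0, g, b, sigma, t, T, n_paths=10_000, n_steps=100, rng=None):
    """Monte Carlo estimate of V(t, x0) = E[g(X_T) | X_t = x0]."""
    rng = np.random.default_rng() if rng is None else rng
    dt = (T - t) / n_steps
    x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))   # shape (n_paths, d)
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)      # Brownian increments
        x = x + b(x) * dt + sigma * dw                        # Euler-Maruyama step
    return g(x).mean()

# Example (assumed for illustration): Ornstein-Uhlenbeck drift b(x) = -x and
# terminal cost g(x) = |x|^2 in dimension d = 50.
d = 50
estimate = feynman_kac_estimate(
    x0=np.zeros(d),
    g=lambda x: np.sum(x**2, axis=1),
    b=lambda x: -x,
    sigma=1.0,
    t=0.0,
    T=1.0,
)
print(f"Monte Carlo estimate of V(0, 0): {estimate:.3f}")
```

Because the estimator only requires sampling paths of the underlying diffusion, its cost grows with the number of paths and time steps rather than with a grid resolution, which is the property the neural-network-based schemes exploit in high dimensions.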
