 Halder, Abhishek


Convex and Nonconvex Sublinear Regression with Application to Data-driven Learning of Reach Sets

arXiv.org Artificial Intelligence

We consider estimating a compact set from finite data by approximating the support function of that set via sublinear regression. Support functions uniquely characterize a compact set up to the closure of its convex hull, and are sublinear (convex as well as positively homogeneous of degree one). Conversely, any sublinear function is the support function of some compact set. We leverage this property to transcribe the task of learning a compact set to that of learning its support function. We propose two algorithms to perform the sublinear regression, one via convex and another via nonconvex programming. The convex programming approach involves solving a quadratic program (QP). The nonconvex programming approach involves training an input sublinear neural network. We illustrate the proposed methods via numerical examples on learning the reach sets of controlled dynamics subject to set-valued input uncertainties from trajectory data.
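
The sublinearity property is easy to verify numerically. Below is a minimal Python sketch (illustrative only, not the paper's QP or neural-network algorithms) that evaluates the empirical support function h(y) = max_j <y, x_j> of a finite point cloud, which is exactly the support function of the cloud's convex hull, and checks positive homogeneity and subadditivity on random directions; the data and sample sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative point cloud standing in for trajectory samples of a set.
X = rng.normal(size=(200, 2))

def support(Y, X):
    """Empirical support function h(y) = max_j <y, x_j>, one value per row of Y.

    This is the exact support function of the convex hull of the rows of X,
    hence sublinear: positively homogeneous of degree one and subadditive.
    """
    return (Y @ X.T).max(axis=1)

# Numerical check of sublinearity on random directions.
Y1 = rng.normal(size=(50, 2))
Y2 = rng.normal(size=(50, 2))
lam = 3.7  # any nonnegative scalar

homogeneous = np.allclose(support(lam * Y1, X), lam * support(Y1, X))
subadditive = np.all(support(Y1 + Y2, X) <= support(Y1, X) + support(Y2, X) + 1e-9)
print(homogeneous, subadditive)  # expected: True True
```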


A Physics-informed Deep Learning Approach for Minimum Effort Stochastic Control of Colloidal Self-Assembly

arXiv.org Artificial Intelligence

We propose formulating the finite-horizon stochastic optimal control problem for colloidal self-assembly in the space of probability density functions (PDFs) of the underlying state variables (namely, order parameters). The control objective is formulated in terms of steering the state PDFs from a prescribed initial probability measure towards a prescribed terminal probability measure with minimum control effort. For specificity, we use a univariate stochastic state model from the literature. Both the analysis and the computational steps for control synthesis as developed in this paper generalize to multivariate stochastic state dynamics given by generic models that are nonlinear in the state and non-affine in the control. We derive the conditions of optimality for the associated optimal control problem. This derivation yields a system of three coupled partial differential equations together with the boundary conditions at the initial and terminal times. The resulting system is a generalized instance of the so-called Schrödinger bridge problem. We then determine the optimal control policy by training a physics-informed deep neural network, where the "physics" are the derived conditions of optimality. The performance of the proposed solution is demonstrated via numerical simulations on a benchmark colloidal self-assembly problem.
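
In the paper, the "physics" loss comes from the derived optimality conditions, which are not reproduced in this abstract. As a generic illustration of the physics-informed training idea, the sketch below penalizes the residual of a toy 1D heat equation u_t = u_xx with a small PyTorch network; the architecture, toy PDE, and collocation scheme are placeholder assumptions, not the paper's system.

```python
import torch

torch.manual_seed(0)

# Small fully connected network mapping (t, x) -> u(t, x).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(t, x):
    """Residual of the toy PDE u_t - u_xx = 0 via automatic differentiation."""
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx

for step in range(1000):
    # Random collocation points in the space-time domain; boundary/terminal
    # losses (the analogue of the endpoint probability measures) are omitted.
    t = torch.rand(256, 1, requires_grad=True)
    x = torch.rand(256, 1, requires_grad=True)
    loss = pde_residual(t, x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```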


Global Convergence of Second-order Dynamics in Two-layer Neural Networks

arXiv.org Artificial Intelligence

Recent results have shown that for two-layer fully connected neural networks, gradient flow converges to a global optimum in the infinite width limit, by making a connection between the mean field dynamics and the Wasserstein gradient flow. These results were derived for first-order gradient flow, and a natural question is whether second-order dynamics, i.e., dynamics with momentum, exhibit a similar guarantee. We show that the answer is positive for the heavy ball method. In this case, the resulting integro-PDE is a nonlinear kinetic Fokker-Planck equation, and unlike the first-order case, it has no apparent connection with the Wasserstein gradient flow. Instead, we study the variations of a Lyapunov functional along the solution trajectories to characterize the stationary points and to prove convergence. While our results are asymptotic in the mean field limit, numerical simulations indicate that global convergence may already occur for reasonably small networks.
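
For reference, the heavy ball method is gradient descent with a velocity (momentum) term: v <- beta*v - eta*grad L(theta), theta <- theta + v. A minimal sketch, assuming a toy two-layer tanh network with mean-field 1/m output scaling and synthetic data (all sizes, data, and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: input dim d, hidden width m, n samples (illustrative values).
d, m, n = 5, 200, 100
X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=d))   # synthetic regression targets

W = rng.normal(size=(m, d))           # first-layer weights
a = rng.normal(size=m)                # output weights
vW, va = np.zeros_like(W), np.zeros_like(a)
eta, beta = 0.1, 0.9                  # step size and momentum

for step in range(2000):
    H = np.tanh(X @ W.T)              # hidden activations, shape (n, m)
    pred = H @ a / m                  # mean-field scaling 1/m
    err = pred - y
    # Gradients of the loss 0.5 * mean(err**2).
    ga = H.T @ err / (n * m)
    gW = ((err[:, None] * (1 - H**2) * a).T @ X) / (n * m)
    # Heavy ball: velocity update, then position update.
    va = beta * va - eta * ga
    vW = beta * vW - eta * gW
    a += va
    W += vW
    if step % 500 == 0:
        print(step, 0.5 * np.mean(err**2))
```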