On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds
Ming Jin, Vanshaj Khattar, Harshal Kaushik, Bilgehan Sel, Ruoxi Jia
arXiv.org Artificial Intelligence
We study the expressibility and learnability of convex optimization solution functions and their multi-layer architectural extensions. The main results are: \emph{(1)} the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the $C^k$ smooth model class or certain restricted Sobolev spaces, and we characterize the rate-distortion; \emph{(2)} the approximation power is investigated from the viewpoint of regression error, where information about the target function is provided only through data observations; \emph{(3)} compositionality, in the form of a deep architecture with optimization as a layer, is shown to reconstruct some basic functions used in numerical analysis without error, which implies that \emph{(4)} a substantial reduction in rate-distortion can be achieved with a universal network architecture; and \emph{(5)} we derive statistical bounds on empirical covering numbers for LP/QP, as well as for a generic (possibly nonconvex) optimization problem, by exploiting tame geometry. Our results provide the \emph{first rigorous analysis of the approximation and learning-theoretic properties of solution functions}, with implications for algorithmic design and performance guarantees.
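To make the notion of a "solution function" concrete: a classic instance of result \emph{(3)} is that the ReLU function is exactly the solution function of a simple QP, namely Euclidean projection onto the nonnegative orthant. Below is a minimal sketch of this correspondence in Python; the use of the cvxpy library and the name `qp_solution_function` are illustrative assumptions for this listing, not artifacts of the paper itself.

```python
import cvxpy as cp
import numpy as np

def qp_solution_function(theta: np.ndarray) -> np.ndarray:
    """Solution function S(theta) = argmin_x 0.5*||x - theta||^2  s.t.  x >= 0.

    The unique minimizer is the Euclidean projection of theta onto the
    nonnegative orthant, so S(theta) = max(theta, 0) elementwise, i.e. ReLU.
    (Illustrative sketch; any QP solver would do in place of cvxpy.)
    """
    x = cp.Variable(theta.shape[0])
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x - theta)), [x >= 0])
    problem.solve()
    return x.value

theta = np.array([-1.5, 0.0, 2.3])
print(qp_solution_function(theta))   # approximately [0. 0. 2.3]
print(np.maximum(theta, 0.0))        # exact ReLU, for comparison
```

Stacking such parameter-to-minimizer maps layer by layer yields the "optimization as a layer" architectures whose approximation and covering-number properties the paper analyzes.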
Dec-2-2022