
Sobolev_VC

Yahong Yang

Neural Information Processing Systems

This allows us to bound each term in the sum using Lemma 1, from which we conclude the claimed bound relating 64C and Ĉ. The last equation, Eq. (27), follows from the preceding bound; the proof of the first inequality, Eq. (30), can be found in the cited reference, and Eq. (31) can be obtained via induction. We prove this lemma via induction, which completes the proof.







Provably data-driven projection method for quadratic programming

Nguyen, Anh Tuan, Nguyen, Viet Anh

arXiv.org Artificial Intelligence

Projection methods aim to reduce the dimensionality of an optimization instance, thereby improving the scalability of high-dimensional problems. Recently, Sakaue and Oki proposed a data-driven approach for linear programs (LPs), where the projection matrix is learned from observed problem instances drawn from an application-specific distribution. We analyze the generalization guarantee of data-driven projection matrix learning for convex quadratic programs (QPs). Unlike in LPs, the optimal solutions of convex QPs are not confined to the vertices of the feasible polyhedron, which complicates the analysis of the optimal value function. To overcome this challenge, we demonstrate via Carathéodory's theorem that the solutions of convex QPs can be localized within a feasible region corresponding to a special active set. Building on this observation, we propose the unrolled active set method, which models the computation of the optimal value as a Goldberg-Jerrum (GJ) algorithm with bounded complexity, thereby establishing learning guarantees. We then extend our analysis to other settings, including learning to match the optimal solution and the input-aware setting, in which we learn a mapping from QP problem instances to projection matrices.
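The core substitution behind projection methods can be sketched concretely: given a convex QP in n variables, substitute x = P y with a projection matrix P of shape (n, k), k ≪ n, and solve the reduced k-dimensional QP. The sketch below uses a random P and a random convex QP for illustration only; in the paper's setting, P would be learned from a distribution of problem instances, and all sizes and data here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): n original variables,
# k reduced variables, m inequality constraints.
n, k, m = 30, 5, 10

# Random convex QP:  min 0.5 x^T Q x + c^T x   s.t.  A x <= b.
M = rng.normal(size=(n, n))
Q = M @ M.T + np.eye(n)      # symmetric positive definite Hessian
c = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = np.ones(m)               # x = 0 is strictly feasible since b > 0

# Projection matrix P (random here; the data-driven approach learns it).
P = rng.normal(size=(n, k))

# Reduced QP in y, obtained by substituting x = P y.
Qr = P.T @ Q @ P
cr = P.T @ c
Ar = A @ P

obj = lambda y: 0.5 * y @ Qr @ y + cr @ y
grad = lambda y: Qr @ y + cr
cons = {"type": "ineq", "fun": lambda y: b - Ar @ y}  # b - Ar y >= 0

res = minimize(obj, np.zeros(k), jac=grad,
               constraints=[cons], method="SLSQP")
x_hat = P @ res.x            # lift the reduced solution back to R^n
print("reduced optimal value:", res.fun)
print("feasible for the original QP:", bool(np.all(A @ x_hat <= b + 1e-6)))
```

Any y feasible for the reduced problem lifts to a feasible x = P y for the original QP, so the reduced optimal value upper-bounds the true optimum; how tight that bound is depends on the choice of P, which is precisely what the learning procedure optimizes.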