
Neural Information Processing Systems

This perspective parallels an earlier phenomenon in the much better understood field of optimization, where convexity has played a preponderant role in both theoretical and methodological advances [Nes04; Bub15].





What Bigfoot hunters get right (and very wrong)

Popular Science

'Bigfooters' often employ credible scientific methods in their searches. Bigfoot remains firmly in the realm of cryptozoology, alongside the likes of the Loch Ness monster. However, its pursuers are often not the stereotypical crackpots depicted across pop culture. According to two social scientists, they frequently rely on widely accepted, reliable methods and tools in their search for the elusive Sasquatch.



Provably data-driven projection method for quadratic programming

Nguyen, Anh Tuan, Nguyen, Viet Anh

arXiv.org Artificial Intelligence

Projection methods aim to reduce the dimensionality of an optimization instance, thereby improving the scalability of high-dimensional problems. Recently, Sakaue and Oki proposed a data-driven approach for linear programs (LPs), where the projection matrix is learned from observed problem instances drawn from an application-specific distribution of problems. We analyze generalization guarantees for data-driven projection matrix learning for convex quadratic programs (QPs). Unlike in LPs, the optimal solutions of convex QPs are not confined to the vertices of the feasible polyhedron, which complicates the analysis of the optimal value function. To overcome this challenge, we demonstrate, using Carathéodory's theorem, that the solutions of convex QPs can be localized within a feasible region corresponding to a special active set. Building on this observation, we propose the unrolled active set method, which models the computation of the optimal value as a Goldberg-Jerrum (GJ) algorithm with bounded complexity, thereby establishing learning guarantees. We then extend our analysis to other settings, including learning to match the optimal solution and the input-aware setting, where we learn a mapping from QP problem instances to projection matrices.
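To make the projection idea concrete, the sketch below reduces a convex QP min ½xᵀQx + cᵀx s.t. Ax ≤ b to a lower-dimensional QP via the substitution x = Py. This is only the generic projection reduction the paper builds on, not the authors' learning procedure: here P is a random orthonormal matrix and the reduced problem is solved with SciPy's SLSQP, whereas the paper learns P from a distribution of problem instances.

```python
import numpy as np
from scipy.optimize import minimize

def project_qp(Q, c, A, b, P):
    """Reduce min 0.5 x^T Q x + c^T x  s.t.  A x <= b  via x = P y."""
    return P.T @ Q @ P, P.T @ c, A @ P, b

rng = np.random.default_rng(0)
n, m, k = 20, 10, 4                     # original dim, constraints, reduced dim
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)                 # symmetric positive definite
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.ones(m)                          # x = 0 (hence y = 0) is strictly feasible
P, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random orthonormal projection

Qr, cr, Ar, br = project_qp(Q, c, A, b, P)
res = minimize(
    lambda y: 0.5 * y @ Qr @ y + cr @ y,
    x0=np.zeros(k),
    jac=lambda y: Qr @ y + cr,
    constraints=[{"type": "ineq",       # SLSQP convention: fun(y) >= 0
                  "fun": lambda y: br - Ar @ y,
                  "jac": lambda y: -Ar}],
    method="SLSQP",
)
x_hat = P @ res.x                       # lift the reduced solution back to R^n
```

The lifted point x_hat is feasible for the original QP by construction, and its objective value upper-bounds the true optimum; the quality of that bound is exactly what learning a good P from data aims to improve.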