Learning based convex approximation for constrained parametric optimization
Kang Liu, Wei Peng, Jianchen Hu
–arXiv.org Artificial Intelligence
We propose an input convex neural network (ICNN)-based self-supervised learning framework for continuous constrained parametric optimization problems. By integrating the augmented Lagrangian method (ALM) with a constraint correction mechanism, our framework achieves \emph{non-strict constraint feasibility}, \emph{a smaller optimality gap}, and \emph{a faster convergence rate} than state-of-the-art learning-based methods. We provide a rigorous convergence analysis, showing that the algorithm converges to a Karush-Kuhn-Tucker (KKT) point of the original problem even when the inner solver is a neural network and the approximation error is bounded. We evaluate our approach on a range of benchmark tasks, including quadratic programming (QP), nonconvex programming, and large-scale AC optimal power flow problems. The results demonstrate that, compared with classical solvers (e.g., \texttt{OSQP}, \texttt{IPOPT}) and recent learning-based methods (e.g., DC3, PDL), our approach achieves a superior balance among accuracy, feasibility, and computational efficiency.
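To make the ALM ingredient concrete, here is a minimal sketch of the classical augmented Lagrangian iteration on a toy equality-constrained QP. This is not the paper's method: the inner minimization is done in closed form (the paper replaces it with an ICNN trained in a self-supervised fashion), and the problem data and penalty weight are illustrative assumptions.

```python
import numpy as np

# Toy equality-constrained QP:  min 0.5*||x||^2  s.t.  A x = b.
# Illustrative random data; in the paper's setting (A, b) would be
# problem parameters and the inner solve would be a neural network.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)

lam = np.zeros(2)   # multiplier estimate
rho = 10.0          # penalty weight
for _ in range(50):
    # Inner step: argmin_x 0.5*x'x + lam'(Ax - b) + (rho/2)*||Ax - b||^2,
    # whose stationarity condition is (I + rho*A'A) x = A'(rho*b - lam).
    x = np.linalg.solve(np.eye(5) + rho * A.T @ A, A.T @ (rho * b - lam))
    # Outer step: first-order multiplier update on the constraint residual.
    lam = lam + rho * (A @ x - b)

# Analytic minimum-norm solution of Ax = b, for comparison.
x_star = A.T @ np.linalg.solve(A @ A.T, b)
```

With exact inner solves the iterates drive the residual `A @ x - b` to zero and `x` approaches the minimum-norm solution `x_star`; the paper's analysis concerns what survives of this behavior when the inner solver is a neural network with bounded approximation error.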
May-8-2025