Scalable Mixed-Integer Optimization with Neural Constraints via Dual Decomposition
Zeng, Shuli, Zhang, Sijia, Wu, Feng, Tang, Shaojie, Li, Xiang-Yang
arXiv.org Artificial Intelligence
Abstract--Embedding deep neural networks (NNs) into mixed-integer programs (MIPs) is attractive for decision making with learned constraints, yet state-of-the-art "monolithic" linearisations blow up in size and quickly become intractable. In this paper, we introduce a novel dual-decomposition framework that relaxes the single coupling equality u = x with an augmented Lagrange multiplier and splits the problem into a vanilla MIP and a constrained NN block. Each part is tackled by the solver that suits it best--branch & cut for the MIP subproblem, first-order optimisation for the NN subproblem--so the model remains modular, the number of integer variables never grows with network depth, and the per-iteration cost scales only linearly with the NN size. On the LIB benchmark, our method proves scalable, modular, and adaptable: it runs 120× faster than an exact Big-M formulation on the largest test case; the NN sub-solver can be swapped from a log-barrier interior step to a projected-gradient routine with no code changes and identical objective value; and swapping the MLP for an LSTM backbone still completes the full optimisation in 47 s without any bespoke adaptation.

Intelligent decision systems increasingly integrate neural networks into decision-making and optimization pipelines [1-3].
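The abstract's scheme can be illustrated with a minimal sketch: relax the coupling u = x with an augmented Lagrangian, then alternate between an exact integer subproblem and a first-order step on the network side, followed by a dual update. Everything below is a toy stand-in under stated assumptions -- a hand-written `nn(u) <= 0` in place of a trained network, brute-force enumeration over a tiny integer grid in place of branch & cut, and a hinge-penalty gradient loop in place of the paper's log-barrier or projected-gradient NN sub-solver; none of these names or constants come from the paper.

```python
import itertools
import numpy as np

# Hypothetical stand-in for a learned constraint: feasible iff nn(u) <= 0.
# (The paper's NN block would be a trained network; this is hand-picked.)
def nn(u):
    return np.tanh(u[0] + u[1]) - 0.5

def nn_grad(u):
    t = np.tanh(u[0] + u[1])
    return (1.0 - t**2) * np.ones(2)

c = np.array([-1.0, -2.0])                       # linear MIP objective c @ x
grid = [np.array(p, float) for p in itertools.product(range(6), repeat=2)]

lam, rho = np.zeros(2), 1.0                      # multiplier / penalty for u = x
u, x = np.zeros(2), np.zeros(2)

for _ in range(50):
    # x-step: exact minimisation of the augmented Lagrangian over the small
    # integer grid (stands in for the branch-and-cut MIP subproblem).
    x = min(grid, key=lambda z: c @ z + lam @ (u - z)
            + 0.5 * rho * np.sum((u - z) ** 2))

    # u-step: first-order descent on the augmented term plus a hinge penalty
    # pushing nn(u) <= 0 (stands in for the first-order NN subproblem).
    for _ in range(300):
        g = lam + rho * (u - x) + 20.0 * max(nn(u), 0.0) * nn_grad(u)
        u = u - 0.02 * g

    lam = lam + rho * (u - x)                    # dual update on the residual

print("x* =", x, "coupling residual =", float(np.linalg.norm(u - x)))
```

Note how the integer dimension of the x-subproblem is fixed regardless of how deep `nn` is, and only the u-step's gradient cost grows with the network -- the scaling property the abstract claims.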
Nov-13-2025