A General Large Neighborhood Search Framework for Solving Integer Linear Programs
We focus on solving integer linear programs, and ground our approach in the large neighborhood search (LNS) paradigm, which iteratively chooses a subset of variables to optimize while leaving the remainder fixed. The appeal of LNS is that it can easily use any existing solver as a subroutine, and thus can inherit the benefits of carefully engineered heuristic or complete approaches and their software implementations.
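The destroy-and-reoptimize loop described above can be sketched in a few lines. The sketch below is a toy illustration, not the paper's framework: it runs LNS on a small 0/1 knapsack ILP, and the exact subproblem "solver" is plain enumeration standing in for whatever MILP solver one would call in practice. All names and instance data (`lns_knapsack`, the values/weights) are invented for the example.

```python
import itertools
import random

def lns_knapsack(values, weights, capacity, rounds=20, k=3, seed=0):
    """Large neighborhood search on a 0/1 knapsack ILP.

    Each round "destroys" k variables (un-fixes them) and re-optimizes
    that subproblem exactly by enumeration, keeping all other variables
    fixed -- any off-the-shelf MILP solver could replace the enumeration.
    """
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n  # start from the all-zero feasible solution

    def total(sol):
        return sum(v for v, xi in zip(values, sol) if xi)

    for _ in range(rounds):
        free = rng.sample(range(n), k)  # neighborhood: variables to re-optimize
        fixed_weight = sum(w for i, (w, xi) in enumerate(zip(weights, x))
                           if xi and i not in free)
        best_assign, best_gain = [x[i] for i in free], -1
        for assign in itertools.product([0, 1], repeat=k):
            w = fixed_weight + sum(weights[i] * a for i, a in zip(free, assign))
            if w <= capacity:
                gain = sum(values[i] * a for i, a in zip(free, assign))
                if gain > best_gain:
                    best_gain, best_assign = gain, list(assign)
        for i, a in zip(free, best_assign):  # accept the re-optimized sub-solution
            x[i] = a
    return x, total(x)

x, obj = lns_knapsack(values=[10, 7, 5, 9, 3], weights=[4, 3, 2, 5, 1], capacity=8)
print(x, obj)
```

Because the current assignment of the freed variables is always among the enumerated candidates, the objective never decreases across rounds.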
Enforcing Hard Linear Constraints in Deep Learning Models with Decision Rules
Constante-Flores, Gonzalo E., Chen, Hao, Li, Can
Deep learning models are increasingly deployed in safety-critical tasks where predictions must satisfy hard constraints, such as physical laws, fairness requirements, or safety limits. However, standard architectures lack built-in mechanisms to enforce such constraints, and existing approaches based on regularization or projection are often limited to simple constraints, computationally expensive, or lack feasibility guarantees. This paper proposes a model-agnostic framework for enforcing input-dependent linear equality and inequality constraints on neural network outputs. The architecture combines a task network trained for prediction accuracy with a safe network trained using decision rules from the stochastic and robust optimization literature to ensure feasibility across the entire input space. The final prediction is a convex combination of the two subnetworks, guaranteeing constraint satisfaction during both training and inference without iterative procedures or runtime optimization. We prove that the architecture is a universal approximator of constrained functions and derive computationally tractable formulations based on linear decision rules. Empirical results on benchmark regression tasks show that our method consistently satisfies constraints while maintaining competitive accuracy and low inference latency.
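To make concrete why combining an unconstrained prediction with a feasible point can guarantee satisfaction of linear inequality constraints, here is a minimal sketch. It is not the paper's decision-rule construction: it assumes a single feasible anchor `z` is available and simply takes the largest step from `z` toward the raw prediction `y` that keeps every inequality satisfied, so the output is a feasible convex combination of the two.

```python
import numpy as np

def safe_combination(y, z, A, b):
    """Combine a task prediction y with a feasible anchor z so that the
    output satisfies A x <= b.

    Illustrative sketch only (not the paper's decision-rule method):
    z is assumed to satisfy A z <= b; we take the largest alpha in [0, 1]
    such that x = (1 - alpha) z + alpha y remains feasible.
    """
    d = y - z
    Ad = A @ d
    slack = b - A @ z                # nonnegative because z is feasible
    alpha = 1.0
    for ad, s in zip(Ad, slack):
        if ad > 1e-12:               # only rows moving toward violation bind
            alpha = min(alpha, s / ad)
    return z + alpha * d

A = np.array([[1.0, 1.0], [-1.0, 0.0]])   # x1 + x2 <= 1,  x1 >= 0
b = np.array([1.0, 0.0])
z = np.array([0.25, 0.25])                # feasible anchor
y = np.array([2.0, 2.0])                  # infeasible raw network output
x = safe_combination(y, z, A, b)
print(x)
```

No iterative procedure or runtime optimization is needed: feasibility follows from convexity of the polytope, which is the structural property the abstract's convex-combination architecture exploits.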
CoCo-MILP: Inter-Variable Contrastive and Intra-Constraint Competitive MILP Solution Prediction
Pu, Tianle, Li, Jianing, Gao, Yingying, Liu, Shixuan, Geng, Zijie, Liu, Haoyang, Chen, Chao, Fan, Changjun
Mixed-Integer Linear Programming (MILP) is a cornerstone of combinatorial optimization, yet solving large-scale instances remains a significant computational challenge. Recently, Graph Neural Networks (GNNs) have shown promise in accelerating MILP solvers by predicting high-quality solutions. However, we identify that existing methods misalign with the intrinsic structure of MILP problems at two levels. At the learning objective level, the Binary Cross-Entropy (BCE) loss treats variables independently, neglecting their relative priority and yielding logits that do not reflect it. At the model architecture level, standard GNN message passing inherently smooths the representations across variables, missing the natural competitive relationships within constraints. To address these challenges, we propose CoCo-MILP, which explicitly models inter-variable Contrast and intra-constraint Competition for advanced MILP solution prediction. At the objective level, CoCo-MILP introduces the Inter-Variable Contrastive Loss (VCL), which explicitly maximizes the embedding margin between variables assigned one versus zero. At the architectural level, we design an Intra-Constraint Competitive GNN layer that, instead of homogenizing features, learns to differentiate representations of competing variables within a constraint, capturing their exclusionary nature. Experimental results on standard benchmarks demonstrate that CoCo-MILP significantly outperforms existing learning-based approaches, reducing the solution gap by up to 68.12% compared to traditional solvers. Our code is available at https://github.com/happypu326/CoCo-MILP.
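The idea of maximizing a margin between the logits of variables assigned one versus zero can be illustrated with a generic pairwise hinge loss. This is a simplified stand-in for the paper's VCL, not its actual formulation, and the toy logits and labels are invented:

```python
import numpy as np

def pairwise_margin_loss(logits, labels, margin=1.0):
    """Generic margin-based contrastive loss over variable logits.

    Illustration of an inter-variable contrastive objective (not the
    exact VCL from the paper): for every (1-variable, 0-variable) pair,
    penalize the pair unless the 1-variable's logit exceeds the
    0-variable's by at least `margin`.
    """
    pos = logits[labels == 1]                    # variables assigned 1
    neg = logits[labels == 0]                    # variables assigned 0
    diff = pos[:, None] - neg[None, :]           # all pos/neg pairs
    return np.maximum(margin - diff, 0.0).mean() # hinge on the margin

logits = np.array([0.6, 0.2, -0.1, 0.4])
labels = np.array([1, 0, 1, 0])
loss = pairwise_margin_loss(logits, labels)
print(loss)
```

Unlike a per-variable BCE term, this loss is zero only when the logits of 1-variables and 0-variables are separated by the margin, so it directly encodes their relative ordering.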
Machine Learning Guided Optimal Transmission Switching to Mitigate Wildfire Ignition Risk
Huang, Weimin, Piansky, Ryan, Dilkina, Bistra, Molzahn, Daniel K.
To mitigate acute wildfire ignition risks, utilities de-energize power lines in high-risk areas. The Optimal Power Shut-off (OPS) problem optimizes line energization statuses to manage wildfire ignition risks through de-energizations while reducing load shedding. OPS problems are computationally challenging Mixed-Integer Linear Programs (MILPs) that must be solved rapidly and frequently in operational settings. For a particular power system, OPS instances share a common structure with varying parameters related to wildfire risks, loads, and renewable generation. This motivates the use of Machine Learning (ML) for solving OPS problems by exploiting shared patterns across instances. In this paper, we develop an ML-guided framework that quickly produces high-quality de-energization decisions by extending existing ML-guided MILP solution methods while integrating domain knowledge on the number of energized and de-energized lines. Results on a large-scale realistic California-based synthetic test system show that the proposed ML-guided method produces high-quality solutions faster than traditional optimization methods.
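The risk-versus-load-shedding trade-off at the heart of OPS can be caricatured in a few lines. The toy model below is not the OPS MILP from the paper (it ignores power flow entirely) and all data are invented; it only shows the combinatorial structure of choosing binary line energization statuses under a risk cap, solved here by exhaustive search:

```python
import itertools

# Toy caricature of the OPS trade-off: line i carries wildfire ignition
# risk risk[i] when energized and serves load load[i]; de-energizing it
# sheds that load. Subject to a cap on total accepted risk, pick
# energization statuses (1 = energized) that minimize shed load.
risk = [0.8, 0.3, 0.5, 0.1]
load = [40.0, 25.0, 35.0, 10.0]
risk_cap = 0.9

best_status, best_shed = None, float("inf")
for status in itertools.product([0, 1], repeat=len(risk)):
    total_risk = sum(r * s for r, s in zip(risk, status))
    if total_risk <= risk_cap:                       # risk budget respected
        shed = sum(d * (1 - s) for d, s in zip(load, status))
        if shed < best_shed:
            best_status, best_shed = status, shed

print(best_status, best_shed)
```

Real OPS instances replace the enumeration with a MILP solver and add network constraints, but the shared instance structure (fixed topology, varying risk and load parameters) is what the ML guidance exploits.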
A Neural Network Framework for Discovering Closed-form Solutions to Quadratic Programs with Linear Constraints
Beylunioglu, Fuat Can, Duimering, P. Robert, Pirnia, Mehrdad
Deep neural networks (DNNs) have been used to model complex optimization problems in many applications, yet have difficulty guaranteeing solution optimality and feasibility, despite training on large datasets. Training an NN as a surrogate optimization solver amounts to estimating a global solution function that maps varying problem input parameters to the corresponding optimal solutions. Work in multiparametric programming (mp) has shown that solutions to quadratic programs (QP) are piecewise linear functions of the parameters, and researchers have suggested leveraging this property to model mp-QPs using NNs with ReLU activation functions, which also exhibit piecewise linear behaviour. This paper proposes an NN modeling approach and learning algorithm that discovers the exact closed-form solution to QPs with linear constraints, by analytically deriving NN model parameters directly from the problem coefficients without training. Whereas generic DNNs cannot guarantee accuracy outside the training distribution, the closed-form NN model produces exact solutions for every discovered critical region of the solution function. To evaluate the closed-form NN model, it was applied to DC optimal power flow problems in electricity management. In terms of Karush-Kuhn-Tucker (KKT) optimality and feasibility of solutions, it outperformed a classically trained DNN and was competitive with, or outperformed, a commercial analytic solver (Gurobi) at far less computational cost. For a long-range energy planning problem, it was able to produce optimal and feasible solutions for millions of input parameters within seconds.
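The multiparametric fact this abstract builds on, that a QP optimizer is an affine function of the parameters once the active constraint set is fixed, can be seen directly from the KKT conditions. The sketch below handles only the equality-constrained case and is an illustration, not the paper's construction:

```python
import numpy as np

def eq_qp_solution(Q, c, A, b):
    """Closed-form solution of  min 0.5 x'Qx + c'x  s.t.  Ax = b.

    For a fixed active set, the KKT conditions form a linear system, so
    the optimizer x* is an affine function of the parameters (c, b):
    this is the piecewise-linear structure mp-QP results describe.
    Sketch only: equality constraints, Q assumed positive definite.
    """
    n, m = Q.shape[0], A.shape[0]
    # KKT system: [Q A'; A 0] [x; lam] = [-c; b]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([0.0, 0.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = eq_qp_solution(Q, c, A, b)   # nearest point to the origin on x1 + x2 = 1
print(x)
```

With inequality constraints, each critical region (choice of active set) yields its own affine map of this form, which is why ReLU networks, themselves piecewise linear, are a natural representation.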
Reviewer
Re "...how decomposing the polytope now allows it to be mapped?" If you meant "how does the decomposition help map the problem of computing an optimal correlated
Re "I wasn't sure what I was supposed to take away from the experiments" As
We'll take all of them into account.
Re "broader impact" Thanks for the feedback, we agree with all your points.
As you correctly recognized, we use the term "social welfare" to mean the sum of utilities of the players, as is typical in the game
The maximum payoff is 15.
Gurobi is freely available for academic use, but we'll also mention the open-source
We are definitely the first to compute optimal EFCE in it.
We strongly disagree that "this paper just tells us that the work in Farina et al. [12] is
Extending the construction by Farina et al. to handle the more general
We strongly disagree with that.