
An Inexact Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints

Neural Information Processing Systems

We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex problems with nonlinear constraints. We characterize the total computational complexity of our method subject to a verifiable geometric condition, which is closely related to the Polyak-Lojasiewicz and Mangasarian-Fromowitz conditions. In particular, when a first-order solver is used for the inner iterates, we prove that iALM finds a first-order stationary point with $\tilde{\mathcal{O}}(1/\epsilon^3)$ calls to the first-order oracle.
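To make the abstract's setup concrete, here is a minimal sketch of an inexact augmented Lagrangian loop for equality constraints, assuming a plain gradient-descent inner solver and a geometric penalty schedule; the function names, step sizes, and schedule are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def ialm(f_grad, c, c_jac, x0, beta0=1.0, sigma=2.0,
         iters=8, inner_steps=300, lr=1e-3):
    """Sketch of an inexact ALM for: minimize f(x) s.t. c(x) = 0.
    Each outer iteration approximately minimizes the augmented Lagrangian
    with a first-order method, then updates the multipliers and penalty."""
    x = x0.astype(float).copy()
    lam = np.zeros(c(x0).shape)
    beta = beta0
    for _ in range(iters):
        # Inner loop: inexactly minimize
        # L(x) = f(x) + lam^T c(x) + (beta/2) ||c(x)||^2 by gradient descent.
        for _ in range(inner_steps):
            g = f_grad(x) + c_jac(x).T @ (lam + beta * c(x))
            x -= lr * g
        lam = lam + beta * c(x)   # dual (multiplier) update
        beta *= sigma             # tighten the penalty schedule
    return x, lam
```

For example, minimizing ||x||^2 subject to x1 + x2 = 1 drives the iterates toward (0.5, 0.5) with the multiplier approaching -1.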




ENFORCE: Exact Nonlinear Constrained Learning with Adaptive-depth Neural Projection

Lastrucci, Giacomo, Schweidtmann, Artur M.

arXiv.org Artificial Intelligence

Ensuring neural networks adhere to domain-specific constraints is crucial for addressing safety and ethical concerns while also enhancing prediction accuracy. Despite the nonlinear nature of most real-world tasks, existing methods are predominantly limited to affine or convex constraints. We introduce ENFORCE, a neural network architecture that guarantees predictions to satisfy nonlinear constraints exactly. ENFORCE is trained with standard unconstrained gradient-based optimizers (e.g., Adam) and leverages autodifferentiation and local neural projections to enforce any $\mathcal{C}^1$ constraint to arbitrary tolerance $\epsilon$. We build an adaptive-depth neural projection (AdaNP) module that dynamically adjusts its complexity to suit the specific problem and the required tolerance levels. ENFORCE guarantees satisfaction of equality constraints that are nonlinear in both inputs and outputs of the neural network with minimal (and adjustable) computational cost.
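The local-projection idea can be illustrated with a minimal sketch: iteratively apply a minimum-norm (Gauss-Newton) correction to a prediction until a C^1 equality constraint holds to tolerance eps. Stacking such correction steps until the residual is small mirrors the adaptive-depth notion, though this sketch uses a hand-supplied Jacobian rather than autodifferentiation and is not the paper's exact AdaNP module:

```python
import numpy as np

def project_onto_constraint(y, c, c_jac_y, eps=1e-8, max_iter=50):
    """Correct prediction y so that c(y) = 0 holds to tolerance eps.
    c: residual function; c_jac_y: its Jacobian with respect to y."""
    y = y.astype(float).copy()
    for _ in range(max_iter):
        r = np.atleast_1d(c(y))
        if np.linalg.norm(r) < eps:
            break
        J = np.atleast_2d(c_jac_y(y))
        # Minimum-norm Newton update: y <- y - J^T (J J^T)^{-1} c(y)
        y -= J.T @ np.linalg.solve(J @ J.T, r)
    return y
```

For instance, projecting the point (2, 0) onto the nonlinear constraint y1^2 + y2^2 = 1 converges to (1, 0) in a few iterations.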


Picard-KKT-hPINN: Enforcing Nonlinear Enthalpy Balances for Physically Consistent Neural Networks

Lastrucci, Giacomo, Karia, Tanuj, Gromotka, Zoë, Schweidtmann, Artur M.

arXiv.org Artificial Intelligence

Surrogate modeling plays a crucial role in simplifying and approximating complex physical models, making them suitable for large-scale simulations and optimization studies of industrial relevance. Machine learning models, such as neural networks (NNs), are particularly well-suited for this purpose due to their simplicity and strong regression capabilities [1]. However, despite exceptional advancements in machine learning, issues and skepticism regarding the black-box nature and physical inconsistency of these models hinder the adoption of machine learning-based tools (and, more broadly, artificial intelligence) in industrial applications [2, 3]. To mitigate this limitation, significant research has been carried out to enforce known mechanistic relationships between inputs and predictions in NNs. Soft-constrained neural networks represent an approach in which physical equations are included as penalty terms in the loss function [4, 5].
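The soft-constrained approach mentioned at the end of the passage can be sketched in a few lines: the training loss combines a data-fit term with a penalty on a user-supplied physics residual (e.g., an enthalpy balance) that should vanish when the mechanistic relation holds. The function and weight below are illustrative assumptions, not the cited papers' formulations:

```python
import numpy as np

def soft_constrained_loss(y_pred, y_true, residual, weight=10.0):
    """Penalty-based loss: data MSE plus a weighted squared physics residual.
    residual(y_pred) is zero when the known mechanistic relation is satisfied."""
    data_loss = np.mean((y_pred - y_true) ** 2)
    physics_loss = np.mean(residual(y_pred) ** 2)
    return data_loss + weight * physics_loss
```

Note that this only penalizes violations; unlike hard-constrained architectures, it does not guarantee the constraint holds exactly at prediction time.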


Reviews: An Inexact Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints

Neural Information Processing Systems

Applying ALM to the Burer-Monteiro problem and to nonlinear programs in general is natural and well summarized in the monograph Ref [8]. Allowing first-order and second-order approximate solvers for the primal subproblems is also classic, and can be found in, e.g., Ch 8 & 9 of Ref [8]. I think the main novelties here lie in the nonsmooth, convex term g(x) and the convergence rate results. Sec 5 of the paper provides a comprehensive review of pertinent results under different assumptions. I have several concerns that I hope the authors can address:

* The BM example does not quite justify the inclusion of the possibly nonsmooth term g in (1). The authors may want to balance this out and briefly discuss the other examples appearing in the experiments.


Reviews: An Inexact Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints

Neural Information Processing Systems

This paper uses the augmented Lagrangian method to solve optimization problems over a sum of functions f and g, where f is nonconvex and g is convex but 'proximal-friendly', subject to quite general nonlinear constraints. The proposed method solves the primal subproblem to within an error epsilon_k that is gradually decreased while a penalty parameter beta_k increases across iterations. The approximate intermediate problems are solved using first-order and second-order solvers. The proposed analysis is technically non-trivial and interesting. The presentation of the paper was poor and at times confusing, which made this a borderline paper.


Intelligent Mobility System with Integrated Motion Planning and Control Utilizing Infrastructure Sensor Nodes

Yang, Yufeng, Ning, Minghao, Huang, Shucheng, Hashemi, Ehsan, Khajepour, Amir

arXiv.org Artificial Intelligence

This paper introduces a framework for an indoor autonomous mobility system that can perform patient transfers and materials handling. Unlike traditional systems that rely on onboard perception sensors, the proposed approach leverages global perception and localization (PL) through Infrastructure Sensor Nodes (ISNs) and cloud computing technology. Using the global PL, an integrated Model Predictive Control (MPC)-based local planning and tracking controller augmented with an Artificial Potential Field (APF) is developed, enabling reliable and efficient motion planning and obstacle avoidance while tracking predefined reference motions. Simulation results demonstrate the effectiveness of the proposed MPC controller in smoothly navigating around both static and dynamic obstacles. The proposed system has the potential to extend to intelligent connected autonomous vehicles, such as electric or cargo transport vehicles with four-wheel independent drive/steering (4WID-4WIS) configurations.
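The APF augmentation can be illustrated with the standard repulsive-potential form: zero beyond an influence radius and growing without bound as the distance to an obstacle shrinks, so that adding it to an MPC cost steers trajectories away from obstacles. This is the textbook formulation, assumed here for illustration; the paper's exact term and gains may differ:

```python
import numpy as np

def apf_repulsive_cost(pos, obstacle, d_influence=2.0, k_rep=1.0):
    """Classical APF repulsive term:
    0.5 * k_rep * (1/d - 1/d_influence)^2 for d < d_influence, else 0."""
    d = np.linalg.norm(np.asarray(pos, dtype=float) -
                       np.asarray(obstacle, dtype=float))
    if d >= d_influence:
        return 0.0
    return 0.5 * k_rep * (1.0 / d - 1.0 / d_influence) ** 2
```

Within an MPC, such a term would be summed over predicted positions and obstacles and added to the tracking cost at each stage.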