constrained machine learning
Constrained Machine Learning Through Hyperspherical Representation
Signorelli, Gaetano, Lombardi, Michele
The problem of ensuring constraint satisfaction on the output of machine learning models is critical for many applications, especially in safety-critical domains. Modern approaches rely on penalty-based methods at training time, which do not guarantee the avoidance of constraint violations; on constraint-specific model architectures (e.g., for monotonicity); or on output projection, which requires solving an optimization problem that can be computationally demanding. We present the Hyperspherical Constrained Representation, a novel method to enforce constraints in the output space for convex and bounded feasibility regions (generalizable to star domains). Our method operates on a different representation system, where Euclidean coordinates are converted into hyperspherical coordinates relative to the constrained region, which can inherently represent only feasible points. Experiments on a synthetic and a real-world dataset show that our method has predictive performance comparable to the other approaches, guarantees 100% constraint satisfaction, and incurs a minimal computational cost at inference time.
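To make the idea concrete, here is a minimal sketch of a hyperspherical output representation for the simplest convex, bounded feasible region, a Euclidean ball. The function name and the choice of sigmoid squashing are illustrative assumptions, not taken from the paper; the point is only that any raw network output decodes to a feasible point by construction.

```python
import numpy as np

def hyperspherical_to_feasible(raw, center, radius):
    """Map unconstrained raw outputs to a point guaranteed inside a ball.

    Sketch of the representation idea: raw[0] parameterizes a radius
    fraction in (0, radius) and raw[1:] parameterize angles, so the
    decoded Cartesian point always lies in the feasible ball.
    """
    center = np.asarray(center, dtype=float)
    n = center.size
    r = radius / (1.0 + np.exp(-raw[0]))        # radius fraction in (0, radius)
    angles = np.pi / (1.0 + np.exp(-raw[1:n]))  # n-1 angles, squashed into (0, pi)
    # spherical -> Cartesian: builds a unit vector for any angle values
    point = np.empty(n)
    s = 1.0
    for i in range(n - 1):
        point[i] = s * np.cos(angles[i])
        s *= np.sin(angles[i])
    point[-1] = s
    return center + r * point
```

Because the spherical-to-Cartesian map yields a unit vector for any angles and the radius factor is squashed below `radius`, no projection or penalty is needed at inference time.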
Constrained Machine Learning: The Bagel Framework
Perez, Guillaume, Ament, Sebastian, Gomes, Carla, Lallouet, Arnaud
Machine learning models are widely used for real-world applications, such as document analysis and vision. Constrained machine learning problems are problems where learned models have to be both accurate and respect constraints. For continuous convex constraints, many works have been proposed, but learning under combinatorial constraints is still a hard problem. The goal of this paper is to broaden the modeling capacity of constrained machine learning by incorporating existing work from combinatorial optimization. We first propose a general framework called BaGeL (Branch, Generate and Learn), which applies Branch and Bound to constrained learning problems: a learning problem is generated and trained at each node until only valid models are obtained. Because machine learning has specific requirements, we also propose an extended table constraint to split the space of hypotheses.
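A toy illustration of the Branch-and-Bound-over-models idea, under stated assumptions: the hypothesis space here is feature subsets of a least-squares model, the combinatorial constraint is a cardinality bound, and the pruning bound uses a relaxation (allowing all still-free features). All names are hypothetical; BaGeL itself is far more general.

```python
import numpy as np

def fit_loss(X, y, cols):
    """Mean squared error of least squares restricted to the given columns."""
    if not cols:
        return float(np.mean(y ** 2))
    A = X[:, cols]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ w - y) ** 2))

def branch_and_learn(X, y, k):
    """Best feature subset of size <= k via a toy Branch-and-Bound search.

    Branch on "feature j in / out"; a model is trained at each node, and
    a branch is pruned when even the relaxation (all remaining features
    allowed) cannot beat the incumbent. Only subsets satisfying the
    cardinality constraint |S| <= k are accepted as valid models.
    """
    n = X.shape[1]
    best = [np.inf, None]  # (loss, columns) of the incumbent

    def recurse(j, chosen):
        if len(chosen) > k:
            return  # violates the combinatorial constraint
        # adding features never increases least-squares loss, so the
        # relaxed loss is a valid lower bound for every descendant
        if fit_loss(X, y, chosen + list(range(j, n))) >= best[0]:
            return
        if j == n:
            l = fit_loss(X, y, chosen)
            if l < best[0]:
                best[0], best[1] = l, list(chosen)
            return
        recurse(j + 1, chosen + [j])  # include feature j
        recurse(j + 1, chosen)        # exclude feature j

    recurse(0, [])
    return best[1], best[0]
```

The design choice to prune on a relaxation mirrors classical Branch and Bound: invalid or provably dominated regions of the hypothesis space are discarded before any further training happens in them.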
An Analysis of Regularized Approaches for Constrained Machine Learning
Lombardi, Michele, Baldo, Federico, Borghesi, Andrea, Milano, Michela
Regularization-based approaches for injecting constraints in Machine Learning (ML) were introduced in earlier work. Given the recent interest in ethical and trustworthy AI, however, several works are resorting to these approaches for enforcing desired properties over an ML model. The regularization term has the form λᵀC, where C denotes a vector of (nonnegative) constraint violation indices for m constraints, and λ ≥ 0 is a vector of weights (or multipliers). As an example, in a regression problem we may desire a specific output ordering for two input vectors in the training set. If n is even, this term is 0 for perfectly balanced classifications.
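The regularized loss described above can be sketched as follows. This is an illustrative assumption of one concrete instantiation (an MSE base loss and a hinge-style violation index for the output-ordering example from the abstract), not the paper's code.

```python
import numpy as np

def regularized_loss(y_pred, y_true, violations, lam):
    """Regularized loss: base loss + lambda^T C.

    `violations` plays the role of the vector C of (nonnegative)
    constraint violation indices; `lam` is the vector of nonnegative
    multipliers. Names are illustrative.
    """
    mse = float(np.mean((y_pred - y_true) ** 2))
    return mse + float(np.dot(lam, violations))

# Output-ordering example: require f(x1) <= f(x2); the violation
# index is the hinge term max(0, f(x1) - f(x2)).
f_x1, f_x2 = 1.5, 1.0
C = np.array([max(0.0, f_x1 - f_x2)])   # = 0.5, the ordering is violated
lam = np.array([10.0])
```

With a larger multiplier the penalty dominates and training is pushed toward satisfying the ordering, but (as the surveyed analysis points out) no finite λ guarantees zero violations at inference time.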