
Collaborating Authors: Goerigk, Marc


Towards Robust Interpretable Surrogates for Optimization

arXiv.org Artificial Intelligence

An important factor in the practical implementation of optimization models is their acceptance by the intended users. This is influenced, among other factors, by the interpretability of the solution process. Decision rules that meet this requirement can be generated using the framework for inherently interpretable optimization models. In practice, there is often uncertainty about the parameters of an optimization problem. An established way to deal with this challenge is the concept of robust optimization. The goal of our work is to combine both concepts: to create decision trees as surrogates for the optimization process that are more robust to perturbations and still inherently interpretable. For this purpose, we present suitable models based on different variants of modeling uncertainty, together with solution methods. Furthermore, we evaluate the applicability of heuristic methods to this task. Both approaches are compared with the existing framework for inherently interpretable optimization models.
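The core robust-optimization idea behind the abstract, choosing the solution whose worst case over an uncertainty set is best, can be sketched as follows. This is a minimal toy example with an illustrative finite scenario set, not the models from the paper:

```python
from itertools import combinations

# Toy selection problem: choose exactly 2 of 4 items to minimize cost.
# The item costs are uncertain; here the uncertainty set is a finite
# list of cost scenarios (all numbers are illustrative).
scenarios = [
    [3, 1, 4, 2],
    [1, 5, 2, 3],
    [2, 2, 3, 4],
]

def worst_case_cost(solution, scenarios):
    """Worst-case (maximum) cost of a solution over all scenarios."""
    return max(sum(c[i] for i in solution) for c in scenarios)

candidates = list(combinations(range(4), 2))
robust = min(candidates, key=lambda s: worst_case_cost(s, scenarios))
print(robust, worst_case_cost(robust, scenarios))
```

The robust solution may differ from the optimum of any single scenario; that is exactly the hedging effect the surrogate decision trees are meant to preserve.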


A Framework for Data-Driven Explainability in Mathematical Optimization

arXiv.org Artificial Intelligence

Advancements in mathematical programming have made it possible to efficiently tackle large-scale real-world problems that were deemed intractable just a few decades ago. However, provably optimal solutions may not be accepted due to the perception of optimization software as a black box: although well understood by scientists, its inner workings are not easily accessible to practitioners. Hence, we advocate for introducing the explainability of a solution as another evaluation criterion, next to its objective value, which enables us to find trade-off solutions between these two criteria. Explainability is attained by comparing against (not necessarily optimal) solutions that were implemented in similar situations in the past; thus, solutions are preferred that exhibit similar features. Although we prove that the explainable model is NP-hard even in simple cases, we characterize relevant polynomially solvable cases such as the explainable shortest path problem. Our numerical experiments on both artificial and real-world road networks show the resulting Pareto front. It turns out that the cost of enforcing explainability can be very small.
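The cost-versus-explainability trade-off described above can be illustrated on a tiny shortest-path instance: score each path by its cost and by how much it deviates from a previously implemented path, then keep the non-dominated pairs. The graph, the past solution, and the dissimilarity measure below are illustrative assumptions, not the paper's definitions:

```python
# Toy "explainable shortest path": trade off a path's cost against its
# dissimilarity to a previously implemented path, and list the
# non-dominated (Pareto) trade-offs. Data are illustrative.
graph = {0: {1: 1, 2: 4}, 1: {2: 1, 3: 5}, 2: {3: 1}, 3: {}}
past_edges = {(0, 2), (2, 3)}  # the solution implemented last time

def all_paths(node, target, visited=None):
    """Enumerate all simple paths from node to target."""
    visited = visited or [node]
    if node == target:
        yield visited
        return
    for nxt in graph[node]:
        if nxt not in visited:
            yield from all_paths(nxt, target, visited + [nxt])

points = []
for p in all_paths(0, 3):
    edges = set(zip(p, p[1:]))
    cost = sum(graph[a][b] for a, b in edges)
    dissim = len(edges - past_edges)  # edges deviating from the past plan
    points.append((cost, dissim))

pareto = sorted(x for x in points
                if not any(y[0] <= x[0] and y[1] <= x[1] and y != x
                           for y in points))
print(pareto)
```

Here the cheapest path and the most "familiar" path differ, and the Pareto front makes the price of explainability explicit.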


A Framework for Inherently Interpretable Optimization Models

arXiv.org Artificial Intelligence

With dramatic improvements in optimization software, the solution of large-scale problems that seemed intractable decades ago is now a routine task. This puts even more real-world applications within the reach of optimizers. At the same time, solving optimization problems often turns out to be one of the smaller difficulties when putting solutions into practice. One major barrier is that the optimization software can be perceived as a black box, which may produce solutions of high quality but can create completely different solutions when circumstances change, leading to low acceptance of optimized solutions. Such issues of interpretability and explainability have seen significant attention in other areas, such as machine learning, but less so in optimization. In this paper we propose an optimization framework that inherently comes with an easily interpretable optimization rule that explains under which circumstances certain solutions are chosen. Focusing on decision trees to represent interpretable optimization rules, we propose integer programming formulations as well as a heuristic method that ensure the applicability of our approach even for large-scale problems. Computational experiments using random and real-world data indicate that the costs of inherent interpretability can be very small.
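A minimal sketch of what an "interpretable optimization rule" could look like: a depth-1 decision stump that maps a scalar problem parameter to one of the solutions observed in training scenarios. The data, labels, and stump construction are illustrative assumptions, not the paper's integer programming formulations:

```python
# Hypothetical training data: (demand parameter, label of the solution
# that was optimal for that scenario). All values are illustrative.
training = [
    (2.0, "route A"), (3.0, "route A"), (4.5, "route B"),
    (5.0, "route B"), (6.5, "route B"),
]

def fit_stump(data):
    """Pick the split threshold minimizing misclassified scenarios."""
    best = None
    for thr in sorted({d for d, _ in data}):
        left = [lab for d, lab in data if d <= thr]
        right = [lab for d, lab in data if d > thr]
        def majority(labels):
            return max(set(labels), key=labels.count) if labels else None
        errors = (sum(lab != majority(left) for lab in left)
                  + sum(lab != majority(right) for lab in right))
        if best is None or errors < best[0]:
            best = (errors, thr, majority(left), majority(right))
    return best[1:]

thr, lo, hi = fit_stump(training)
rule = lambda demand: lo if demand <= thr else hi
print(f"if demand <= {thr}: use {lo}, else: use {hi}")
```

The resulting one-line rule is trivially auditable, which is the kind of transparency a deeper decision tree extends to richer parameter spaces.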


Data-driven Prediction of Relevant Scenarios for Robust Combinatorial Optimization

arXiv.org Artificial Intelligence

Optimization under uncertainty is an important research field, especially due to its relevance in practical applications of operations research. In the real world, many parameters of an optimization problem can be uncertain, e.g., demands, returns, or traffic conditions, or any other parameters that are not precisely known due to measurement or rounding errors. It was shown that hedging against possible perturbations in the problem parameters is essential, since already small perturbations can lead to a large violation of the constraints [BTEGN09]. Driven by the seminal works [Soy73, KY96, BTN98, BTN99, BS04], robust optimization evolved into one of the most popular approaches to tackle uncertainty in optimization problems by finding solutions that are worst-case optimal and feasible for all parameters of a pre-defined uncertainty set; see [BBC11, BK18, GMT14] for a literature overview. Later, the classical robust optimization approach was extended to the two-stage robust optimization approach (also called adaptive robust optimization) in [BTGGN04], which has been extensively studied since then; see, e.g.
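The notion of a "relevant" scenario can be sketched on a toy discrete robust problem: a scenario matters only if it attains the worst case for some candidate solution, and restricting the robust problem to those scenarios leaves the optimal value unchanged. The data and the relevance criterion below are illustrative assumptions, not the paper's prediction method:

```python
from itertools import combinations

# Toy scenario-based robust selection problem: choose 2 of 4 items.
# Scenario 1 is dominated and never attains the worst case.
scenarios = [
    [4, 1, 3, 2],
    [1, 2, 2, 2],   # irrelevant: never worst-case active
    [2, 3, 1, 4],
]
candidates = list(combinations(range(4), 2))

def cost(sol, scen):
    return sum(scen[i] for i in sol)

# Scenarios that are worst-case active for at least one candidate.
relevant = {max(range(len(scenarios)), key=lambda k: cost(sol, scenarios[k]))
            for sol in candidates}

def robust_value(scens):
    """Optimal worst-case cost over the given scenario set."""
    return min(max(cost(sol, c) for c in scens) for sol in candidates)

print(relevant, robust_value(scenarios))
```

Dropping the never-active scenario shrinks the robust problem without changing its optimum, which is why predicting relevant scenarios up front can speed up the solution process.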


Solving the Traveling Tournament Problem by Packing Three-Vertex Paths

AAAI Conferences

The Traveling Tournament Problem (TTP) is a complex problem in sports scheduling whose solution is a schedule of home and away games meeting specific feasibility requirements, while minimizing the total distance traveled by all the teams. A recently-developed "hybrid" algorithm, combining local search and integer programming, has resulted in best-known solutions for many TTP instances. In this paper, we tackle the TTP from a graph-theoretic perspective, by generating a new "canonical" schedule in which each team's three-game road trips match up with the underlying graph's minimum-weight P_3-packing. By using this new schedule as the initial input for the hybrid algorithm, we develop tournament schedules for five benchmark TTP instances that beat all previously-known solutions.
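The graph-theoretic ingredient above, a minimum-weight P3-packing, partitions the teams into three-vertex paths u-v-w so that the total edge weight d(u,v) + d(v,w) is minimized. A brute-force sketch on a toy symmetric six-team distance matrix (illustrative values, not a TTP benchmark instance):

```python
from itertools import permutations

# Toy symmetric distance matrix for 6 teams; 9 marks "far apart".
D = [
    [0, 2, 9, 9, 9, 9],
    [2, 0, 3, 9, 9, 9],
    [9, 3, 0, 1, 9, 9],
    [9, 9, 1, 0, 2, 9],
    [9, 9, 9, 2, 0, 4],
    [9, 9, 9, 9, 4, 0],
]

def path_weight(u, v, w):
    """Weight of the three-vertex path u-v-w."""
    return D[u][v] + D[v][w]

# Brute force: every ordering of the 6 teams splits into two paths of 3.
best_weight, best_packing = min(
    (sum(path_weight(*perm[i:i + 3]) for i in (0, 3)),
     tuple(perm[i:i + 3] for i in (0, 3)))
    for perm in permutations(range(6))
)
print(best_weight, best_packing)
```

For real TTP instance sizes one would use a polynomial matching-based or heuristic packing algorithm rather than this factorial enumeration; the sketch only illustrates the objective being minimized.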