Collaborating Authors: Kerschke, Pascal


Dancing to the State of the Art? How Candidate Lists Influence LKH for Solving the Traveling Salesperson Problem

arXiv.org Artificial Intelligence

Solving the Traveling Salesperson Problem (TSP) remains a persistent challenge, despite its fundamental role in numerous modern applications. Heuristic solvers address the demand for finding high-quality solutions efficiently. Among these solvers, the Lin-Kernighan-Helsgaun (LKH) heuristic stands out, as it complements the performance of genetic algorithms across a diverse range of problem instances. However, frequent timeouts on challenging instances hinder the practical applicability of the solver. Within this work, we investigate a previously overlooked factor contributing to many timeouts: the use of a fixed candidate set based on a tree structure. Our investigations reveal that candidate sets based on Hamiltonian circuits contain more optimal edges. We thus propose to integrate this promising initialization strategy, in the form of POPMUSIC, within an efficient restart version of LKH. As confirmed by our experimental studies, this refined TSP heuristic is much more efficient - causing fewer timeouts and improving the performance (in terms of penalized average runtime) by an order of magnitude - and thereby challenges the state of the art in TSP solving.
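
The penalized average runtime (PAR) score mentioned above can be illustrated with a minimal sketch; the timeout of 3600 seconds and the penalty factor of 10 (PAR10) are illustrative assumptions, not necessarily the cutoffs used in the paper.

    # Minimal sketch of the penalized average runtime metric (here: PAR10).
    # The timeout (3600 s) and penalty factor (10) are illustrative assumptions.
    def par10(runtimes, timeout=3600.0, penalty_factor=10):
        """Average runtime where runs hitting the timeout count as penalty_factor * timeout."""
        penalized = [t if t < timeout else penalty_factor * timeout for t in runtimes]
        return sum(penalized) / len(penalized)

    # Example: three runs on one instance, one of which timed out.
    print(par10([120.5, 3600.0, 84.2]))  # the timed-out run is counted as 36000 s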


Reinvestigating the R2 Indicator: Achieving Pareto Compliance by Integration

arXiv.org Artificial Intelligence

In multi-objective optimization, set-based quality indicators are a cornerstone of benchmarking and performance assessment. They capture the quality of a set of trade-off solutions by reducing it to a scalar number. One of the most commonly used set-based metrics is the R2 indicator, which describes the expected utility of a solution set to a decision-maker under a distribution of utility functions. Typically, this indicator is applied by discretizing the distribution of utility functions, yielding a weakly Pareto-compliant indicator. As a consequence, adding a nondominated or dominating solution to a solution set may - but does not have to - improve the indicator's value. In this paper, we reinvestigate the R2 indicator under the premise that we have a continuous, uniform distribution of (Tchebycheff) utility functions. We analyze its properties in detail, demonstrating that this continuous variant is indeed Pareto-compliant - that is, any beneficial solution will improve the metric's value. Additionally, we provide an efficient procedure to compute this metric for bi-objective problems in $\mathcal{O}(N \log N)$. As a result, this work adds to the state-of-the-art Pareto-compliant unary performance metrics, such as the hypervolume indicator, by offering an efficient and promising alternative.
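
For contrast with the continuous variant studied in the paper, the following minimal sketch computes the standard discretized R2 indicator with Tchebycheff utilities; the weight grid, ideal point, and objective vectors are illustrative choices, and this is not the paper's $\mathcal{O}(N \log N)$ procedure.

    import numpy as np

    def r2_discretized(points, weights, ideal):
        """Discretized R2 indicator with Tchebycheff utilities (minimization; lower is better).
        points: (N, m) objective vectors, weights: (K, m) weight vectors, ideal: (m,) utopian point."""
        # Tchebycheff utility of every point under every weight vector:
        # u[k, n] = max_j weights[k, j] * |points[n, j] - ideal[j]|
        u = np.max(weights[:, None, :] * np.abs(points[None, :, :] - ideal), axis=2)
        # Best (smallest) utility per weight vector, averaged over the weight set.
        return np.mean(np.min(u, axis=1))

    # Example: a bi-objective solution set under 101 uniformly spread weight vectors.
    pts = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
    w1 = np.linspace(0.0, 1.0, 101)
    weights = np.column_stack([w1, 1.0 - w1])
    print(r2_discretized(pts, weights, ideal=np.zeros(2)))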


Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multi-Objective Continuous Optimization Problems

arXiv.org Artificial Intelligence

In many recent works, the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize, in particular, single-objective continuous optimization problems has been demonstrated. These numerical features provide the input for all kinds of machine learning tasks on continuous optimization problems, ranging, among others, from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems would be hardly possible. Yet, despite their undisputed usefulness, ELA features suffer from several drawbacks. These include, in particular, (1.) a strong correlation between multiple features, as well as (2.) their very limited applicability to multi-objective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA. In these works, e.g., point-cloud transformers were used to characterize an optimization problem's fitness landscape. However, these approaches require a large amount of labeled training data. Within this work, we propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features. Specifically, we pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multi-objective optimization problems. Our proposed framework can either be used out-of-the-box for analyzing single- and multi-objective continuous optimization problems, or subsequently fine-tuned to various tasks focusing on algorithm behavior and problem understanding.
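
To give a flavor of what ELA-style inputs look like, the sketch below computes a few classic landscape summaries (y-distribution statistics and fitness-distance correlation) from a random sample; it is a toy illustration with assumed sample sizes and bounds, not the feature sets or the Deep-ELA transformers described in the paper.

    import numpy as np
    from scipy.stats import kurtosis, skew

    def simple_ela_features(f, dim, n_samples=250, lower=-5.0, upper=5.0, seed=0):
        """Toy ELA-style summary of a continuous single-objective function f:
        y-distribution statistics plus fitness-distance correlation (FDC)."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lower, upper, size=(n_samples, dim))
        y = np.apply_along_axis(f, 1, X)
        dist_to_best = np.linalg.norm(X - X[np.argmin(y)], axis=1)
        return {
            "y_skewness": skew(y),
            "y_kurtosis": kurtosis(y),
            "fdc": np.corrcoef(y, dist_to_best)[0, 1],
        }

    # Example: the multimodal Rastrigin function in five dimensions.
    rastrigin = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
    print(simple_ela_features(rastrigin, dim=5))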


Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games

arXiv.org Artificial Intelligence

The term Procedural Content Generation (PCG) refers to the (semi-)automatic generation of game content by algorithmic means, and its methods are becoming increasingly popular in game-oriented research and industry. A special class of these methods, commonly known as search-based PCG, treats the given task as an optimisation problem. Such problems are predominantly tackled by evolutionary algorithms. We demonstrate in this paper that obtaining more information about the defined optimisation problem can substantially improve our understanding of how to approach the generation of content. To do so, we present and discuss three efficient analysis tools, namely diagonal walks, the estimation of high-level properties, and problem similarity measures. We discuss the purpose of each of the considered methods in the context of PCG and provide guidelines for the interpretation of the obtained results. This way, we aim to provide methods for the comparison of PCG approaches and, eventually, to increase the quality and practicality of generated content in industry.
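
As a rough illustration of the first tool, a diagonal walk can be sketched as evaluating the fitness function along the main diagonal of a box-constrained search space; the step count, bounds, and toy fitness function below are assumptions, and the paper's actual tooling is more elaborate.

    import numpy as np

    def diagonal_walk(fitness, lower, upper, n_steps=100):
        """Evaluate the fitness function along the main diagonal of a box-constrained
        search space, from the lower to the upper corner."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        ts = np.linspace(0.0, 1.0, n_steps)
        points = lower + ts[:, None] * (upper - lower)  # equally spaced points on the diagonal
        return points, np.array([fitness(p) for p in points])

    # Example: walk through a two-dimensional toy content-quality landscape.
    points, values = diagonal_walk(lambda x: np.sin(3 * x[0]) + x[1] ** 2,
                                   lower=[-2, -2], upper=[2, 2])
    print(values[:5])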


Mixture of Decision Trees for Interpretable Machine Learning

arXiv.org Artificial Intelligence

This work introduces a novel interpretable machine learning method called Mixture of Decision Trees (MoDT). It constitutes a special case of the Mixture of Experts ensemble architecture that utilizes a linear model as its gating function and decision trees as experts. Our proposed method is ideally suited for problems that cannot be satisfactorily learned by a single decision tree but can be divided into subproblems, each of which can then be learned well by a single decision tree. Therefore, MoDT can be considered a method that improves performance while maintaining interpretability by making each of its decisions understandable and traceable to humans. Our work is accompanied by a Python implementation, which uses an interpretable gating function, a fast learning algorithm, and a direct interface to fine-tuned interpretable visualization methods. The experiments confirm that the implementation works and, more importantly, show the superiority of our approach compared to single decision trees and random forests of similar complexity.
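
A heavily simplified sketch of the general idea - a linear gate routing samples to small decision-tree experts - is shown below; the clustering-based training shortcut, hyperparameters, and class names are illustrative assumptions and do not reproduce the MoDT learning algorithm or its Python implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    class SimpleMixtureOfTrees:
        """Simplified illustration: a linear (softmax) gate routes each sample to one
        shallow decision-tree expert. Training is shortcut via clustering and is not
        the MoDT learning procedure."""

        def __init__(self, n_experts=3, max_depth=3, random_state=0):
            self.partition = KMeans(n_clusters=n_experts, n_init=10, random_state=random_state)
            self.gate = LogisticRegression(max_iter=1000)
            self.experts = [DecisionTreeClassifier(max_depth=max_depth, random_state=random_state)
                            for _ in range(n_experts)]

        def fit(self, X, y):
            X, y = np.asarray(X), np.asarray(y)
            regions = self.partition.fit_predict(X)  # crude split into subproblems
            self.gate.fit(X, regions)                # interpretable linear gate
            for k, expert in enumerate(self.experts):
                expert.fit(X[regions == k], y[regions == k])  # one small tree per region
            return self

        def predict(self, X):
            X = np.asarray(X)
            regions = self.gate.predict(X)
            y_pred = np.empty(len(X), dtype=int)     # assumes integer class labels
            for k, expert in enumerate(self.experts):
                mask = regions == k
                if mask.any():
                    y_pred[mask] = expert.predict(X[mask])
            return y_pred

    # Usage (with integer class labels):
    #   model = SimpleMixtureOfTrees().fit(X_train, y_train)
    #   y_hat = model.predict(X_test)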


HPO X ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis

arXiv.org Artificial Intelligence

Hyperparameter optimization (HPO) is a key step for achieving peak predictive performance with machine learning models. While numerous methods and algorithms for HPO have been proposed in recent years, little progress has been made in illuminating and examining the actual structure of these black-box optimization problems. Exploratory landscape analysis (ELA) subsumes a set of techniques that can be used to gain knowledge about properties of unknown optimization problems. In this paper, we evaluate the performance of five different black-box optimizers on 30 HPO problems, which consist of two-, three- and five-dimensional continuous search spaces of the XGBoost learner trained on 10 different data sets. This is contrasted with the performance of the same optimizers evaluated on 360 problem instances from the black-box optimization benchmark (BBOB). We then compute ELA features on the HPO and BBOB problems and examine similarities and differences. A cluster analysis of the HPO and BBOB problems in ELA feature space allows us to identify how the HPO problems compare to the BBOB problems on a structural meta-level. We identify a subset of BBOB problems that are close to the HPO problems in ELA feature space and show that optimizer performance is comparably similar on these two sets of benchmark problems. We highlight open challenges of ELA for HPO and discuss potential directions of future research and applications.
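
The following sketch shows how an HPO problem of this kind can be wrapped as a continuous black-box function; the dataset, the two-dimensional search space (log learning rate and tree depth), and the cross-validation setup are illustrative assumptions rather than the exact benchmark configuration of the paper.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    X, y = load_breast_cancer(return_X_y=True)

    def hpo_objective(theta):
        """Treat XGBoost hyperparameter tuning as a continuous black-box problem:
        theta = (log10 learning rate, max depth); returns the cross-validated error."""
        model = XGBClassifier(learning_rate=10.0 ** theta[0],   # e.g. theta[0] in [-3, 0]
                              max_depth=int(round(theta[1])),   # e.g. theta[1] in [1, 10]
                              n_estimators=100, eval_metric="logloss")
        return 1.0 - cross_val_score(model, X, y, cv=5).mean()

    # Such a wrapper can be sampled like a BBOB function, e.g. to compute ELA features
    # on the resulting (theta, error) design.
    print(hpo_objective(np.array([-1.0, 3.0])))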


Multiobjectivization of Local Search: Single-Objective Optimization Benefits From Multi-Objective Gradient Descent

arXiv.org Artificial Intelligence

Multimodality is one of the biggest difficulties for optimization, as local optima often prevent algorithms from making progress. This not only challenges local strategies, which can get stuck; it also hinders meta-heuristics like evolutionary algorithms from converging to the global optimum. In this paper, we present a new concept of gradient descent that is able to escape local traps. It relies on multiobjectivization of the original problem and applies the recently proposed, and here slightly modified, multi-objective local search mechanism MOGSA. We use a sophisticated visualization technique for multi-objective problems to demonstrate the working principle of our idea. As such, this work highlights the transfer of new insights from the multi-objective to the single-objective domain and provides first visual evidence that multiobjectivization can link single-objective local optima in multimodal landscapes.
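
The core idea of a multi-objective descent step can be sketched for two objectives as moving along the bisector of the normalized negative gradients, so that the step vanishes on the locally efficient set; the step size and the toy quadratic objectives below are assumptions, and the sketch is not an implementation of MOGSA.

    import numpy as np

    def mo_gradient_step(x, grad_f1, grad_f2, step_size=0.05):
        """One bi-objective descent step along the bisector of the normalized negative
        gradients. On the locally efficient set the normalized gradients cancel and
        the step (approximately) vanishes."""
        g1, g2 = grad_f1(x), grad_f2(x)
        direction = -(g1 / (np.linalg.norm(g1) + 1e-12) + g2 / (np.linalg.norm(g2) + 1e-12))
        return x + step_size * direction

    # Example: two quadratics with different optima; the iterate is pulled toward the
    # efficient set, i.e. the segment between (1, 0) and (-1, 0).
    grad_f1 = lambda x: 2 * (x - np.array([1.0, 0.0]))
    grad_f2 = lambda x: 2 * (x - np.array([-1.0, 0.0]))
    x = np.array([0.5, 2.0])
    for _ in range(100):
        x = mo_gradient_step(x, grad_f1, grad_f2)
    print(x)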


Deep Learning as a Competitive Feature-Free Approach for Automated Algorithm Selection on the Traveling Salesperson Problem

arXiv.org Machine Learning

The Traveling Salesperson Problem (TSP) is a classical NP-hard optimization problem of utmost relevance, e.g., in transportation logistics, bioinformatics, or circuit board fabrication. The goal is to route a salesperson through a set of cities such that each city is visited exactly once and the tour is of minimal length. In the past decades, tremendous progress has been made in the development of high-performing heuristic TSP solvers. The local search-based Lin-Kernighan heuristic (LKH) [14] and the genetic algorithm Edge-Assembly-Crossover (EAX) [35], along with their respective restart versions introduced in Kotthoff et al. [25], undeniably represent the state of the art in inexact TSP solving. Automated Algorithm Selection (AS), originally proposed by Rice [39] back in 1976, is a powerful framework to predict the best-performing solver(s) from a portfolio of candidate solvers by means of machine learning. It has been successfully applied to a wide spectrum of challenging optimization problems in both the combinatorial [24, 29, 30, 40, 48] and continuous domain [21, 4], with partly astonishing performance gains - see the recent survey by Kerschke et al. [19] for a comprehensive overview. In particular, the TSP was subject to several successful AS studies [25, 20, 33, 34, 37], which exploited the complementary performance profiles of simple heuristics on the one hand and the state-of-the-art solvers LKH and EAX on classical TSP benchmark sets on the other hand.
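
A common feature-free representation feeds the raw city coordinates to a neural network, e.g. as a density image for a convolutional model; the sketch below renders an instance on a grid, with the resolution and normalization being illustrative assumptions rather than the exact input pipeline used in this study.

    import numpy as np

    def instance_to_image(coords, resolution=64):
        """Render a Euclidean TSP instance (n x 2 city coordinates) as a normalized
        2-D density grid - a typical feature-free input for convolutional networks."""
        coords = np.asarray(coords, dtype=float)
        lo, hi = coords.min(axis=0), coords.max(axis=0)
        coords = (coords - lo) / (hi - lo + 1e-12)             # rescale cities to [0, 1]^2
        idx = np.minimum((coords * resolution).astype(int), resolution - 1)
        image = np.zeros((resolution, resolution))
        np.add.at(image, (idx[:, 1], idx[:, 0]), 1.0)          # accumulate city counts per cell
        return image / image.max()

    # Example: a random 500-city instance rendered as a 64 x 64 grid.
    rng = np.random.default_rng(1)
    print(instance_to_image(rng.uniform(size=(500, 2))).shape)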


Empirical Study on the Benefits of Multiobjectivization for Solving Single-Objective Problems

arXiv.org Artificial Intelligence

Whether in production, logistics, medicine, or biology - everywhere, the globally optimal solution or the set of globally optimal solutions is sought. However, most real-world problems are of nonlinear nature and naturally multimodal, which poses severe problems to global optimization. Multimodality, the existence of multiple (local) optima, is regarded as one of the biggest challenges for continuous single-objective problems [23]. Many algorithms get stuck searching for the global optimum or require many function evaluations to escape local optima. One of the most popular strategies for dealing with multimodal problems is the use of population-based methods like evolutionary algorithms, due to their global search abilities [2]. In this paper, we examine another approach to coping with local traps, namely multiobjectivization. By transforming a single-objective problem into a multi-objective one, we aim at exploiting the properties of multi-objective landscapes. So far, the characteristics of single-objective optimization problems have often been directly transferred to the multi-objective domain.
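
One generic multiobjectivization scheme keeps the original objective and adds a helper objective, e.g. the distance to an anchor solution, so that solutions trapped in different basins may become mutually nondominated; the sketch below illustrates this scheme with assumed choices and is not necessarily the transformation studied in the paper.

    import numpy as np

    def multiobjectivize(f, anchor):
        """Keep the original objective and add a helper objective (here: the distance
        to an anchor solution). Solutions trapped in different basins of f may become
        mutually nondominated in the resulting bi-objective problem."""
        anchor = np.asarray(anchor, dtype=float)
        return lambda x: (f(x), float(np.linalg.norm(np.asarray(x, dtype=float) - anchor)))

    # Example: a multimodal single-objective function turned bi-objective.
    rastrigin = lambda x: 10 * len(x) + sum(xi ** 2 - 10 * np.cos(2 * np.pi * xi) for xi in x)
    bi_objective = multiobjectivize(rastrigin, anchor=[0.0, 0.0])
    print(bi_objective([1.0, 1.0]), bi_objective([-2.0, 0.5]))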


Anytime Behavior of Inexact TSP Solvers and Perspectives for Automated Algorithm Selection

arXiv.org Artificial Intelligence

The Traveling Salesperson Problem (TSP) is arguably one of the best-known NP-hard combinatorial optimization problems. The two sophisticated heuristic solvers LKH and EAX, as well as their respective (restart) variants, manage to calculate close-to-optimal or even optimal solutions in reasonable time, also for large instances with several thousand nodes. In this work, we extend existing benchmarking studies by addressing the anytime behaviour of inexact TSP solvers based on empirical runtime distributions, leading to an increased understanding of solver behaviour and its relation to problem hardness. It turns out that the performance ranking of solvers is highly dependent on the targeted approximation quality. Insights into the intersection points of the solvers' performance curves offer huge potential for the construction of hybridized solvers depending on instance features. Moreover, instance features tailored to anytime performance and corresponding performance indicators will substantially improve automated algorithm selection models by including comprehensive information on solver quality.
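
Anytime behaviour based on empirical runtime distributions can be sketched as follows: for each run, record the first time a target approximation quality is reached and aggregate these hitting times into an empirical distribution over time budgets; the 1% gap, the budget, and the toy trajectories below are illustrative assumptions.

    import numpy as np

    def runtime_ecdf(trajectories, optimum, quality=0.01, budget=3600.0):
        """For each run, record the first time at which the tour length is within
        `quality` (relative gap) of the optimum; return the fraction of runs that
        achieved this for a grid of time budgets."""
        hit_times = []
        for trajectory in trajectories:              # trajectory: list of (time, tour_length)
            hits = [t for t, length in trajectory if (length - optimum) / optimum <= quality]
            hit_times.append(min(hits) if hits else np.inf)
        grid = np.linspace(0.0, budget, 100)
        hit_times = np.array(hit_times)
        return grid, np.array([(hit_times <= t).mean() for t in grid])

    # Example: two runs of a solver on one instance (times in seconds).
    runs = [[(1.0, 1100.0), (10.0, 1020.0), (60.0, 1005.0)],
            [(1.0, 1150.0), (30.0, 1008.0)]]
    grid, ecdf = runtime_ecdf(runs, optimum=1000.0)
    print(ecdf[-1])  # fraction of runs reaching a 1% gap within the budget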