
Collaborating Authors

 Bossek, Jakob


Computing Diverse Sets of Solutions for Monotone Submodular Optimisation Problems

arXiv.org Artificial Intelligence

Submodular functions can be used to model many real-world optimisation problems. This paper introduces approaches for computing diverse sets of high-quality solutions for submodular optimisation problems. We first present diversifying greedy sampling approaches and analyse them with respect to diversity, measured by entropy, and the approximation quality of the obtained solutions. Afterwards, we introduce an evolutionary diversity optimisation approach to further improve the diversity of the set of solutions. We carry out experimental investigations on popular submodular benchmark functions which show that the combined approaches achieve high-quality solutions of high diversity.
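The greedy baseline that such sampling approaches build on can be sketched with the textbook greedy rule for monotone submodular maximisation under a cardinality constraint, which carries the well-known (1 - 1/e) approximation guarantee. The maximum-coverage objective and data below are illustrative only, not taken from the paper's benchmarks:

```python
def greedy_submodular(elements, f, k):
    """Greedy maximisation of a monotone submodular set function f
    under the cardinality constraint |S| <= k."""
    S = set()
    for _ in range(k):
        # pick the element with the largest marginal gain f(S + e) - f(S)
        best = max((e for e in elements if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Example: maximum coverage, a classic monotone submodular objective.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}

def coverage(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

print(greedy_submodular(sets.keys(), coverage, 2))  # -> {0, 2}
```

The paper's diversifying variants go further by also rewarding entropy of the solution set; the plain greedy rule above is only the quality-focused starting point.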


Deep Learning as a Competitive Feature-Free Approach for Automated Algorithm Selection on the Traveling Salesperson Problem

arXiv.org Machine Learning

The Traveling Salesperson Problem (TSP) is a classical NP-hard optimization problem of utmost relevance, e.g., in transportation logistics, bioinformatics or circuit board fabrication. The goal is to route a salesperson through a set of cities such that each city is visited exactly once and the tour is of minimal length. In the past decades tremendous progress has been made in the development of high-performing heuristic TSP solvers. The local search-based Lin-Kernighan Heuristic (LKH) [14] and the genetic algorithm Edge-Assembly-Crossover (EAX) [35], along with their respective restart versions introduced in Kotthoff et al. [25], undeniably represent the state of the art in inexact TSP solving. Automated Algorithm Selection (AS), originally proposed by Rice [39] back in 1976, is a powerful framework for predicting the best-performing solver(s) from a portfolio of candidate solvers by means of machine learning. It has been successfully applied to a wide spectrum of challenging optimization problems in both the combinatorial [24, 29, 30, 40, 48] and continuous domains [21, 4], with partly astonishing performance gains; see the recent survey by Kerschke et al. [19] for a comprehensive overview. In particular, the TSP has been the subject of several successful AS studies [25, 20, 33, 34, 37], which exploited the complementary performance profiles of simple heuristics on the one hand and the state-of-the-art solvers LKH and EAX on classical TSP benchmark sets on the other.
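The core idea of per-instance algorithm selection, learning a mapping from instance features to the best-performing solver, can be sketched with a toy 1-nearest-neighbour selector. The features, runtimes, and solver labels below are invented for illustration and are not real benchmark data:

```python
import math

# Toy algorithm selection: predict the faster of two solvers ("EAX" vs. "LKH")
# from simple instance features via 1-nearest-neighbour on known instances.
train = [
    # (instance features, runtime per solver in seconds) -- invented values
    ((0.2, 0.9), {"EAX": 1.2, "LKH": 3.4}),
    ((0.8, 0.1), {"EAX": 5.0, "LKH": 0.9}),
    ((0.3, 0.7), {"EAX": 1.5, "LKH": 2.8}),
    ((0.9, 0.2), {"EAX": 4.1, "LKH": 1.1}),
]

def best_solver(runtimes):
    return min(runtimes, key=runtimes.get)

def select_solver(features):
    # label each training instance with its fastest solver, then 1-NN lookup
    nearest = min(train, key=lambda inst: math.dist(inst[0], features))
    return best_solver(nearest[1])

print(select_solver((0.22, 0.85)))  # region where EAX was faster
print(select_solver((0.85, 0.12)))  # region where LKH was faster
```

Real AS models replace the nearest-neighbour lookup with stronger learners (e.g. random forests) trained on rich instance features, but the feature-to-solver mapping is the same.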


Dynamic Bi-Objective Routing of Multiple Vehicles

arXiv.org Artificial Intelligence

Routing of multiple vehicles is an important and difficult problem with applications in the logistics domain [1], especially in the area of customer servicing [2]. In postal services, after-sales services, and business-to-business delivery or pick-up services, one or more vehicles have to be efficiently routed towards customers. If customers can request services over time, the problem becomes dynamic: besides a set of fixed customers, new requests can appear at any point in time. Of course, it is desirable that as many customers as possible are serviced while the tour of each vehicle is kept short. However, it is usually infeasible (due to human resources, labor regulations, or other constraints) to service all customer requests. And clearly, the fewer customers are left unserviced, the longer the tours become. Thus, the problem is inherently multi-objective. Any efficient solution (smallest maximum tour across all vehicles) is a compromise between the desire to service as many customers as possible (e.g.
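The bi-objective trade-off described above (shortest maximum tour vs. fewest unserviced customers) makes Pareto dominance the natural notion of solution quality. A minimal non-dominated filter, on invented objective values, might look like this:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives are minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# (maximum tour length, number of unserviced customers) -- invented values
sols = [(10.0, 5), (12.0, 3), (9.0, 7), (15.0, 1), (13.0, 3)]
print(pareto_front(sols))  # (13.0, 3) is dominated by (12.0, 3)
```

In the dynamic setting, such a front has to be maintained over time as new customer requests arrive and invalidate parts of the current tours.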


Anytime Behavior of Inexact TSP Solvers and Perspectives for Automated Algorithm Selection

arXiv.org Artificial Intelligence

The Traveling Salesperson Problem (TSP) is arguably one of the best-known NP-hard combinatorial optimization problems. The two sophisticated heuristic solvers LKH and EAX, and their respective (restart) variants, manage to calculate close-to-optimal or even optimal solutions in reasonable time, also for large instances with several thousand nodes. In this work we extend existing benchmarking studies by addressing the anytime behaviour of inexact TSP solvers based on empirical runtime distributions, leading to an increased understanding of solver behaviour and its relation to problem hardness. It turns out that the performance ranking of solvers is highly dependent on the targeted approximation quality. Insights into intersection points of performances offer huge potential for the construction of hybridized solvers depending on instance features. Moreover, instance features tailored to anytime performance and corresponding performance indicators will substantially improve automated algorithm selection models by including comprehensive information on solver quality.
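An empirical runtime distribution of this kind records, for a target approximation quality, the fraction of runs that reach it within a given time budget. A minimal sketch on invented solver trajectories:

```python
def ecdf_success(trajectories, optimum, quality, t):
    """Fraction of runs whose incumbent tour is within `quality` times
    the optimum by time t (one point of an empirical runtime distribution)."""
    ok = 0
    for traj in trajectories:  # traj: improving list of (time, tour_length)
        if any(time <= t and length <= quality * optimum
               for time, length in traj):
            ok += 1
    return ok / len(trajectories)

# Invented trajectories of two runs on one instance with optimum 100.
runs = [
    [(0.5, 130), (2.0, 110), (8.0, 101)],
    [(1.0, 120), (3.0, 105), (6.0, 100)],
]
print(ecdf_success(runs, optimum=100, quality=1.05, t=4.0))  # -> 0.5
print(ecdf_success(runs, optimum=100, quality=1.05, t=10.0))  # -> 1.0
```

Sweeping `quality` makes the solver ranking flip at certain thresholds, which is exactly the intersection-point behaviour the study exploits for hybridization.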


OpenML: An R Package to Connect to the Machine Learning Platform OpenML

arXiv.org Machine Learning

OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks and experiments as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its usage in combination with the machine learning R package mlr. We show how the OpenML package allows R users to easily search, download and upload data sets and machine learning tasks. Furthermore, we also show how to upload results of experiments, share them with others and download results from other users. Beyond ensuring reproducibility of results, the OpenML platform automates much of the drudge work, speeds up research, facilitates collaboration and increases the users' visibility online.


mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions

arXiv.org Machine Learning

We present mlrMBO, a flexible and comprehensive R toolbox for model-based optimization (MBO), also known as Bayesian optimization, which addresses the problem of expensive black-box optimization by approximating the given objective function through a surrogate regression model. It is designed for both single- and multi-objective optimization with mixed continuous, categorical and conditional parameters. Additional features include multi-point batch proposal, parallelization, visualization, logging and error-handling. mlrMBO is implemented in a modular fashion, such that single components can be easily replaced or adapted by the user for specific use cases, e.g., any regression learner from the mlr toolbox for machine learning can be used, and infill criteria and infill optimizers are easily exchangeable. We empirically demonstrate that mlrMBO provides state-of-the-art performance by comparing it on different benchmark scenarios against a wide range of other optimizers, including DiceOptim, rBayesianOptimization, SPOT, SMAC, Spearmint, and Hyperopt.
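The MBO loop mlrMBO implements can be sketched in a few lines: evaluate an initial design, fit a surrogate to the observations so far, and repeatedly evaluate the point an infill rule proposes on the surrogate. The Python sketch below (mlrMBO itself is an R package) uses a simple quadratic surrogate and its grid minimiser as infill purely for illustration; mlrMBO supports arbitrary mlr regression learners and infill criteria such as expected improvement:

```python
import numpy as np

def mbo_minimise(f, bounds, n_init=4, iters=6, seed=1):
    """Minimal sequential model-based optimisation sketch (1-D):
    fit a quadratic surrogate to the evaluations so far, then
    evaluate the surrogate's minimiser on a candidate grid next."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = list(rng.uniform(lo, hi, n_init))  # initial design
    y = [f(x) for x in X]
    grid = np.linspace(lo, hi, 201)
    for _ in range(iters):
        coeffs = np.polyfit(X, y, deg=2)                     # surrogate model
        x_next = grid[np.argmin(np.polyval(coeffs, grid))]   # infill proposal
        X.append(float(x_next))
        y.append(f(x_next))                                  # expensive eval
    best = int(np.argmin(y))
    return X[best], y[best]

# Stand-in for an expensive black box: quadratic with optimum at x = 1.7.
x_best, y_best = mbo_minimise(lambda x: (x - 1.7) ** 2 + 0.5, (0.0, 5.0))
print(round(x_best, 2), round(y_best, 2))
```

A purely exploitative infill like this surrogate minimum can stall on multimodal functions, which is why practical infill criteria trade off the surrogate's predicted mean against its uncertainty.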