Explainable Benchmarking for Iterative Optimization Heuristics

van Stein, Niki, Vermetten, Diederick, Kononova, Anna V., Bäck, Thomas

arXiv.org Artificial Intelligence 

Traditional benchmarking methods often evaluate algorithms in isolation, using a single configuration (hyper-parameter setting) or only a few variations, against a limited set of state-of-the-art algorithms, which yields limited insight into their comparative performance and practical applicability. This study addresses these challenges by combining modular optimization approaches with explainable AI techniques to derive insights into the algorithmic behaviour of a large set of algorithm components (modules) and their hyper-parameters. Modular optimization frameworks allow various modifications of a core algorithm to be compared, facilitating a deeper understanding of each component's influence on the algorithm's performance in different scenarios. A wide variety of modular algorithm frameworks already exists, but their use for explicitly explaining the effects of individual algorithmic components and settings has remained relatively unexplored. This paper aims to bridge this gap by providing a comprehensive framework for explainable benchmarking of iterative optimization heuristics, together with a software library (IOH-Xplainer) that helps researchers apply the proposed framework.
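To illustrate the general idea behind such explainable benchmarking (not the IOH-Xplainer API itself), the sketch below fits a surrogate model that maps modular configuration choices to observed benchmark performance and then uses SHAP values to attribute performance differences to individual modules and hyper-parameters. The column names, data values, and module switches are purely hypothetical placeholders.

```python
# Minimal sketch, assuming hypothetical benchmark data; not the IOH-Xplainer API.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
import shap

# Hypothetical benchmarking results: one row per configuration run, with
# module switches encoded as integers and a performance score per row.
data = pd.DataFrame({
    "elitist":           [0, 1, 0, 1, 0, 1, 0, 1],
    "mirrored_sampling": [0, 0, 1, 1, 0, 0, 1, 1],
    "population_size":   [10, 10, 10, 10, 40, 40, 40, 40],
    "performance":       [0.61, 0.66, 0.63, 0.71, 0.58, 0.69, 0.62, 0.74],
})

X = data.drop(columns="performance")
y = data["performance"]

# Surrogate model linking configuration choices to performance.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP values quantify each module's contribution to predicted performance,
# giving an explainable view of which components matter most.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(pd.DataFrame(shap_values, columns=X.columns).abs().mean())
```

Averaging the absolute SHAP values per column, as in the last line, gives a simple global importance ranking of the modules and hyper-parameters under the stated assumptions.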