Neumann, Frank
Evolving Hard Maximum Cut Instances for Quantum Approximate Optimization Algorithms
Pan, Shuaiqun, Patel, Yash J., Neumann, Aneta, Neumann, Frank, Bäck, Thomas, Wang, Hao
Variational quantum algorithms, such as the Recursive Quantum Approximate Optimization Algorithm (RQAOA), have become increasingly popular, offering promising avenues for employing Noisy Intermediate-Scale Quantum devices to address challenging combinatorial optimization tasks like the maximum cut problem. In this study, we utilize an evolutionary algorithm equipped with a unique fitness function. This approach targets hard maximum cut instances within the latent space of a Graph Autoencoder, identifying those that pose significant challenges or are particularly tractable for RQAOA, in contrast to the classical Goemans-Williamson algorithm. Our findings not only delineate the distinct capabilities and limitations of each algorithm but also expand our understanding of RQAOA's operational limits. Furthermore, the diverse set of graphs we have generated serves as a crucial benchmarking asset, emphasizing the need for more advanced algorithms to tackle combinatorial optimization challenges. Additionally, our results pave the way for new avenues in graph generation research, offering exciting opportunities for future explorations.
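The objective that both RQAOA and the Goemans-Williamson algorithm approximate is the maximum cut value of a graph. A minimal sketch of evaluating that objective (graph and partition are illustrative, not instances from the paper):

```python
def cut_value(edges, partition):
    """Number of edges crossing the partition (a dict: vertex -> 0 or 1)."""
    return sum(1 for u, v in edges if partition[u] != partition[v])

# Triangle graph: any non-trivial bipartition cuts exactly 2 of the 3 edges.
edges = [(0, 1), (1, 2), (0, 2)]
print(cut_value(edges, {0: 0, 1: 1, 2: 1}))  # → 2
```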
Theoretical Analysis of Quality Diversity Algorithms for a Classical Path Planning Problem
Dang, Duc-Cuong, Neumann, Aneta, Neumann, Frank, Opris, Andre, Sudholt, Dirk
In recent years, computing diverse sets of high quality solutions for combinatorial optimisation problems has gained significant attention in the area of artificial intelligence from both theoretical (Baste et al., 2022, 2019; Fomin et al., 2024; Hanaka et al., 2023) and experimental (Vonásek and Saska, 2018; Ingmar et al., 2020) perspectives. Prominent examples where diverse sets of high quality solutions are sought come from the area of path planning (Hanaka et al., 2021; Gao et al., 2022). In particular, quality diversity (QD) algorithms have been shown to produce excellent results for challenging problems in areas such as robotics (Miao et al., 2022; Shen et al., 2020), games (Cully and Demiris, 2018) and combinatorial optimisation (Nikfarjam et al., 2024a). This work contributes to the theoretical understanding of QD algorithms. Such algorithms compute several solutions that occupy different areas of a so-called behavioural space. Approaches that use a multidimensional archive of phenotypic elites, called Map-Elites (Mouret and Clune, 2015), are among the most commonly used QD algorithms.
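The core of a Map-Elites-style archive is to keep, per cell of a discretised behavioural space, only the best solution found so far. A minimal sketch (cell coordinates, solutions and fitness values are illustrative):

```python
def update_archive(archive, solution, behaviour_cell, fitness):
    """Insert solution if its cell is empty or it improves the incumbent."""
    incumbent = archive.get(behaviour_cell)
    if incumbent is None or fitness > incumbent[1]:
        archive[behaviour_cell] = (solution, fitness)

archive = {}
update_archive(archive, "a", (0, 1), fitness=3.0)
update_archive(archive, "b", (0, 1), fitness=5.0)  # replaces "a": higher fitness
update_archive(archive, "c", (2, 2), fitness=1.0)  # fills a new cell
print(len(archive), archive[(0, 1)][0])  # → 2 b
```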
Optimizing Monotone Chance-Constrained Submodular Functions Using Evolutionary Multi-Objective Algorithms
Neumann, Aneta, Neumann, Frank
Many real-world optimization problems can be stated in terms of submodular functions. Furthermore, these real-world problems often involve uncertainties which may lead to the violation of given constraints. Many evolutionary multi-objective algorithms following the Pareto optimization approach have recently been analyzed and applied to submodular problems with different types of constraints. We present a first runtime analysis of evolutionary multi-objective algorithms based on Pareto optimization for chance-constrained submodular functions. Here the constraint involves stochastic components and may only be violated with a small probability $\alpha$. We investigate the classical GSEMO algorithm for two different bi-objective formulations using tail bounds to determine the feasibility of solutions. We show that the algorithm GSEMO obtains the same worst-case performance guarantees for monotone submodular functions as recently analyzed greedy algorithms for the case of uniform IID weights and uniformly distributed weights with the same dispersion when using the appropriate bi-objective formulation. As part of our investigations, we also point out situations where the use of tail bounds in the first bi-objective formulation can prevent GSEMO from obtaining good solutions in the case of uniformly distributed weights with the same dispersion if the objective function is submodular but non-monotone due to a single element impacting monotonicity. Furthermore, we investigate the behavior of the evolutionary multi-objective algorithms GSEMO, NSGA-II and SPEA2 on different submodular chance-constrained network problems. Our experimental results show that the use of evolutionary multi-objective algorithms leads to significant performance improvements compared to state-of-the-art greedy algorithms for submodular optimization.
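A common way to turn a tail bound into a feasibility check is the one-sided Chebyshev (Cantelli) inequality: a solution is deemed feasible for $\Pr[W(x) > B] \le \alpha$ whenever $E[W] + \sqrt{\tfrac{1-\alpha}{\alpha}\,\mathrm{Var}[W]} \le B$. A sketch of this surrogate check (item data is illustrative, and this is one possible tail bound, not necessarily the exact one analyzed in the paper):

```python
import math

def chebyshev_feasible(means, variances, selected, bound, alpha):
    """Surrogate feasibility: E[W] + sqrt((1 - alpha)/alpha * Var[W]) <= B."""
    mu = sum(means[i] for i in selected)
    var = sum(variances[i] for i in selected)
    return mu + math.sqrt((1 - alpha) / alpha * var) <= bound

# Two items, mean weight 1 each, variance 0.25 each, alpha = 0.2:
# surrogate weight = 2 + sqrt(4 * 0.5) = 2 + sqrt(2) ≈ 3.41.
print(chebyshev_feasible([1, 1], [0.25, 0.25], [0, 1], bound=3.5, alpha=0.2))  # → True
```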
Local Optima in Diversity Optimization: Non-trivial Offspring Population is Essential
Antipov, Denis, Neumann, Aneta, Neumann, Frank
The main goal of diversity optimization is to find a diverse set of solutions which satisfy some lower bound on their fitness. Evolutionary algorithms (EAs) are often used for such tasks, since they are naturally designed to optimize populations of solutions. This approach to diversity optimization, called EDO, has previously been studied from a theoretical perspective, but most studies considered only EAs with a trivial offspring population such as the $(\mu + 1)$ EA. In this paper we give an example instance of the $k$-vertex cover problem, which highlights a critical difference between diversity optimization and regular single-objective optimization, namely that there might be a locally optimal population from which we can escape only by replacing at least two individuals at once, which the $(\mu + 1)$ algorithms cannot do. We also show that the $(\mu + \lambda)$ EA with $\lambda \ge \mu$ can effectively find a diverse population on $k$-vertex cover when using a mutation operator inspired by Branson and Sutton (TCS 2023). To avoid the problem of subset selection which arises in the $(\mu + \lambda)$ EA when it optimizes diversity, we also propose the $(1_\mu + 1_\mu)$ EA$_D$, which is an analogue of the $(1 + 1)$ EA for populations, and which is also efficient at optimizing diversity on the $k$-vertex cover problem.
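One standard way to score the diversity of a population of bit strings in EDO is the total pairwise Hamming distance. A minimal sketch (the measure and population are illustrative, not the specific measure analyzed in the paper):

```python
from itertools import combinations

def hamming(x, y):
    """Number of positions in which two equal-length strings differ."""
    return sum(a != b for a, b in zip(x, y))

def population_diversity(population):
    """Total pairwise Hamming distance over the population."""
    return sum(hamming(x, y) for x, y in combinations(population, 2))

pop = ["1100", "0011", "1010"]
print(population_diversity(pop))  # 4 + 2 + 2 → 8
```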
Archive-based Single-Objective Evolutionary Algorithms for Submodular Optimization
Neumann, Frank, Rudolph, Günter
Many combinatorial optimization problems that face diminishing returns can be stated in terms of a submodular function under a given set of constraints [7]. The maximization of a non-monotone submodular function even without constraints includes the classical maximum cut problem in graphs and is therefore an NP-hard combinatorial optimization problem that cannot be solved in polynomial time unless P = NP, but different types of approximation algorithms are available [2]. Monotone submodular functions play a special role in the area of optimization as they capture important coverage and influence maximization problems in networks. The maximization of monotone submodular functions is NP-hard even for the case of a simple constraint that limits the number of elements that can be chosen, but greedy algorithms have been shown to obtain the best possible approximation guarantees for different types of constraints [7, 8]. At best, one can hope to develop a method that can provide an $\alpha$-approximation in polynomial time, i.e., a solution with a value of at least $\alpha \cdot f(x^*)$, where $x^*$ denotes an optimal solution.
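The classical greedy algorithm for maximizing a monotone submodular function under a cardinality constraint repeatedly adds the element with the largest marginal gain; this achieves the best possible $(1 - 1/e)$ approximation for that constraint. A sketch using a coverage function as the submodular objective (the sets and sizes are illustrative):

```python
def greedy(ground_set, f, k):
    """Pick k elements, each time maximizing the marginal gain of f."""
    solution = set()
    for _ in range(k):
        best = max(ground_set - solution,
                   key=lambda e: f(solution | {e}) - f(solution))
        solution.add(best)
    return solution

# Coverage function: f(S) = number of elements covered by the chosen sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
result = greedy(set(sets), f, 2)
print(f(result))  # set 0 is chosen first; either remaining set covers 4 elements total
```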
Sliding Window 3-Objective Pareto Optimization for Problems with Chance Constraints
Neumann, Frank, Witt, Carsten
Multi-objective formulations have been widely used to solve single-objective optimization problems. The initial study carried out by Knowles et al. [8] for the H-IFF and the traveling salesperson problem shows that such formulations can significantly reduce the number of local optima in the search space and uses the term multi-objectivization for such approaches. Using multi-objective formulations to solve constrained single-objective optimization problems by evolutionary multi-objective optimization, with the constraint as an additional objective, has been shown to be highly beneficial for a wide range of problems [4,9,12]. Using the constraint as an additional objective for such problems allows simple evolutionary multi-objective algorithms such as GSEMO to mimic greedy behaviour and, as a consequence, to achieve theoretically best possible performance guarantees for a wide range of constrained submodular optimization problems [17-19]. Such approaches have recently been widely studied under the term Pareto optimization in the artificial intelligence and machine learning literature [22]. In the context of problems with stochastic constraints, it has recently been shown that 3-objective formulations where the given constraint is relaxed into a third objective lead to better performance than 2-objective formulations that optimize the expected value and variance of the given stochastic components under the given constraint [14, 15].
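Treating the constraint as an additional objective can be sketched concretely: a constrained problem max $f(x)$ s.t. $c(x) \le B$ becomes the bi-objective task of maximizing $(f(x), -c(x))$, so solutions with different constraint usage become incomparable trade-offs rather than infeasible points. A minimal illustration (the functions used are illustrative):

```python
def bi_objective(x, f, c):
    """Constraint as objective: maximize (f(x), -c(x))."""
    return (f(x), -c(x))

def dominates(a, b):
    """Weak dominance in all objectives plus strict improvement somewhere."""
    return all(u >= v for u, v in zip(a, b)) and a != b

# Example: f counts the ones in a bit string, c is a uniform cost (also the ones).
f = sum
c = sum
a = bi_objective((1, 1, 0), f, c)  # (2, -2)
b = bi_objective((1, 0, 0), f, c)  # (1, -1)
print(dominates(a, b), dominates(b, a))  # → False False  (a genuine trade-off)
```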
Sampling-based Pareto Optimization for Chance-constrained Monotone Submodular Problems
Yan, Xiankun, Neumann, Aneta, Neumann, Frank
Recently, surrogate functions based on tail inequalities have been developed to evaluate chance constraints in the context of evolutionary computation, and several Pareto optimization algorithms using these surrogates were successfully applied to optimizing chance-constrained monotone submodular problems. However, the difference in performance between algorithms using the surrogates and those employing direct sampling-based evaluation remains unclear. In this paper, a sampling-based method is proposed to directly evaluate the chance constraint. Furthermore, to address problems with more challenging settings, an enhanced GSEMO algorithm integrated with an adaptive sliding window, called ASW-GSEMO, is introduced. In the experiments, ASW-GSEMO employing the sampling-based approach is tested on the chance-constrained version of the maximum coverage problem with different settings. Its results are compared with those from other algorithms using different surrogate functions. The experimental findings indicate that ASW-GSEMO with the sampling-based evaluation approach outperforms the other algorithms, while the performance of algorithms using different evaluation methods remains comparable overall. Additionally, the behaviors of ASW-GSEMO are visualized to illustrate the distinctions between it and the algorithms utilizing the surrogate functions.
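The sampling-based alternative to a tail-bound surrogate is to estimate the violation probability $\Pr[W(x) > B]$ empirically from repeated draws of the stochastic weights. A sketch with uniformly distributed weights (the distributions, bound and sample count are illustrative):

```python
import random

def estimated_violation(selected, low, high, bound, samples, rng):
    """Monte Carlo estimate of Pr[sum of sampled weights > bound]."""
    violations = 0
    for _ in range(samples):
        w = sum(rng.uniform(low[i], high[i]) for i in selected)
        violations += w > bound
    return violations / samples

rng = random.Random(1)
# Three items with weight ~ U(0.5, 1.5) each; exceeding 4.0 requires all
# three weights to be near their maximum, so the violation probability is small.
p = estimated_violation([0, 1, 2], [0.5] * 3, [1.5] * 3, 4.0, 10000, rng)
print(p <= 0.05)  # → True
```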
Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem
Antipov, Denis, Neumann, Aneta, Neumann, Frank, Sutton, Andrew M.
Diversity optimization is a class of optimization problems in which we aim at finding a diverse set of good solutions. One of the frequently used approaches to solve such problems is to use evolutionary algorithms which evolve a desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyse EDO on a 3-objective function LOTZ$_k$, which is a modification of the 2-objective benchmark function (LeadingOnes, TrailingZeros). We prove that the GSEMO computes the set of all Pareto-optimal solutions in $O(kn^3)$ expected iterations. We also analyze the runtime of the GSEMO$_D$ (a modification of the GSEMO for diversity optimization) until it finds a population with the best possible diversity for two different diversity measures, the total imbalance and the sorted imbalances vector. For the first measure we show that the GSEMO$_D$ optimizes it asymptotically faster than it finds a Pareto-optimal population, in $O(kn^2\log(n))$ expected iterations, and for the second measure we show an upper bound of $O(k^2n^3\log(n))$ expected iterations. We complement our theoretical analysis with an empirical study, which shows a very similar behavior for both diversity measures that is close to the theoretical predictions.
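The two base objectives of the (LeadingOnes, TrailingZeros) benchmark are easy to state directly: LeadingOnes counts the 1-bits at the start of the string and TrailingZeros the 0-bits at its end, so the Pareto-optimal strings are exactly those of the form $1^i 0^{n-i}$. A minimal sketch:

```python
def leading_ones(x):
    """Number of consecutive 1-bits at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def trailing_zeros(x):
    """Number of consecutive 0-bits at the end of the bit string."""
    count = 0
    for bit in reversed(x):
        if bit != 0:
            break
        count += 1
    return count

x = [1, 1, 0, 1, 0, 0]
print(leading_ones(x), trailing_zeros(x))  # → 2 2
```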
Using 3-Objective Evolutionary Algorithms for the Dynamic Chance Constrained Knapsack Problem
Pathiranage, Ishara Hewa, Neumann, Frank, Antipov, Denis, Neumann, Aneta
Real-world optimization problems often involve stochastic and dynamic components. Evolutionary algorithms are particularly effective in these scenarios, as they can easily adapt to uncertain and changing environments; however, uncertainty and dynamic changes are often studied in isolation. In this paper, we explore the use of 3-objective evolutionary algorithms for the chance constrained knapsack problem with dynamic constraints. In our setting, the weights of the items are stochastic and the knapsack's capacity changes over time. We introduce a 3-objective formulation that is able to deal with the stochastic and dynamic components at the same time and is independent of the confidence level required for the constraint. This new approach is then compared to the 2-objective formulation, which is limited to a single confidence level. We evaluate the approach using two different multi-objective evolutionary algorithms (MOEAs), namely the global simple evolutionary multi-objective optimizer (GSEMO) and the multi-objective evolutionary algorithm based on decomposition (MOEA/D), across various benchmark scenarios. Our analysis highlights the advantages of the 3-objective formulation over the 2-objective formulation in addressing the dynamic chance constrained knapsack problem.
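One way such a 3-objective view can be sketched (this is an illustrative reading, not necessarily the exact formulation of the paper) is to trade off profit against both the expected weight and the weight variance, so that a single Pareto front contains the best solution for every confidence level and every capacity:

```python
def three_objectives(selected, profit, mean_w, var_w):
    """Maximize profit; minimize expected weight and weight variance."""
    return (
        sum(profit[i] for i in selected),
        -sum(mean_w[i] for i in selected),
        -sum(var_w[i] for i in selected),
    )

# Illustrative items: same expected weight, but item 1 is far less risky.
profit, mean_w, var_w = [10, 7], [4.0, 4.0], [1.0, 0.2]
print(three_objectives([0], profit, mean_w, var_w))  # → (10, -4.0, -1.0)
print(three_objectives([1], profit, mean_w, var_w))  # → (7, -4.0, -0.2)
```

Neither vector dominates the other: the higher-profit item and the lower-risk item are both kept as trade-offs, independently of the confidence level.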
Enhanced Genetic Programming Models with Multiple Equations for Accurate Semi-Autogenous Grinding Mill Throughput Prediction
Ghasemi, Zahra, Nesht, Mehdi, Aldrich, Chris, Karageorgos, John, Zanin, Max, Neumann, Frank, Chen, Lei
Semi-autogenous grinding (SAG) mills play a pivotal role in the grinding circuit of mineral processing plants. Accurate prediction of SAG mill throughput as a crucial performance metric is of utmost importance. The potential of applying genetic programming (GP) for this purpose has yet to be thoroughly investigated. This study introduces an enhanced GP approach entitled multi-equation GP (MEGP) for more accurate prediction of SAG mill throughput. In the newly proposed method, multiple equations are extracted, each accurately predicting mill throughput for a specific cluster of the training data. These equations are then employed to predict mill throughput for test data using various approaches. To assess the effect of distance measures, four different distance measures are employed in the MEGP method. Comparative analysis reveals that the best MEGP approach achieves an average improvement of 10.74% in prediction accuracy compared with standard GP. In this approach, all extracted equations are utilized, and both the number of data points in each data cluster and the distance to clusters are incorporated in calculating the final prediction. Further investigation of distance measures indicates that among the four metrics employed, namely Euclidean, Manhattan, Chebyshev, and Cosine distance, the Euclidean distance measure yields the most accurate results for the majority of data splits.
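The four distance measures compared in the study can be written down directly; used here to assign a test point to its nearest cluster centre, which is the role they play when selecting among the cluster-specific equations (the centres and test point are illustrative):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def cosine(a, b):
    """Cosine distance: 1 minus the cosine similarity of the two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

centres = [(0.0, 0.0), (3.0, 4.0)]
point = (2.5, 3.0)
nearest = min(range(len(centres)), key=lambda i: euclidean(point, centres[i]))
print(nearest)  # → 1
```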