Evolutionary Systems
[Figures (a, b, c): (a) Contribution of each component in RPS-Net; (b) # Parameters vs. Tasks; (c) RPS-Net vs. iCARL for different # exemplars. Legend: Progressive Nets, RPS-Net, iCARL.]
We thank the reviewers for the constructive feedback. Code will be made public. Figures (a, b, c) are best viewed zoomed in.

R2.1: Difference from PathNet: Our RPS-Net is inspired by PathNet, yet there are notable differences. 1) Architecture: … However, for our case, i.e., 10+ tasks, PathNet is not feasible due to the large number of … See R3.1 for a comparison between random selection (ours) and genetic algorithms (PathNet's).

R2.2: Impact of Varying Exemplars: Fig. (c) compares RPS-Net with the best existing method (iCARL) for various exemplar budgets. Our proposed RPS-Net consistently performs better across all budgets.
Automated and Risk-Aware Engine Control Calibration Using Constrained Bayesian Optimization
Vlaswinkel, Maarten, Antunes, Duarte, Willems, Frank
Decarbonization of the transport sector places increasingly strict demands on Internal Combustion Engines to maximize thermal efficiency and minimize greenhouse gas emissions. This has led to complex engines with a surge in the number of tunable parameters across actuator set points and control settings. Automated calibration is therefore essential to keep development time and costs at acceptable levels. In this work, an innovative self-learning calibration method is presented based on in-cylinder pressure curve shaping. This method combines Principal Component Decomposition with constrained Bayesian Optimization. To realize maximal thermal engine efficiency, the optimization problem aims at minimizing the difference between the actual in-cylinder pressure curve and an Idealized Thermodynamic Cycle. By continuously updating a Gaussian Process Regression model of the pressure curve's Principal Component weights using measurements of the actual operating conditions, the mean in-cylinder pressure curve as well as its uncertainty bounds are learned. This information drives the optimization of calibration parameters, which are automatically adapted while dealing with the risks and uncertainties associated with operational safety and combustion stability. This data-driven method does not require prior knowledge of the system. The proposed method is successfully demonstrated in simulation using a Reactivity Controlled Compression Ignition engine model. The difference between the Gross Indicated Efficiency of the optimal solution found and that of the true optimum is 0.017%. For this complex engine, the optimal solution was found after 64.4 s, which is fast compared to conventional calibration methods.
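The abstract's optimization loop can be illustrated with a minimal sketch of constrained Bayesian Optimization: Gaussian Process surrogates for the pressure-curve mismatch and a safety constraint, and an acquisition that weights expected improvement by the probability of feasibility (the standard constrained-EI recipe). The toy engine model, the pressure limit, and all identifiers below are illustrative assumptions rather than the authors' setup, and the Principal Component Decomposition step is omitted for brevity.

```python
# A minimal sketch, not the authors' code: constrained BO with GP surrogates
# and an expected-improvement-times-probability-of-feasibility acquisition.
# `engine_pressure`, the idealized cycle, and the pressure limit are toy
# assumptions standing in for the real engine and safety constraints.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
theta = np.linspace(0, 1, 100)                   # normalized crank-angle axis
p_ideal = np.exp(-80 * (theta - 0.5) ** 2)       # stand-in Idealized Thermodynamic Cycle

def engine_pressure(x):
    """Toy stand-in for an engine run: calibration x -> in-cylinder pressure curve."""
    peak, width = 0.8 + 0.4 * x[0], 60 + 60 * x[1]
    return peak * np.exp(-width * (theta - 0.5) ** 2) + 0.005 * rng.standard_normal(theta.size)

def objective(x):                                # mismatch to the idealized cycle
    return np.linalg.norm(engine_pressure(x) - p_ideal)

def constraint(x):                               # feasible iff peak pressure <= limit
    return engine_pressure(x).max() - 1.1

X = rng.uniform(size=(6, 2))                     # initial design
y = np.array([objective(x) for x in X])
c = np.array([constraint(x) for x in X])

for _ in range(30):
    gp_f = GaussianProcessRegressor(Matern(nu=2.5), alpha=1e-4, normalize_y=True).fit(X, y)
    gp_c = GaussianProcessRegressor(Matern(nu=2.5), alpha=1e-4, normalize_y=True).fit(X, c)
    cand = rng.uniform(size=(2000, 2))
    mu_f, sd_f = gp_f.predict(cand, return_std=True)
    mu_c, sd_c = gp_c.predict(cand, return_std=True)
    best = y[c <= 0].min() if (c <= 0).any() else y.min()
    z = (best - mu_f) / np.maximum(sd_f, 1e-9)
    ei = (best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)  # expected improvement
    pof = norm.cdf(-mu_c / np.maximum(sd_c, 1e-9))         # probability of feasibility
    x_next = cand[np.argmax(ei * pof)]                     # constrained acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
    c = np.append(c, constraint(x_next))

print("best feasible mismatch:", y[c <= 0].min())
```

In the paper's setting, the surrogates would instead model the Principal Component weights of the measured pressure curve, with operational-safety and combustion-stability limits entering through the constraint model.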
Fuzzy-Logic-based model predictive control: A paradigm integrating optimal and common-sense decision making
Surma, Filip, Jamshidnejad, Anahita
This paper introduces a novel concept, fuzzy-logic-based model predictive control (FLMPC), along with a multi-robot control approach for exploring unknown environments and locating targets. Traditional model predictive control (MPC) methods rely on Bayesian theory to represent environmental knowledge and optimize a stochastic cost function, often leading to high computational costs and a failure to locate all targets. Our approach instead leverages FLMPC and extends it to a bi-level parent-child architecture for enhanced coordination and an extended decision-making horizon. By extracting high-level information from probability distributions and local observations, FLMPC simplifies the optimization problem and significantly extends its operational horizon compared to other MPC methods. We conducted extensive simulations in unknown 2-dimensional environments with randomly placed obstacles and humans. We compared the performance and computation time of FLMPC against MPC with a stochastic cost function, and then evaluated the impact of integrating the high-level parent FLMPC layer. The results indicate that our approaches significantly improve both performance and computation time, enhancing the coordination of robots and reducing the impact of uncertainty in large-scale search-and-rescue environments.
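To make the contrast with stochastic-cost MPC concrete, here is a minimal sketch in which each candidate action sequence is scored by a small Mamdani-style fuzzy inference system over high-level features (target belief and travel cost) instead of an expected cost over probability distributions. The membership functions, rules, and grid-world task are all assumptions for illustration, not the paper's design.

```python
# A minimal sketch, assuming invented membership functions and rules: candidate
# action sequences are ranked by a fuzzy inference score instead of a stochastic
# expected cost. `belief_map`, the rules, and the grid task are illustrative.
import itertools
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0, 1)

def fuzzy_desirability(belief, dist):
    high_b = tri(belief, 0.4, 1.0, 1.6)        # "target likely here"
    low_b = tri(belief, -0.6, 0.0, 0.6)        # "target unlikely here"
    near = tri(dist, -4, 0, 4)                 # "cheap to reach"
    far = tri(dist, 2, 8, 14)                  # "expensive to reach"
    r_good = min(high_b, near)                 # rule: likely AND near -> very desirable
    r_ok = min(high_b, far)                    # rule: likely AND far  -> somewhat desirable
    r_bad = low_b                              # rule: unlikely        -> undesirable
    w = np.array([r_good, r_ok, r_bad])
    return float(w @ np.array([1.0, 0.5, 0.0]) / (w.sum() + 1e-9))  # defuzzify

belief_map = np.random.default_rng(1).uniform(size=(10, 10))  # P(target) per cell
moves = ((0, 1), (0, -1), (1, 0), (-1, 0))
H, start = 3, (5, 5)

def rollout_score(pos, seq):
    score = 0.0
    for step, mv in enumerate(seq, 1):         # travel cost grows along the horizon
        pos = (int(np.clip(pos[0] + mv[0], 0, 9)), int(np.clip(pos[1] + mv[1], 0, 9)))
        score += fuzzy_desirability(belief_map[pos], dist=step)
    return score

best_seq = max(itertools.product(moves, repeat=H), key=lambda s: rollout_score(start, s))
print("chosen move sequence:", best_seq)
```

In a receding-horizon deployment, only the first move of the chosen sequence would be applied before re-planning; the parent layer of the bi-level architecture would coordinate several such child controllers.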
Optimal Parameter Adaptation for Safety-Critical Control via Safe Barrier Bayesian Optimization
Wang, Shengbo, Li, Ke, Yan, Zheng, Guo, Zhenyuan, Zhu, Song, Wen, Guanghui, Wen, Shiping
Safety is of paramount importance in control systems to avoid costly risks and catastrophic damage. The control barrier function (CBF) method, a promising solution for safety-critical control, poses a new challenge for control performance, since it directly modifies the original control design and introduces uncalibrated parameters. In this work, we shed light on the crucial role of configurable parameters in the CBF method for performance enhancement, with a systematic categorization. Based on that, we propose a novel framework combining the CBF method with Bayesian optimization (BO) to optimize safe control performance. Considering feasibility/safety-critical constraints, we develop a safe version of BO using the barrier-based interior method to efficiently search for promising feasible configurable parameters. Furthermore, we provide theoretical criteria for the safety and optimality of our framework. An essential advantage of our framework is that it works in model-agnostic environments, leaving ample flexibility in the design of objective and constraint functions. Finally, simulation experiments on swing-up control and high-fidelity adaptive cruise control demonstrate the effectiveness of our framework.
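A minimal sketch of the parameter-tuning problem, assuming a 1-D system with a single barrier: the class-K gain alpha of the CBF filter trades conservatism against tracking performance, and a toy safe-BO loop selects alpha by minimizing tracking cost plus a log-barrier on a pessimistic GP estimate of the safety margin, in the interior-point spirit. The dynamics, costs, and barrier weight are assumptions, not the paper's construction.

```python
# A minimal sketch, assuming a 1-D system: the CBF filter caps the control at
# alpha * h(x), and a toy safe-BO loop picks alpha by minimizing tracking cost
# plus a log-barrier on a pessimistic GP estimate of the safety margin.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

DT, X_MAX, X_GOAL = 0.02, 1.0, 1.05            # nominal goal sits past the safe set

def rollout(alpha):
    """Simulate x' = u under the CBF condition u <= alpha * h(x), h(x) = X_MAX - x."""
    x, cost, margin = 0.0, 0.0, np.inf
    for _ in range(300):
        u_nom = 2.0 * (X_GOAL - x)             # nominal proportional controller
        u = min(u_nom, alpha * (X_MAX - x))    # closed-form CBF-QP for a 1-D input
        x += DT * u
        cost += DT * (X_GOAL - x) ** 2         # tracking performance
        margin = min(margin, X_MAX - x)        # worst-case safety margin h(x)
    return cost, margin

A = np.array([0.5, 2.0, 8.0])                  # initial design over alpha
J, M = map(np.array, zip(*[rollout(a) for a in A]))

for _ in range(15):
    gp_j = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(A[:, None], J)
    gp_m = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(A[:, None], M)
    cand = np.linspace(0.1, 20, 400)[:, None]
    mu_j = gp_j.predict(cand)
    mu_m, sd_m = gp_m.predict(cand, return_std=True)
    lcb_m = mu_m - 2.0 * sd_m                  # pessimistic safety margin
    score = mu_j - 0.05 * np.log(np.maximum(lcb_m, 1e-6))  # cost + log-barrier
    score[lcb_m <= 0] = np.inf                 # interior method: stay certified-feasible
    a_next = cand[np.argmin(score), 0]
    j, m = rollout(a_next)
    A, J, M = np.append(A, a_next), np.append(J, j), np.append(M, m)

best = A[M > 0][np.argmin(J[M > 0])]
print(f"best safe alpha = {best:.2f}")
```

Excluding candidates whose pessimistic margin is non-positive is what keeps the search inside the certified-feasible region, mirroring the interior-method idea.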
A Systematic Review of EEG-based Machine Intelligence Algorithms for Depression Diagnosis and Monitoring
Nassibi, Amir, Papavassiliou, Christos, Rakhmatulin, Ildar, Mandic, Danilo, Atashzar, S. Farokh
Depressive disorder is a serious health condition that has affected the lives of millions of people around the world. Diagnosing depression is a challenging practice that relies heavily on subjective assessments and, in most cases, suffers from delayed findings. Electroencephalography (EEG) biomarkers have been suggested and investigated in recent years as a potentially transformative objective practice. In this article, for the first time, a detailed systematic review is conducted of EEG-based depression diagnosis approaches that employ advanced machine learning techniques and statistical analyses. For this, 938 potentially relevant articles (published since 1985) were initially identified and filtered down to 139 relevant articles following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) scheme. This article compares and discusses the selected articles and categorizes them according to the type of machine learning technique and statistical analysis used. Algorithms, preprocessing techniques, extracted features, and data acquisition systems are discussed and summarized. This review explains the existing challenges of current algorithms and sheds light on the future direction of the field, outlining issues and challenges in machine intelligence for EEG-based depression diagnosis that can be addressed in future studies and, possibly, in future wearable technologies.
Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation
Zhang, Octi, Peng, Quanquan, Scalise, Rosario, Boots, Bryon
Developing robotic agents that can generalize across diverse environments while continually evolving their behaviors is a core challenge in AI and robotics. The difficulty lies in solving increasingly complex tasks while ensuring agents can continue learning without converging on narrow, specialized solutions. Quality Diversity (QD) [1, 2] methods effectively foster diversity but often rely on trial and error, where the path to a final solution can be convoluted, leading to inefficiency and uncertainty. Our approach draws inspiration from nature's inheritance process, where offspring not only receive but also build upon the knowledge of their predecessors. Similarly, our agents inherit distilled behaviors from previous generations, allowing them to adapt and continue learning efficiently, eventually surpassing their predecessors. This natural knowledge transfer reduces randomness, guiding exploration toward more meaningful learning without manual intervention such as reward shaping or task descriptors. What sets our method apart is that it offers a straightforward, evolution-inspired way to consolidate and progress, avoiding the need for manually defined styles or gradient editing [3, 4] to prevent forgetting. The agent's ability to retain and refine skills is driven by a blend of imitation learning (IL) and reinforcement learning (RL), naturally passing down essential behaviors while implicitly discarding inferior ones. We introduce Parental Guidance (PG-1), which makes the following contributions: 1. Distributed Evolution Framework: We propose a framework that distributes the evolution process across multiple compute instances, efficiently scheduling and analyzing evolution.
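The inherit-then-improve cycle described above can be sketched as follows: each generation is first distilled from the best parent by behavior cloning (the IL part) and then fine-tuned with a policy gradient (the RL part). The toy task, network sizes, and plain REINFORCE update are illustrative assumptions, not PG-1's actual implementation.

```python
# A minimal sketch, not PG-1's implementation: offspring inherit by behavior
# cloning from the best parent (IL), then improve with REINFORCE (RL). The toy
# task rewards matching the action to argmax of the observation.
import torch
import torch.nn as nn

torch.manual_seed(0)
OBS, ACT = 4, 3

def make_policy():
    return nn.Sequential(nn.Linear(OBS, 32), nn.Tanh(), nn.Linear(32, ACT))

def run_batch(policy, n=256):
    """Sample actions on the toy task; reward 1 iff action == argmax(obs[:ACT])."""
    obs = torch.randn(n, OBS)
    dist = torch.distributions.Categorical(logits=policy(obs))
    act = dist.sample()
    reward = (act == obs[:, :ACT].argmax(dim=1)).float()
    return reward.mean().item(), dist.log_prob(act), reward

def rl_finetune(policy, steps=50):
    opt = torch.optim.Adam(policy.parameters(), lr=3e-3)
    for _ in range(steps):                     # plain REINFORCE with a mean baseline
        _, logp, r = run_batch(policy)
        loss = -(logp * (r - r.mean())).mean()
        opt.zero_grad(); loss.backward(); opt.step()

def distill(child, parent, steps=100):
    """Behavior cloning: match the parent's action distribution (inheritance)."""
    opt = torch.optim.Adam(child.parameters(), lr=3e-3)
    for _ in range(steps):
        obs = torch.randn(256, OBS)
        with torch.no_grad():
            target = torch.log_softmax(parent(obs), dim=-1)
        loss = nn.functional.kl_div(torch.log_softmax(child(obs), dim=-1),
                                    target, log_target=True, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()

population = [make_policy() for _ in range(4)]
for gen in range(3):
    for p in population:                       # each member learns (RL)
        rl_finetune(p)
    scores = [run_batch(p)[0] for p in population]
    parent = population[int(torch.tensor(scores).argmax())]
    print(f"gen {gen}: best return {max(scores):.2f}")
    offspring = [make_policy() for _ in range(4)]
    for child in offspring:                    # offspring inherit, then keep learning
        distill(child, parent)
    population = offspring
```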
Evolutionary Policy Optimization
Wang, Jianren, Su, Yifan, Gupta, Abhinav, Pathak, Deepak
Despite its extreme sample inefficiency, on-policy reinforcement learning (RL) has become a fundamental tool in real-world applications. With recent advances in GPU-driven simulation, the ability to collect vast amounts of data for RL training has scaled exponentially. However, studies show that current on-policy methods, such as PPO, fail to fully exploit the benefits of parallelized environments, leading to performance saturation beyond a certain scale. In contrast, Evolutionary Algorithms (EAs) excel at increasing diversity through randomization, making them a natural complement to RL. However, existing evolutionary RL (EvoRL) methods have struggled to gain widespread adoption due to their extreme sample inefficiency. To address these challenges, we introduce Evolutionary Policy Optimization (EPO), a novel policy gradient algorithm that combines the strengths of EAs and policy gradients. We show that EPO significantly improves performance across diverse and challenging environments, demonstrating superior scalability with parallelized simulations.
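The abstract specifies the EA-plus-policy-gradient hybrid only at a high level, so the following is one plausible minimal instantiation rather than EPO itself: every population member takes a policy-gradient step on its own rollouts, and an evolutionary outer loop applies truncation selection with Gaussian parameter mutation. The linear softmax policy and toy task are assumptions.

```python
# One plausible minimal hybrid (an assumption, not EPO's actual algorithm):
# every member takes a REINFORCE step on its own rollouts; an evolutionary
# outer loop then copies elites over the weakest members with Gaussian mutation.
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT, POP = 4, 3, 8

def pg_step(theta, lr=0.05, n=256):
    """One policy-gradient step for a linear softmax policy on a toy task."""
    obs = rng.standard_normal((n, OBS))
    logits = obs @ theta
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    a = np.array([rng.choice(ACT, p=pi) for pi in p])
    r = (a == obs[:, :ACT].argmax(1)).astype(float)   # reward 1 iff correct action
    adv = r - r.mean()                                # mean baseline
    grad = obs.T @ ((np.eye(ACT)[a] - p) * adv[:, None]) / n  # score-function gradient
    return theta + lr * grad, r.mean()

population = [0.1 * rng.standard_normal((OBS, ACT)) for _ in range(POP)]
for it in range(40):
    population, returns = zip(*[pg_step(th) for th in population])
    population, returns = list(population), np.array(returns)
    order = np.argsort(-returns)                      # best first
    elites = [population[i] for i in order[:POP // 4]]
    for i in order[-(POP // 4):]:                     # evolutionary step: replace the worst
        parent = elites[rng.integers(len(elites))]
        population[i] = parent + 0.02 * rng.standard_normal(parent.shape)
    if it % 10 == 0:
        print(f"iter {it}: mean return {returns.mean():.2f}")
```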
Simulation-Driven Balancing of Competitive Game Levels with Reinforcement Learning
Rupp, Florian, Eberhardinger, Manuel, Eckert, Kai
The balancing process for game levels in competitive two-player contexts involves a great deal of manual work and testing, particularly for non-symmetrical game levels. In this work, we frame game balancing as a procedural content generation task and propose an architecture for the automated balancing of tile-based levels within the PCGRL framework (procedural content generation via reinforcement learning). Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward-modeling simulation. Through repeated simulations, the balancing agent receives rewards for adjusting the level towards a given balancing objective, such as equal win rates for all players. To this end, we propose new swap-based representations that improve the robustness of playability, enabling agents to balance game levels more effectively and more quickly than traditional PCGRL. By analyzing the agent's swapping behavior, we can infer which tile types have the greatest impact on balance. We validate our approach in the Neural MMO (NMMO) environment in a competitive two-player scenario. In this extended conference paper, we present improved results, explore the applicability of the method to various forms of balancing beyond equal balancing, compare its performance to another search-based approach, and discuss the application of existing fairness metrics to game balancing.
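The swap-based loop can be sketched compactly: an action swaps two tiles, repeated simulations estimate the win rate, and the reward is the improvement toward equal win rates. A noisy resource-proximity heuristic stands in for the NMMO simulation and hill climbing stands in for the trained PCGRL agent; both are assumptions for illustration.

```python
# A minimal sketch, with a stand-in simulator: the agent's action swaps two
# tiles, win rates come from repeated (noisy) simulations, and the reward is
# the improvement toward a 50/50 win rate. Hill climbing replaces the RL agent.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 8
level = (rng.uniform(size=(SIZE, SIZE)) < 0.2).astype(int)   # 1 = resource tile
spawns = {1: (0, 0), 2: (SIZE - 1, SIZE - 1)}

def win_rate_p1(level, n_sims=64):
    """Toy simulation: p1 wins more often when resources sit closer to p1's spawn."""
    ys, xs = np.nonzero(level)
    d1 = np.abs(ys - spawns[1][0]) + np.abs(xs - spawns[1][1])
    d2 = np.abs(ys - spawns[2][0]) + np.abs(xs - spawns[2][1])
    edge = (d2.mean() - d1.mean()) / SIZE                    # p1's positional advantage
    return np.mean(rng.uniform(-1, 1, n_sims) < 2 * edge)    # noisy match outcomes

def swap(level, a, b):
    out = level.copy()
    out[a], out[b] = out[b], out[a]
    return out

def imbalance(level):
    return abs(win_rate_p1(level) - 0.5)

for _ in range(200):                           # hill climbing in place of the PCGRL agent
    a = tuple(rng.integers(SIZE, size=2))
    b = tuple(rng.integers(SIZE, size=2))
    cand = swap(level, a, b)
    reward = imbalance(level) - imbalance(cand)  # positive if the swap improves balance
    if reward > 0:
        level = cand

print(f"final |win rate - 0.5| = {imbalance(level):.3f}")
```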
Surrogate Learning in Meta-Black-Box Optimization: A Preliminary Study
Ma, Zeyuan, Huang, Zhiyang, Chen, Jiacheng, Cao, Zhiguang, Gong, Yue-Jiao
Recent Meta-Black-Box Optimization (MetaBBO) approaches have shown the potential to enhance optimization performance by learning meta-level policies that dynamically configure low-level optimizers. However, existing MetaBBO approaches can consume massive numbers of function evaluations to train their meta-level policies. Inspired by the recent trend of using surrogate models for cost-friendly evaluation of expensive optimization problems, in this paper we propose a novel MetaBBO framework, Surr-RLDE, which combines a surrogate learning process with a reinforcement learning-aided Differential Evolution (DE) algorithm to address the intensive function evaluation cost in MetaBBO. Surr-RLDE comprises two learning stages: surrogate learning and policy learning. In surrogate learning, we train a Kolmogorov-Arnold Network (KAN) with a novel relative-order-aware loss to accurately approximate the objective functions of the problem instances used for subsequent policy learning. In policy learning, we employ reinforcement learning (RL) to dynamically configure the mutation operator in DE. The learned surrogate model is integrated into the training of the RL-based policy to substitute for the original objective function, which effectively reduces the evaluations consumed during policy learning. Extensive benchmark results demonstrate that Surr-RLDE not only shows competitive performance against recent baselines, but also generalizes compellingly to higher-dimensional problems. Further ablation studies underscore the effectiveness of each technical component in Surr-RLDE. We open-source Surr-RLDE at https://github.com/GMC-DRL/Surr-RLDE.
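The relative-order-aware loss is the distinctive ingredient here: DE compares fitness values rather than using their magnitudes, so the surrogate chiefly needs to preserve ordering. One plausible form (an assumption; the paper's exact loss may differ) mixes MSE with a pairwise hinge on mis-ordered prediction pairs, and an MLP stands in for the KAN surrogate.

```python
# A plausible relative-order-aware loss (an assumption; the paper's exact form
# may differ): MSE plus a pairwise hinge that penalizes mis-ordered prediction
# pairs. An MLP stands in for the KAN surrogate.
import torch
import torch.nn as nn

torch.manual_seed(0)

def order_aware_loss(pred, target, w=0.5, margin=0.0):
    mse = nn.functional.mse_loss(pred, target)
    dp = pred[:, None] - pred[None, :]          # predicted pairwise differences
    dt = target[:, None] - target[None, :]      # true pairwise differences
    rank = torch.relu(margin - dp * torch.sign(dt)).mean()  # hinge on wrong orderings
    return mse + w * rank

def objective(x):                               # stand-in expensive black-box function
    return (x ** 2).sum(dim=1) + 0.5 * torch.sin(3 * x).sum(dim=1)

surrogate = nn.Sequential(nn.Linear(5, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
X = torch.rand(512, 5) * 4 - 2
y = objective(X)

for _ in range(300):
    loss = order_aware_loss(surrogate(X).squeeze(-1), y)
    opt.zero_grad(); loss.backward(); opt.step()

# sanity check: fraction of correctly ordered pairs on held-out points, which is
# what matters when the surrogate replaces f(x) inside DE's selection step
Xt = torch.rand(256, 5) * 4 - 2
with torch.no_grad():
    pt, yt = surrogate(Xt).squeeze(-1), objective(Xt)
    agree = ((pt[:, None] > pt[None, :]) == (yt[:, None] > yt[None, :])).float().mean()
print(f"pairwise order agreement: {agree.item():.3f}")
```

The held-out pairwise agreement is the quantity that matters for DE's survivor selection, since that step only ever asks which of two candidates is better.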