ELENA: Epigenetic Learning through Evolved Neural Adaptation

Kriuk, Boris, Sulamanidze, Keti, Kriuk, Fedor

arXiv.org Artificial Intelligence

Optimization of complex networks is one of the fundamental challenges in computer science research. As the availability of computational resources has grown, a great variety of conceptually different algorithms have been presented over the past decades to achieve competitive results in the domain of network optimization. Many approaches, such as the Lin-Kernighan-Helsgaun heuristic [1], Genetic Algorithm variations [2,3,4], Ant Colony Optimization [5], and k-opt local search [6,7] with sequential improvements, have gained acknowledgment from both the research community and industry across the logistics, telecommunications, and biotechnology verticals. The Traveling Salesman Problem (TSP) [8], first formalized by Karl Menger in 1930, remains a cornerstone problem that has driven algorithmic innovation in network optimization for decades. The Vehicle Routing Problem (VRP) [9,10], introduced by Dantzig and Ramser in 1959, extends the TSP's complexity by incorporating multiple vehicles and capacity constraints, finding direct applications in logistics and delivery. The Maximum Clique Problem (MCP) [11], important for social network analysis, computational biochemistry, and wireless network allocation, focuses on finding the largest complete subgraph within a network.
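To make the k-opt family mentioned above concrete, here is a minimal 2-opt (the k=2 case) local search sketch: repeatedly reverse a tour segment whenever doing so shortens the closed tour. The city coordinates are invented for illustration; production solvers add candidate lists and incremental length updates rather than recomputing full tours.

```python
import math

# Hypothetical city coordinates (illustrative only).
cities = [(0.0, 0.0), (0.0, 3.0), (4.0, 3.0), (4.0, 0.0), (2.0, 5.0)]

def tour_length(tour):
    """Total length of a closed tour visiting each city once."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def two_opt(tour):
    """Repeatedly reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate) < tour_length(tour):
                    tour, improved = candidate, True
    return tour

best = two_opt(list(range(len(cities))))
```

The loop terminates because each accepted move strictly decreases the tour length and there are finitely many tours; the result is 2-opt-optimal, not necessarily globally optimal.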


Equation discovery framework EPDE: Towards a better equation discovery

Maslyaev, Mikhail, Hvatov, Alexander

arXiv.org Artificial Intelligence

Equation discovery methods hold promise for extracting knowledge from physics-related data. However, existing approaches often require substantial prior information, which significantly reduces the amount of knowledge extracted. In this paper, we enhance the EPDE algorithm -- an evolutionary optimization-based discovery framework. In contrast to methods like SINDy, which rely on pre-defined libraries of terms and linearities, our approach generates terms from fundamental building blocks such as elementary functions and individual differentials. Within the evolutionary optimization, we can improve both the computation of the fitness function, as is done in gradient methods, and the optimization algorithm itself. By incorporating multi-objective optimization, we effectively explore the search space, yielding more robust equation extraction even when dealing with complex experimental data. We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
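The multi-objective selection step can be sketched as a non-dominated (Pareto) filter over two competing objectives, for instance equation error versus number of terms. The candidate equations and scores below are purely illustrative, not taken from the paper:

```python
# Each candidate: (equation string, fit error, number of terms). Illustrative.
candidates = [
    ("u_t = u_xx",            0.10, 2),
    ("u_t = u_xx + u*u_x",    0.02, 3),
    ("u_t = u_xx + u + u**2", 0.02, 4),
    ("u_t = u",               0.50, 1),
]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return a[1] <= b[1] and a[2] <= b[2] and (a[1] < b[1] or a[2] < b[2])

# Keep every candidate not dominated by any other: the Pareto front trades
# off accuracy against complexity instead of picking a single winner.
pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
```

Here the four-term equation is dropped because the three-term one matches its error with fewer terms, while the cheap-but-inaccurate and accurate-but-larger candidates both survive as different trade-offs.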


An evolutionary approach for discovering non-Gaussian stochastic dynamical systems based on nonlocal Kramers-Moyal formulas

Li, Yang, Xu, Shengyuan, Duan, Jinqiao

arXiv.org Machine Learning

Discovering explicit governing equations of stochastic dynamical systems with both (Gaussian) Brownian noise and (non-Gaussian) Lévy noise from data is challenging due to possible intricate functional forms and the inherent complexity of Lévy motion. This research develops an evolutionary symbol sparse regression (ESSR) approach to extract non-Gaussian stochastic dynamical systems from sample path data, based on nonlocal Kramers-Moyal formulas, genetic programming, and sparse regression. More specifically, genetic programming is employed to generate a diverse array of candidate functions, the sparse regression technique learns the coefficients associated with these candidates, and the nonlocal Kramers-Moyal formulas serve as the foundation for constructing the fitness measure in genetic programming and the loss function in sparse regression. The efficacy and capabilities of this approach are showcased through its application to several illustrative models. The approach stands out as a potent instrument for deciphering non-Gaussian stochastic dynamics from available datasets, with a wide range of applications across different fields.
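The sparse-regression stage can be sketched with sequentially thresholded least squares over a candidate library. In this sketch the candidates are a fixed hand-written list standing in for the GP-evolved expressions, and the data come from a made-up cubic drift with small noise; none of this is from the paper itself.

```python
import numpy as np

# Illustrative stand-ins for GP-generated candidate functions; in ESSR these
# would be evolved expression trees rather than a fixed polynomial library.
candidates = [lambda x: np.ones_like(x), lambda x: x,
              lambda x: x**2, lambda x: x**3]

def sparse_fit(x, y, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: fit coefficients for all
    candidate terms, zero out those below the threshold, then refit."""
    theta = np.column_stack([f(x) for f in candidates])
    coef, *_ = np.linalg.lstsq(theta, y, rcond=None)
    for _ in range(iters):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        active = ~small
        if active.any():
            coef[active], *_ = np.linalg.lstsq(theta[:, active], y, rcond=None)
    return coef

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1.5 * x - 0.8 * x**3 + 0.01 * rng.normal(size=200)  # synthetic drift data
coef = sparse_fit(x, y)   # recovers ~[0, 1.5, 0, -0.8]
```

The thresholding is what enforces sparsity: the constant and quadratic coefficients, fit only to noise, fall below the cutoff and are eliminated before the surviving terms are refit.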


Evolutionary approaches to explainable machine learning

Zhou, Ryan, Hu, Ting

arXiv.org Artificial Intelligence

Machine learning models are increasingly being used in critical sectors, but their black-box nature has raised concerns about accountability and trust. The field of explainable artificial intelligence (XAI) or explainable machine learning (XML) has emerged in response to the need for human understanding of these models. Evolutionary computing, as a family of powerful optimization and learning tools, has significant potential to contribute to XAI/XML. In this chapter, we provide a brief introduction to XAI/XML and review various techniques in current use for explaining machine learning models. We then focus on how evolutionary computing can be used in XAI/XML, and review some approaches which incorporate EC techniques. We also discuss some open challenges in XAI/XML and opportunities for future research in this field using EC. Our aim is to demonstrate that evolutionary computing is well-suited for addressing current problems in explainability, and to encourage further exploration of these methods to contribute to the development of more transparent, trustworthy and accountable machine learning models.


Accessible Survey of Evolutionary Robotics and Potential Future Research Directions

Pandey, Hari Mohan

arXiv.org Artificial Intelligence

This paper reviews various Evolutionary Approaches applied to the domain of Evolutionary Robotics with the intention of resolving difficult problems in robotic design and control. Evolutionary Robotics is a fast-growing field that has attracted substantial research attention in recent years. The paper thus collates recent findings along with some anticipated applications. The reviewed literature is organized systematically to give a categorical overview of recent developments and is presented in tabulated form for quick reference. We discuss the outstanding potentialities and challenges that exist in robotics from an ER perspective, in the belief that these can be addressed in the near future through the application of evolutionary approaches. The primary objective of this study is to explore the applicability of Evolutionary Approaches in robotic application development. We believe that this study will enable researchers to utilize Evolutionary Approaches to solve complex outstanding problems in robotics.


Novel deep learning framework for symbolic regression

#artificialintelligence

Lawrence Livermore National Laboratory (LLNL) computer scientists have developed a new framework and an accompanying visualization tool that leverages deep reinforcement learning for symbolic regression problems, outperforming baseline methods on benchmark problems. The paper was recently accepted as an oral presentation at the International Conference on Learning Representations (ICLR 2021), one of the top machine learning conferences in the world. The conference takes place virtually May 3-7. In the paper, the LLNL team describes applying deep reinforcement learning to discrete optimization--problems that deal with discrete "building blocks" that must be combined in a particular order or configuration to optimize a desired property. The team focused on a type of discrete optimization called symbolic regression--finding short mathematical expressions that fit data gathered from an experiment.


Novel deep learning framework for symbolic regression

#artificialintelligence

A Lawrence Livermore National Laboratory team has developed a new deep reinforcement learning framework for a type of discrete optimization called symbolic regression, showing it could outperform several common methods, including commercial software gold standards, on benchmark problems. The work is being featured at the upcoming International Conference on Learning Representations. Pictured, from left: LLNL team members Brenden Petersen, Mikel Landajuela, Nathan Mudhenk, Soo Kim, Ruben Glatt and Joanne Kim.
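Symbolic regression as discrete optimization can be illustrated with a toy brute-force search: combine a handful of building blocks into candidate expressions and keep the one that best fits the data. The LLNL framework searches this space with deep reinforcement learning rather than enumeration; the building blocks and data below are invented purely for illustration.

```python
import math

# Hypothetical "experimental" data generated from x**2 + 1.
xs = [0.5 * i for i in range(10)]
ys = [x * x + 1.0 for x in xs]

# Discrete building blocks: a unary operator applied to x, plus a constant.
unary = {"id": lambda v: v, "sq": lambda v: v * v, "sin": math.sin}
consts = [0.0, 1.0, 2.0]

def mse(f):
    """Mean squared error of candidate expression f against the data."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Exhaustively score every (operator, constant) combination and keep the best.
best = min(
    ((u, c) for u in unary for c in consts),
    key=lambda uc: mse(lambda x: unary[uc[0]](x) + uc[1]),
)
```

Even this tiny space has nine expressions; realistic operator sets and expression depths make the space explode combinatorially, which is why a learned search policy is used instead of enumeration.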


Analysis of Evolutionary Program Synthesis for Card Games

Saha, Rohan, Pirlot, Cassidy

arXiv.org Artificial Intelligence

A genetic algorithm is a search heuristic that aims to find optimal solutions through ideas found in biology, including concepts such as survival of the fittest, mutation, and crossbreeding -- in other words, an evolutionary approach. In assignment 1, we saw the performance of such an algorithm on a smaller version of the game CAN'T STOP; here we aim to evaluate its performance on the game RACK'O. An evolutionary approach is a method that iteratively improves a solution to a problem starting from an initial value. In the context of program synthesis, evolutionary approaches are used to generate a set of rules that is consistent with the game mechanics, and this set of rules is expected to perform better than other scripts in the same search space. The set of rules is obtained using a fitness function that measures how good the rules are for a given state of the game. We chose to investigate evolutionary approaches in program synthesis because they are an interesting method for generating strategies, and research using evolutionary approaches spans a multitude of program synthesis problems such as program sketching [1] and guided search for synthesizing programs with high complexity [2].
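The ingredients named above (fitness, survival of the fittest, crossbreeding, mutation) fit in a short generational loop. This is a generic sketch, not the paper's implementation: individuals are bit strings and the fitness is a trivial stand-in for what would really be a script's win rate in simulated RACK'O games.

```python
import random

random.seed(1)
RULE_COUNT = 20   # each individual is a fixed-length bit string (illustrative)

def fitness(rules):
    # Stand-in fitness: number of enabled rules. In the paper's setting this
    # would instead be the script's performance in simulated games.
    return sum(rules)

def evolve(pop_size=30, generations=40, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(RULE_COUNT)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # survival of the fittest
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)     # crossbreeding
            cut = random.randrange(1, RULE_COUNT)  # single-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because survivors carry over unchanged each generation, the best fitness in the population never decreases, so the loop steadily climbs toward better rule sets.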


Optimistic variants of single-objective bilevel optimization for evolutionary algorithms

Sharma, Anuraganand

arXiv.org Artificial Intelligence

Single-objective bilevel optimization is a specialized form of constrained optimization in which one of the constraints is itself an optimization problem. These problems are typically non-convex and strongly NP-hard. Recently, there has been increased interest from the evolutionary computation community in modeling bilevel problems due to their applicability to real-world decision-making problems. In this work, a partial nested evolutionary approach with a local heuristic search is proposed to solve the benchmark problems, with outstanding results. The approach relies on the concept of intermarriage-crossover to search for feasible regions by exploiting information from the constraints. A new variant of the commonly used convergence approaches, optimistic and pessimistic, is also proposed: the extreme optimistic approach. The experimental results demonstrate that the algorithm converges differently to known optimum solutions under the optimistic variants, and that the optimistic approach outperforms the pessimistic one. A comparative statistical analysis of our approach against other recently published partial to complete evolutionary approaches demonstrates very competitive results.
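The nested structure can be sketched on a toy problem: each upper-level candidate is evaluated only after a local heuristic search solves the lower-level problem it induces. The objectives below are invented for illustration (the known optimum is x = 0.5 with value 0.5); the paper's benchmark problems and intermarriage-crossover operator are far more involved.

```python
import random

random.seed(0)

def lower_objective(x, y):
    return (y - x) ** 2           # follower's problem; optimum at y = x

def upper_objective(x, y):
    return (x - 1) ** 2 + y ** 2  # leader's problem at the follower's response

def solve_lower(x, steps=200, step_size=0.1):
    """Local heuristic search for the follower's (optimistic) response."""
    y = 0.0
    for _ in range(steps):
        cand = y + random.uniform(-step_size, step_size)
        if lower_objective(x, cand) < lower_objective(x, y):
            y = cand
    return y

# (1+1)-style evolutionary loop at the upper level: every candidate x costs
# a full nested lower-level solve, which is why bilevel problems are hard.
x = random.uniform(-2, 2)
best = upper_objective(x, solve_lower(x))
for _ in range(300):
    cand = x + random.gauss(0, 0.2)
    val = upper_objective(cand, solve_lower(cand))
    if val < best:
        x, best = cand, val
```

This illustrates the optimistic convention: when the lower level has multiple optima, the follower is assumed to pick the one most favorable to the leader; the pessimistic convention assumes the opposite.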


AutoLR: An Evolutionary Approach to Learning Rate Policies

Carvalho, Pedro, Lourenço, Nuno, Assunção, Filipe, Machado, Penousal

arXiv.org Artificial Intelligence

The choice of a proper learning rate is paramount for good Artificial Neural Network training and performance. In the past, one had to rely on experience and trial-and-error to find an adequate learning rate. Presently, a plethora of state-of-the-art automatic methods exist that make the search for a good learning rate easier. While these techniques are effective and have yielded good results over the years, they are general solutions, meaning that the optimization of learning rates for specific network topologies remains largely unexplored. This work presents AutoLR, a framework that evolves Learning Rate Schedulers for a specific Neural Network Architecture using Structured Grammatical Evolution. The system was used to evolve learning rate policies that were compared against a commonly used baseline learning rate value. Results show that training performed with certain evolved policies is more efficient than with the established baseline, suggesting that this approach is a viable means of improving a neural network's performance.
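The idea of evolving a schedule against a fixed-rate baseline can be sketched on a toy problem. Here "training" is just gradient descent on a quadratic, and the evolved policies come from a hand-picked two-parameter family searched by a (1+1)-style loop; this is a stand-in for AutoLR's structured grammatical evolution, which evolves full scheduler programs against real network training.

```python
import random

random.seed(0)

def train_loss(schedule, steps=50):
    """Toy proxy for network training: gradient descent on f(w) = w**2,
    with the step size at step t taken from the schedule."""
    w = 5.0
    for t in range(steps):
        w -= schedule(t) * 2 * w          # gradient of w**2 is 2*w
    return w ** 2

def make_schedule(a, b):
    """A two-parameter decaying policy a / (1 + b*t), an illustrative family."""
    return lambda t: a / (1 + b * t)

# (1+1)-style evolutionary search over the schedule parameters, starting
# from the fixed baseline rate of 0.01.
best_params = (0.01, 0.0)
best_loss = train_loss(make_schedule(*best_params))
for _ in range(200):
    a = max(1e-4, best_params[0] + random.gauss(0, 0.05))
    b = max(0.0, best_params[1] + random.gauss(0, 0.05))
    loss = train_loss(make_schedule(a, b))
    if loss < best_loss:
        best_params, best_loss = (a, b), loss
```

Even on this toy objective the evolved schedule beats the fixed baseline, mirroring the paper's finding that policies tuned to a specific training problem can outperform a generic constant rate.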