Evolutionary Systems


Genetic algorithms: Biologically inspired, fast-converging optimization

#artificialintelligence

As you can see, beyond the details of the exact probability, the chance of selecting any individual (but the first) decreases exponentially with k (while polynomially with m). It goes without saying that we need to apply tournament selection twice to get the pair of parents needed to generate a single element of the new population. Roulette wheel selection is definitely more complicated to implement than tournament selection, but the high-level idea is the same: higher-fitness individuals must have a greater chance of being selected. As we have seen, in tournament selection the probability that an element with low fitness is chosen decreases polynomially with the rank of the element (its position in the list of organisms sorted by fitness); in particular, since the probability is O(((n-m)/n)^k), the decrease is super-linear, because k is certainly greater than 1. If, instead, we would like lower-fitness elements to get a real chance of being selected, we can resort to a fairer selection method.
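To make the mechanics concrete, here is a minimal Python sketch of tournament selection, applied twice to produce a parent pair. The function names and the value of k are illustrative choices, not taken from the excerpt above.

```python
import random

def tournament_select(population, fitness, k=3):
    """Return the fittest of k individuals sampled uniformly at random.

    The probability that all k contenders miss the top-m individuals
    (by fitness rank) is ((n - m) / n) ** k, so low-fitness individuals
    lose their chances quickly as k grows.
    """
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def select_parents(population, fitness, k=3):
    # Tournament selection is applied twice: once per parent.
    return (tournament_select(population, fitness, k),
            tournament_select(population, fitness, k))

# Toy usage: maximize f(x) = -(x - 3)^2 over a random population.
population = [random.uniform(-10.0, 10.0) for _ in range(50)]
f = lambda x: -(x - 3.0) ** 2
parent_a, parent_b = select_parents(population, f, k=4)
```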


Swarm Intelligence: AI Inspired By Honeybees Can Help Us Make Better Decisions - AI Summary

#artificialintelligence

But when groups are involved, with many people grabbing the wheel at once, we often find ourselves in a fruitless stalemate headed for disaster, or worse, lurching off the road and into a ditch, seemingly just to spite ourselves. It turns out that Mother Nature has been working on this problem for hundreds of millions of years, evolving countless species that make effective decisions in large groups. Honeybees, for example, routinely solve a problem as complex as the one a human business team faces when selecting the ideal location for a new factory, a choice the humans would find very difficult to make optimally. The bees do so by forming real-time systems that efficiently combine the diverse perspectives of the hundreds of scout bees that explored the available options, deliberating as a group and weighing their differing levels of conviction until they converge on a single unified decision. The same approach enables human groups of all sizes to connect over the internet and deliberate as a unified system, pushing and pulling on decisions while swarming algorithms monitor their actions and reactions.


Stock Forecast Based On a Predictive Algorithm

#artificialintelligence

The Consumer Stocks Package is designed for investors and analysts who need predictions of the best-performing stocks across the whole Consumer Industry. It includes 20 stocks with bullish and bearish signals.

Package Name: Consumer Stocks
Recommended Positions: Long
Forecast Length: 1 Year (10/13/20 – 10/13/21)
I Know First Average: 210.61%

The algorithm correctly predicted 9 out of the 10 suggested trades for this 1-year forecast. The top-performing prediction from this package was GME, with a return of 1459.83%.


Swarm intelligence: AI inspired by honeybees can help us make better decisions

#artificialintelligence

Let's face it, we humans make a lot of bad decisions. And even when we are deeply aware that our decisions are hurting ourselves -- like destroying our environment or propagating inequality -- we seem collectively helpless to correct course. It is exasperating, like watching a car heading for a brick wall with a driver who seems unwilling or unable to turn the wheel. Ironically, as individuals we are not nearly as dysfunctional, most of us turning the wheel as needed to navigate our daily lives. But when groups are involved, with many people grabbing the wheel at once, we often find ourselves in a fruitless stalemate headed for disaster, or worse, lurching off the road and into a ditch, seemingly just to spite ourselves.


DeepMind & IDSIA Introduce Symmetries to Black-Box MetaRL to Improve Its Generalization Ability

#artificialintelligence

A new study from a DeepMind and Swiss AI Lab IDSIA team proposes using symmetries from backpropagation-based learning to boost the meta-generalization capabilities of black-box meta-learners. Meta reinforcement learning (meta-RL) is a technique used to automatically discover new reinforcement learning (RL) algorithms from agents' environmental interactions. While black-box approaches in this space are relatively flexible, they struggle to discover RL algorithms that generalize to novel environments. In the paper Introducing Symmetries to Black Box Meta Reinforcement Learning, the researchers explore the role of symmetries in meta-generalization and show that introducing more symmetries into black-box meta-learners can improve their ability to generalize to unseen action and observation spaces, tasks, and environments. The researchers identify three key symmetries that backpropagation-based systems exhibit: use of the same learned learning rule across all nodes of the neural network; the flexibility to work with any input, output, and architecture size; and invariance to permutations of the inputs and outputs (for dense layers).
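As a rough illustration of those three symmetries (and only of the symmetries, not the paper's actual meta-learner), the sketch below applies one shared, learnable update rule, parameterized by a small vector theta, identically to every connection of a dense layer. All names and the Hebbian/delta-style form of the rule are assumptions for demonstration.

```python
import numpy as np

def shared_update_rule(pre, post, error, theta):
    """One learned rule, parameterized by theta, applied identically to
    every connection: dW[i, j] = theta[0]*error[i]*pre[j]
                               + theta[1]*post[i]*pre[j] + theta[2]."""
    return (theta[0] * np.outer(error, pre)
            + theta[1] * np.outer(post, pre)
            + theta[2])

def apply_rule(W, x, y_target, theta, lr=0.1):
    y = np.tanh(W @ x)        # forward pass through one dense layer
    error = y_target - y      # feedback signal available to the rule
    return W + lr * shared_update_rule(x, y, error, theta)

# Because one rule updates every weight from purely local quantities, it
# runs unchanged on any layer size, and permuting the inputs or outputs
# merely permutes the updates: the three symmetries named above.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 7))             # any input/output size works
x, y_target = rng.normal(size=7), rng.normal(size=4)
theta = np.array([1.0, 0.0, 0.0])       # theta[0]=1: a delta-rule-like update
W = apply_rule(W, x, y_target, theta)
```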


A semantic genetic programming framework based on dynamic targets - Genetic Programming and Evolvable Machines

#artificialintelligence

Semantic GP is a promising branch of genetic programming (GP) that introduces semantic awareness during genetic evolution to improve various aspects of GP. This paper presents a new semantic GP approach based on Dynamic Target (SGP-DT) that divides the search problem into multiple GP runs. The evolution in each run is guided by a new (dynamic) target based on the residual errors of the previous runs. To obtain the final solution, SGP-DT combines the solutions of the individual runs using linear scaling. SGP-DT also introduces a new methodology for producing offspring that does not rely on classic crossover.
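As an illustration of that high-level scheme only: in the sketch below, run_gp is a hypothetical placeholder for a full GP regression run (not SGP-DT's published interface), and one_feature_gp is a deliberately trivial stand-in so the code runs end to end. Each run's dynamic target is the residual left by the linearly scaled ensemble built so far.

```python
import numpy as np

def linear_scale(pred, target):
    """Least-squares slope b and intercept a so that target ~= a + b*pred."""
    b, a = np.polyfit(pred, target, 1)
    return a, b

def one_feature_gp(X, target):
    """Trivial stand-in for a GP run: return the single feature most
    correlated with the current target (purely for demonstration)."""
    corrs = [abs(np.corrcoef(X[:, j], target)[0, 1]) for j in range(X.shape[1])]
    j = int(np.argmax(corrs))
    return lambda Xq: Xq[:, j]

def sgp_dt_outer_loop(X, y, run_gp, n_runs=5):
    """Each run's (dynamic) target is the residual left by the linearly
    scaled combination of all previous runs' solutions."""
    target = y.astype(float).copy()
    models = []
    for _ in range(n_runs):
        model = run_gp(X, target)         # one full GP run on the current target
        pred = model(X)
        a, b = linear_scale(pred, target)
        models.append((a, b, model))
        target = target - (a + b * pred)  # residuals become the next target
    return lambda Xq: sum(a + b * m(Xq) for a, b, m in models)

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=100)
ensemble = sgp_dt_outer_loop(X, y, one_feature_gp, n_runs=3)
```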


A Gentle Introduction to Particle Swarm Optimization

#artificialintelligence

Particle swarm optimization (PSO) is a bio-inspired algorithm, and a simple one, for searching for an optimal solution in the solution space. It differs from other optimization algorithms in that it needs only the objective function; it does not depend on the gradient or any differential form of the objective. It also has very few hyperparameters. In this tutorial, you will learn the rationale of PSO and its algorithm with an example. Particle swarm optimization was proposed by Kennedy and Eberhart in 1995.
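A compact, self-contained sketch of the commonly used inertia-weight variant of PSO follows; the parameter values are conventional defaults assumed here, not prescriptions from the tutorial.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(42)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros((n_particles, dim))               # particle velocities
    pbest = x.copy()                               # per-particle best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()     # swarm-wide best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull (own best) + social pull (swarm best).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function f(x) = sum(x_i^2).
best_x, best_f = pso(lambda x: float(np.sum(x**2)), dim=3)
```

Note that only the objective is ever called: there is no gradient anywhere in the loop, which is exactly the property the article highlights.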


Half a Dozen Real-World Applications of Evolutionary Multitasking and More

arXiv.org Artificial Intelligence

Until recently, the potential to transfer evolved skills across distinct optimization problem instances (or tasks) was seldom explored in evolutionary computation. The concept of evolutionary multitasking (EMT) fills this gap. It unlocks a population's implicit parallelism to jointly solve a set of tasks, hence creating avenues for skills transfer between them. Despite it being early days, the idea of EMT has begun to show promise in a range of real-world applications. In the backdrop of recent advances, the contribution of this paper is twofold. We first present a review of several application-oriented explorations of EMT in the literature, assimilating them into half a dozen broad categories according to their respective application areas. Within each category, the fundamental motivations for multitasking are discussed, together with an illustrative case study. Second, we present a set of recipes by which general problem formulations of practical interest, those that cut across different disciplines, could be transformed in the new light of EMT. We intend our discussions to not only underscore the practical utility of existing EMT methods, but also spark future research toward novel algorithms crafted for real-world deployment.
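As a toy illustration of the concept, loosely modeled on multifactorial evolutionary algorithms rather than on any specific method surveyed in the paper, the sketch below evolves a single population over two related tasks and occasionally crosses over individuals assigned to different tasks, so genetic material transfers between them. All names and parameters are illustrative.

```python
import random

def emt_sketch(tasks, pop_size=40, gens=200, rmp=0.3, dim=5):
    """Toy evolutionary multitasking: one population, many tasks.

    Each individual carries a skill factor (the task it is evaluated on);
    crossover between different skill factors transfers genetic material
    across tasks with probability rmp (random mating probability).
    """
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    skill = [i % len(tasks) for i in range(pop_size)]

    for _ in range(gens):
        i, j = random.sample(range(pop_size), 2)
        if skill[i] == skill[j] or random.random() < rmp:
            cut = random.randrange(1, dim)            # one-point crossover
            child = pop[i][:cut] + pop[j][cut:]
            s = random.choice([skill[i], skill[j]])   # inherit a parent's task
        else:
            child = [g + random.gauss(0, 0.1) for g in pop[i]]  # mutation only
            s = skill[i]
        # Replace the worst same-task individual if the child beats it.
        worst = min((k for k in range(pop_size) if skill[k] == s),
                    key=lambda k: tasks[s](pop[k]))
        if tasks[s](child) > tasks[s](pop[worst]):
            pop[worst] = child
    return pop, skill

# Two related toy maximization tasks whose optima share structure.
t1 = lambda x: -sum((xi - 0.5) ** 2 for xi in x)
t2 = lambda x: -sum((xi - 0.6) ** 2 for xi in x)
final_pop, final_skill = emt_sketch([t1, t2])
```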


An Adaptive PID Autotuner for Multicopters with Experimental Results

arXiv.org Artificial Intelligence

This paper develops an adaptive PID autotuner for multicopters, and presents simulation and experimental results. The autotuner consists of adaptive digital control laws based on retrospective cost adaptive control implemented in the PX4 flight stack. A learning trajectory is used to optimize the autopilot during a single flight. The autotuned autopilot is then compared with the default PX4 autopilot by flying a test trajectory constructed using the second-order Hilbert curve. In order to investigate the sensitivity of the autotuner to the quadcopter dynamics, the mass of the quadcopter is varied, and the performance of the autotuned and default autopilot is compared. It is observed that the autotuned autopilot outperforms the default autopilot.
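For orientation only, here is the generic discrete PID law that such an autotuner adjusts. The sketch uses fixed, hand-picked gains on a toy first-order plant; it is not the paper's retrospective cost adaptive control scheme, and all names and values are illustrative.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One step of a discrete PID law: u = kp*e + ki*integral(e) + kd*de/dt."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy closed loop: first-order plant x' = -x + u, setpoint 1.0.
x, state, dt = 0.0, (0.0, 0.0), 0.01
kp, ki, kd = 2.0, 1.0, 0.05     # the gains an autotuner would adapt online
for _ in range(1000):
    e = 1.0 - x
    u, state = pid_step(e, state, kp, ki, kd, dt)
    x += dt * (-x + u)          # forward-Euler step of the plant
```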


Regularization Guarantees Generalization in Bayesian Reinforcement Learning through Algorithmic Stability

arXiv.org Artificial Intelligence

In the Bayesian reinforcement learning (RL) setting, a prior distribution over the unknown problem parameters -- the rewards and transitions -- is assumed, and a policy that optimizes the (posterior) expected return is sought. A common approximation, which has been recently popularized as meta-RL, is to train the agent on a sample of $N$ problem instances from the prior, with the hope that for large enough $N$, good generalization behavior to an unseen test instance will be obtained. In this work, we study generalization in Bayesian RL under the probably approximately correct (PAC) framework, using the method of algorithmic stability. Our main contribution is showing that by adding regularization, the optimal policy becomes stable in an appropriate sense. Most stability results in the literature build on strong convexity of the regularized loss -- an approach that is not suitable for RL as Markov decision processes (MDPs) are not convex. Instead, building on recent results of fast convergence rates for mirror descent in regularized MDPs, we show that regularized MDPs satisfy a certain quadratic growth criterion, which is sufficient to establish stability. This result, which may be of independent interest, allows us to study the effect of regularization on generalization in the Bayesian RL setting.
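In generic notation (our assumption; the paper's exact definitions may differ), the regularized objective and the quadratic growth condition that substitutes for strong convexity might be written as:

```latex
% Regularized expected return for policy \pi on MDP M, with a convex
% policy regularizer \Omega (e.g., negative entropy) and coefficient \lambda:
J_{\lambda}(\pi; M)
  = \mathbb{E}_{M,\pi}\!\Big[\sum_{t} \gamma^{t}
      \big( r_{t} - \lambda\, \Omega\big(\pi(\cdot \mid s_{t})\big) \big)\Big]

% Quadratic growth around the regularized optimum \pi^{*}_{\lambda}:
% the objective drops at least quadratically as \pi moves away from it,
% which is the property used in place of strong convexity for stability.
J_{\lambda}(\pi^{*}_{\lambda}; M) - J_{\lambda}(\pi; M)
  \;\geq\; c\,\lambda\, \lVert \pi - \pi^{*}_{\lambda} \rVert^{2}
```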