Evolutionary Systems


Model of quantum artificial life on quantum computer

#artificialintelligence

The algorithm follows a protocol that the researchers refer to as biomimetic: it encodes in quantum systems behaviours analogous to those of living systems. Quantum biomimetics involves reproducing in quantum systems certain properties exclusive to living beings, and this research group had previously managed to imitate life, natural selection, learning and memory by means of quantum systems. As the authors themselves describe, the aim was "to design a set of quantum algorithms based on the imitation of biological processes, which take place in complex organisms, and transfer them to a quantum scale, so we were only trying to imitate the key aspects in these processes." In the artificial-life scenario they designed, a set of models of simple organisms is capable of accomplishing the most common phases of life in a controlled virtual environment, demonstrating that microscopic quantum systems can encode quantum characteristics and biological behaviours normally associated with living systems and natural selection. The organism models were dubbed units of quantum life. Each unit is made up of two qubits that act as genotype and phenotype, respectively; the genotype contains the information that describes the type of living unit, and this information is transmitted from generation to generation.
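The genotype-to-phenotype transfer described above can be pictured with a plain state-vector simulation. This is a minimal sketch, not the published protocol: the two-qubit unit (genotype qubit, phenotype qubit) follows the description, but the choice of a CNOT gate as the information-transfer step is an illustrative assumption.

```python
# Minimal 2-qubit state-vector sketch of a "quantum life unit":
# qubit 0 = genotype, qubit 1 = phenotype. The CNOT used for the
# genotype-to-phenotype transfer is an illustrative assumption.
# Basis ordering: |00>, |01>, |10>, |11> (genotype qubit written first).

def cnot(state):
    """CNOT with the genotype qubit as control: imprints the genotype's
    basis-state information onto the phenotype qubit."""
    s = list(state)
    s[2], s[3] = s[3], s[2]  # flip phenotype amplitude when genotype is |1>
    return s

# Genotype prepared in |1>, phenotype initialised to |0>: state |10>.
unit = [0, 0, 1, 0]
unit = cnot(unit)
# The phenotype now mirrors the genotype: the unit is in state |11>.
```

A genotype in superposition would likewise entangle the two qubits, which is one way such a unit can carry quantum (rather than purely classical) hereditary information.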


Machine learning spots natural selection at work in human genome

#artificialintelligence

The ability to sequence genomes quickly has provided scientists with reams of data, but understanding how evolution has shaped humans is still a difficult task. Pinpointing where and how the human genome is evolving can be like hunting for a needle in a haystack. Each person's genome contains three billion building blocks called nucleotides, and researchers must compile data from thousands of people to discover patterns that signal how genes have been shaped by evolutionary pressures. To find these patterns, a growing number of geneticists are turning to a form of machine learning called deep learning. Proponents of the approach say that deep-learning algorithms incorporate fewer explicit assumptions about what the genetic signatures of natural selection should look like than conventional statistical methods do.
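The contrast with hand-crafted test statistics can be made concrete with a toy sketch: a learned classifier is fed raw per-site allele frequencies and left to find the signature itself. Everything below is an assumption for illustration — the data model (a selective sweep inflating derived-allele frequencies) is a crude caricature, and a tiny logistic model stands in for the deep networks the article discusses.

```python
import math
import random

random.seed(0)

# Toy sketch: classify "neutral" vs "swept" genomic windows directly from
# allele frequencies instead of a hand-crafted statistic. The simulation
# (sweeps drawn from Beta(5,1), neutral sites from Beta(1,3)) and the
# logistic stand-in for a deep network are illustrative assumptions.

def window(swept, n_sites=20):
    """Simulate one genomic window as derived-allele frequencies per site."""
    if swept:
        return [random.betavariate(5, 1) for _ in range(n_sites)]
    return [random.betavariate(1, 3) for _ in range(n_sites)]

def train_logreg(X, y, lr=0.5, epochs=100):
    """Plain per-sample gradient descent on a logistic model."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            z = max(-30.0, min(30.0, z))  # clamp to avoid exp overflow
            p = 1.0 / (1.0 + math.exp(-z))
            b -= lr * (p - yi)
            w = [wj - lr * (p - yi) * xj for wj, xj in zip(w, xi)]
    return w, b

X = [window(s) for s in [1] * 100 + [0] * 100]
y = [1] * 100 + [0] * 100
w, b = train_logreg(X, y)

def predict(x):
    return 1 if b + sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0

# Accuracy on fresh simulated windows.
acc = sum(predict(window(s)) == s for s in [1, 0] * 50) / 100
```

The point is only the workflow: simulate labelled windows under competing evolutionary scenarios, then let the model learn the discriminating pattern from the raw data.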


Layout Design for Intelligent Warehouse by Evolution with Fitness Approximation

arXiv.org Artificial Intelligence

With the rapid growth of the express industry, intelligent warehouses that employ autonomous robots for carrying parcels have been widely used to handle the vast express volume. For such warehouses, the layout design plays a key role in improving transportation efficiency. However, this work is still done by human experts, which is expensive and leads to suboptimal results. In this paper, we aim to automate the warehouse layout design process. We propose a two-layer evolutionary algorithm to efficiently explore the warehouse layout space, where an auxiliary-objective fitness approximation model is introduced to predict the outcome of a designed warehouse layout, and a two-layer population structure is proposed to incorporate the approximation model into the ordinary evolution framework. Empirical experiments show that our method can efficiently design effective warehouse layouts that outperform both heuristic-designed and vanilla evolution-designed warehouse layouts.
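The surrogate-assisted structure described above can be sketched in a few lines. This is a generic illustration of the idea, not the paper's algorithm: the "layout" is an abstract bit-string, the expensive simulator is a stand-in function, and the approximation model is a simple nearest-neighbour predictor — all assumptions for the sake of the example.

```python
import random

random.seed(1)

# Sketch of a two-layer, surrogate-assisted evolutionary loop: a cheap
# approximation model pre-screens candidates so the expensive simulator is
# only run on the most promising ones. Layout encoding, simulator, and
# surrogate are all illustrative stand-ins.

N = 16  # length of the abstract layout encoding

def simulate(layout):
    """Stand-in for the expensive warehouse simulation (higher is better)."""
    return sum(layout)

archive = {}  # layout -> true fitness; the surrogate's training data

def surrogate(layout):
    """Predict fitness from the closest already-simulated layout."""
    key = min(archive, key=lambda k: sum(a != b for a, b in zip(k, layout)))
    return archive[key]

def mutate(layout):
    child = list(layout)
    child[random.randrange(N)] ^= 1
    return tuple(child)

pop = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(8)]
for p in pop:
    archive[p] = simulate(p)

for gen in range(30):
    # Lower layer: many cheap candidates, ranked by the surrogate.
    candidates = [mutate(random.choice(pop)) for _ in range(40)]
    candidates.sort(key=surrogate, reverse=True)
    # Upper layer: spend real simulations only on the surrogate's top picks.
    for c in candidates[:4]:
        archive[c] = simulate(c)
    pop = sorted(archive, key=archive.get, reverse=True)[:8]

best = max(archive.values())
```

The design point is the budget split: 40 surrogate evaluations per generation cost almost nothing, while only 4 true simulations are spent, which is where such a scheme pays off when each simulation is expensive.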


Genetic algorithm for optimal distribution in cities

arXiv.org Artificial Intelligence

The problem addressed in this project is the routing of electric vehicles: finding the best routes for this type of vehicle so that they reach their destination without running out of power while minimizing transportation costs. The importance of this problem lies mainly in the shipping sector of the near future, when obsolete energy sources are replaced with renewable ones. Each vehicle carries a number of packages that must be delivered at specific points in the city, but, being electric, the vehicles have limited battery life, so tracing ideal routes is vital for their proper functioning. Applications of this problem can already be seen today in the cleaning sector, specifically with the trucks responsible for collecting garbage, which aim to travel the entire city in the most efficient way without letting excessive garbage accumulate.
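A genetic algorithm for this kind of routing problem can be sketched as follows. The example is a simplification under stated assumptions: the city is a handful of made-up 2-D delivery points, fitness is plain route length (standing in for transportation cost), and the battery constraint is omitted for brevity.

```python
import math
import random

random.seed(2)

# Toy genetic algorithm for a delivery route: evolve a visiting order over
# the delivery points that minimises total distance. Coordinates, population
# size, and operators are illustrative; the battery constraint is omitted.

points = [(0, 0), (2, 1), (5, 2), (1, 4), (4, 5), (3, 0), (6, 1)]

def length(route):
    return sum(math.dist(points[route[i]], points[route[i + 1]])
               for i in range(len(route) - 1))

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    mid = a[i:j]
    rest = [g for g in b if g not in mid]
    return rest[:i] + mid + rest[i:]

def mutate(route):
    r = list(route)
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]  # swap two stops
    return r

pop = [random.sample(range(len(points)), len(points)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=length)
    parents = pop[:10]  # elitism: the best routes survive unchanged
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = min(pop, key=length)
```

A real electric-vehicle variant would add a penalty (or repair step) whenever a route's cumulative distance exceeds the remaining battery range, leaving the evolutionary loop itself unchanged.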


Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning

arXiv.org Artificial Intelligence

Networks are fundamental building blocks for representing data and computations. Remarkable progress in learning in structurally defined (shallow or deep) networks has recently been achieved. Here we introduce an evolutionary exploratory search and learning method for topologically flexible networks under the constraint of producing elementary computational steady-state input-output operations. Our results include: (1) the identification of networks, over four orders of magnitude, implementing computation of steady-state input-output functions such as a band-pass filter, a threshold function, and an inverse band-pass function. (2) The learned networks are technically controllable, as only a small number of driver nodes are required to move the system to a new state. Furthermore, we find that the fraction of required driver nodes is constant during evolutionary learning, suggesting a stable system design. (3) Our framework allows multiplexing of different computations using the same network. For example, using a binary representation of the inputs, the network can readily compute three different input-output functions. Finally, (4) the proposed evolutionary learning demonstrates transfer learning: if the system learns one function A, then learning B requires on average fewer steps than learning B from tabula rasa. We conclude that constrained evolutionary learning produces large, robust, controllable circuits capable of multiplexing and transfer learning. Our study suggests that network-based computations of steady-state functions, representing either cellular modules of cell-to-cell communication networks or internal molecular circuits communicating within a cell, could be a powerful model for biologically inspired computing. This complements conceptualizations such as attractor-based models or reservoir computing.
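The core loop — evolving parameters until a steady-state input-output map matches a target function — can be illustrated at toy scale. This sketch evolves a single sigmoid node toward a threshold function with a (1+1)-evolution strategy; it is far simpler than the topologically flexible networks in the paper and is purely illustrative.

```python
import math
import random

random.seed(3)

# Minimal sketch of evolutionary learning of a steady-state input-output
# function: a (1+1)-evolution strategy tunes a single sigmoid node until
# its output approximates a threshold function. A one-node "network" is an
# illustrative stand-in for the flexible topologies in the paper.

inputs = [i / 10 for i in range(11)]
target = [1.0 if x >= 0.5 else 0.0 for x in inputs]  # threshold function

def output(w, b, x):
    """Steady-state response of the single node."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def error(w, b):
    return sum((output(w, b, x) - t) ** 2 for x, t in zip(inputs, target))

w, b = 0.0, 0.0
for _ in range(3000):
    # Mutate; keep the child only if it is no worse (neutral drift allowed).
    w2, b2 = w + random.gauss(0, 0.5), b + random.gauss(0, 0.5)
    if error(w2, b2) <= error(w, b):
        w, b = w2, b2
```

In the paper's setting the mutation step would also rewire the topology, and the fitness would come from the network's simulated steady state rather than a closed-form node response.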


Analyzing different prototype selection techniques for dynamic classifier and ensemble selection

arXiv.org Machine Learning

Abstract -- In dynamic selection (DS) techniques, only the most competent classifiers for the classification of a specific test sample are selected to predict the sample's class label. The most important step in DS techniques is estimating the competence of the base classifiers for the classification of each specific test sample. The classifiers' competence is usually estimated using the neighborhood of the test sample defined on the validation samples, called the region of competence. Thus, the performance of DS techniques is sensitive to the distribution of the validation set. In this paper, we evaluate six prototype selection techniques that work by editing the validation data in order to remove noisy and redundant instances. Experiments conducted using several state-of-the-art DS techniques over 30 classification problems demonstrate that by using prototype selection techniques we can improve the classification accuracy of DS techniques and also significantly reduce the computational cost involved. Multiple Classifier Systems (MCS) aim to combine classifiers in order to increase the recognition accuracy in pattern recognition systems [1], [2]. MCS are composed of three phases [3]: (1) Generation, (2) Selection, and (3) Integration.
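One classic member of the prototype selection family evaluated above is the Edited Nearest Neighbour (ENN) rule, which removes validation samples that disagree with the majority label of their k nearest neighbours. The 2-D toy data below (two clean clusters plus one planted noisy point) are an illustrative assumption; the paper works with real benchmark datasets.

```python
import math

# Edited Nearest Neighbour (ENN) prototype selection: drop every sample
# whose own k nearest neighbours (excluding itself) outvote its label.
# The toy 2-D dataset is illustrative.

def knn_label(data, x, k=3, skip=None):
    """Majority label among the k nearest neighbours of point x."""
    ranked = sorted((p for i, p in enumerate(data) if i != skip),
                    key=lambda p: math.dist(p[0], x))
    labels = [lab for _, lab in ranked[:k]]
    return max(set(labels), key=labels.count)

def enn(data, k=3):
    """Keep only samples whose neighbourhood agrees with their label."""
    return [p for i, p in enumerate(data)
            if knn_label(data, p[0], k, skip=i) == p[1]]

# Two clusters with one noisy point planted inside the wrong cluster.
data = ([((x, 0.0), 'a') for x in (0.0, 0.1, 0.2, 0.3)]
        + [((x, 0.0), 'b') for x in (5.0, 5.1, 5.2, 5.3)]
        + [((0.15, 0.1), 'b')])  # noise: label 'b' inside cluster 'a'

edited = enn(data)  # the noisy point is removed; the clean samples remain
```

After editing, a dynamic selection method would estimate competence regions on `edited` instead of the raw validation set, which is exactly where the accuracy and cost benefits reported above come from.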


META-DES.Oracle: Meta-learning and feature selection for ensemble selection

arXiv.org Machine Learning

The key issue in Dynamic Ensemble Selection (DES) is defining a suitable criterion for calculating the classifiers' competence. There are several criteria available to measure the level of competence of base classifiers, such as local accuracy estimates and ranking. However, using only one criterion may lead to a poor estimation of the classifier's competence. In order to deal with this issue, we have proposed a novel dynamic ensemble selection framework using meta-learning, called META-DES. An important aspect of the META-DES framework is that multiple criteria can be embedded in the system, encoded as different sets of meta-features. However, some DES criteria are not suitable for every classification problem. For instance, local accuracy estimates may produce poor results when there is a high degree of overlap between the classes. Moreover, a higher classification accuracy can be obtained if the performance of the meta-classifier is optimized for the corresponding data. In this paper, we propose a novel version of the META-DES framework based on the formal definition of the Oracle, called META-DES.Oracle. The Oracle is an abstract method that represents an ideal classifier selection scheme. A meta-feature selection scheme using an overfitting-cautious Binary Particle Swarm Optimization (BPSO) is proposed for improving the performance of the meta-classifier. The difference between the outputs obtained by the meta-classifier and those presented by the Oracle is minimized. Thus, the meta-classifier is expected to obtain results similar to those of the Oracle. Experiments carried out using 30 classification problems demonstrate that the optimization procedure based on the Oracle definition leads to a significant improvement in classification accuracy when compared to previous versions of the META-DES framework and other state-of-the-art DES techniques.
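The BPSO component mentioned above works on bit-masks over the meta-feature set. The sketch below shows the generic binary PSO mechanism only: the fitness is a toy stand-in that rewards a fixed "useful" subset of features, whereas in META-DES.Oracle it would measure agreement between the meta-classifier and the Oracle; the swarm settings are likewise assumptions.

```python
import math
import random

random.seed(4)

# Generic Binary Particle Swarm Optimization (BPSO) for feature selection.
# The toy fitness (features 0-4 are "useful", the rest are noise) stands in
# for the meta-classifier/Oracle agreement used in META-DES.Oracle.

N_FEATURES = 10
USEFUL = set(range(5))  # assumption for the toy objective

def fitness(mask):
    return sum(1 if i in USEFUL else -1
               for i, bit in enumerate(mask) if bit)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

particles = [[random.randint(0, 1) for _ in range(N_FEATURES)]
             for _ in range(12)]
velocity = [[0.0] * N_FEATURES for _ in particles]
pbest = [list(p) for p in particles]            # each particle's best mask
gbest = list(max(particles, key=fitness))       # swarm's best mask

for _ in range(60):
    for i, p in enumerate(particles):
        for d in range(N_FEATURES):
            # Pull each bit's velocity toward the personal and global bests.
            velocity[i][d] += (2 * random.random() * (pbest[i][d] - p[d])
                               + 2 * random.random() * (gbest[d] - p[d]))
            # Resample the bit with probability given by the velocity.
            p[d] = 1 if random.random() < sigmoid(velocity[i][d]) else 0
        if fitness(p) > fitness(pbest[i]):
            pbest[i] = list(p)
        if fitness(p) > fitness(gbest):
            gbest = list(p)
```

The "overfitting-cautious" aspect of the paper's variant would additionally monitor the fitness on held-out data when updating the bests, a detail omitted from this sketch.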


Empirical Evaluation of Contextual Policy Search with a Comparison-based Surrogate Model and Active Covariance Matrix Adaptation

arXiv.org Machine Learning

Contextual policy search (CPS) is a class of multi-task reinforcement learning algorithms that is particularly useful for robotic applications. A recent state-of-the-art method is Contextual Covariance Matrix Adaptation Evolution Strategies (C-CMA-ES). It is based on the standard black-box optimization algorithm CMA-ES. There are two useful extensions of CMA-ES that we transfer to C-CMA-ES and evaluate empirically: ACM-ES, which uses a comparison-based surrogate model, and aCMA-ES, which uses an active update of the covariance matrix. We show that the improvements these methods yield in sample-efficiency can be impressive, although this is no longer relevant for the robotic domain.


Challenges of Generalization in Machine Learning

#artificialintelligence

Neural networks can be sensitive to the starting point (i.e., the random initialization of their weights). Similar behavior is observed with random forest models due to random effects in searching the model space. The selection of folds can also introduce variations from one set of runs to another if the folds vary. Generally, it is advised to take the folds in a uniform way, stepping through the data. In most tutorials, you are advised to fix the "seed" for the random number generator in your programming language to avoid variations when trying to repeat runs.
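The seeding advice above can be demonstrated in a few lines. The `noisy_model_score` function is a made-up stand-in for any stochastic training procedure (random weight initialization, fold shuffling, and so on).

```python
import random

# Illustration of run-to-run variation and seeding: the same stochastic
# procedure gives different results on each call unless the random number
# generator is seeded, after which runs are exactly repeatable.

def noisy_model_score(seed=None):
    """Stand-in for training a model whose result depends on randomness."""
    rng = random.Random(seed)  # a local RNG; seed=None draws fresh entropy
    return sum(rng.random() for _ in range(10))

unseeded_a = noisy_model_score()
unseeded_b = noisy_model_score()
seeded_a = noisy_model_score(seed=42)
seeded_b = noisy_model_score(seed=42)
# seeded_a == seeded_b, while the unseeded scores will almost surely differ.
```

Using a local `random.Random(seed)` rather than the module-level `random.seed()` keeps the reproducibility confined to the procedure in question instead of silently affecting every other consumer of the global generator.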


MaaSim: A Liveability Simulation for Improving the Quality of Life in Cities

arXiv.org Machine Learning

Urbanism is no longer planned on paper thanks to powerful models and 3D simulation platforms. However, current work is not open to the public and lacks an optimisation agent that could help in decision making. This paper describes the creation of an open-source simulation based on an existing Dutch liveability score with a built-in AI module. Features are selected using feature engineering and Random Forests. Then, a modified scoring function is built based on the former liveability classes. The score is predicted using Random Forest regression and achieved a recall of 0.83 with 10-fold cross-validation. Afterwards, Exploratory Factor Analysis is applied to select the actions present in the model. The resulting indicators are divided into 5 groups, and 12 actions are generated. The performance of four optimisation algorithms is compared, namely NSGA-II, PAES, SPEA2 and eps-MOEA, on established criteria of quality: cardinality, spread of the solutions, spacing, and the resulting score and number of turns. Although all four algorithms show different strengths, eps-MOEA is selected as the most suitable for this problem. Ultimately, the simulation incorporates the model and the selected AI module in a GUI written in the Kivy framework for Python. Tests performed on users show positive responses and encourage further initiatives towards joining technology and public applications.
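Two of the building blocks behind such a comparison of multi-objective optimisers can be sketched directly: extracting the non-dominated (Pareto) front from a set of candidate solutions, and the "spacing" indicator that measures how evenly that front is spread. The objective vectors below are made-up examples with both objectives minimised; the paper's criteria also include cardinality and spread, which are computed analogously.

```python
import math

# Pareto-front extraction and the "spacing" quality indicator, two pieces
# used when comparing multi-objective optimisers such as NSGA-II or SPEA2.
# Both objectives are minimised; the example vectors are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

def spacing(front):
    """Standard deviation of nearest-neighbour distances along the front;
    lower values mean a more evenly spread front."""
    d = [min(math.dist(a, b) for b in front if b != a) for a in front]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))

solutions = [(1, 9), (2, 7), (3, 5), (4, 4), (5, 2), (6, 6), (9, 9)]
front = pareto_front(solutions)  # (6, 6) and (9, 9) are dominated
```

Indicators like these let algorithms be ranked without collapsing the objectives into a single score, which is why the paper evaluates NSGA-II, PAES, SPEA2 and eps-MOEA on several of them at once.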