Evolutionary Systems


Adaptive Genomic Evolution of Neural Network Topologies (AGENT) for State-to-Action Mapping in Autonomous Agents

arXiv.org Artificial Intelligence

Neuroevolution is a process of training neural networks (NN) through an evolutionary algorithm, usually to serve as a state-to-action mapping model in control or reinforcement learning-type problems. This paper builds on the NeuroEvolution of Augmenting Topologies (NEAT) formalism, which allows designing topology- and weight-evolving NNs. Fundamental advancements are made to the neuroevolution process to address premature stagnation and convergence issues, central among which is the incorporation of automated mechanisms to control the population diversity and average fitness improvement within the neuroevolution process. Insights into the performance and efficiency of the new algorithm are obtained by evaluating it on three benchmark problems from the OpenAI platform and an Unmanned Aerial Vehicle (UAV) collision avoidance problem.
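
As a rough illustration of the kind of control loop described above, the sketch below evolves fixed-topology weight vectors and adapts the mutation strength from two signals: population diversity and the change in average fitness between generations. The fitness function, population size, and adaptation thresholds are placeholders rather than details from the AGENT paper, and topology evolution and speciation (which NEAT-style methods add) are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # Placeholder objective: higher is better (negative sphere function).
    return -np.sum(w ** 2)

def diversity(pop):
    # Mean distance to the population centroid as a simple diversity proxy.
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1))

pop_size, n_weights, sigma = 50, 8, 0.3
pop = rng.normal(size=(pop_size, n_weights))
prev_avg = -np.inf

for gen in range(100):
    fits = np.array([fitness(ind) for ind in pop])
    avg = fits.mean()

    # Adaptive control (illustrative heuristic only): if average fitness
    # stagnates or diversity collapses, raise the mutation strength to push
    # the search away from premature convergence; otherwise decay it gently.
    if avg - prev_avg < 1e-3 or diversity(pop) < 0.5:
        sigma = min(sigma * 1.5, 2.0)
    else:
        sigma = max(sigma * 0.9, 0.05)
    prev_avg = avg

    # Truncation selection: keep the top half, refill with mutated copies.
    elite = pop[np.argsort(fits)[-pop_size // 2:]]
    children = elite + rng.normal(scale=sigma, size=elite.shape)
    pop = np.vstack([elite, children])

print("best fitness:", max(fitness(ind) for ind in pop))
```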


Ancient myths reveal early fantasies about artificial life

#artificialintelligence

Thousands of years before machine learning and self-driving cars became reality, the tales of giant bronze robot Talos, artificial woman Pandora and their creator god, Hephaestus, filled the imaginations of people in ancient Greece. A Greek vase painting, dating to about 450 B.C., depicts the death of Talos. Stanford's Adrienne Mayor examined the myth of Talos and others in her latest research. Historians usually trace the idea of automata to the Middle Ages, when the first self-moving devices were invented, but the concept of artificial, lifelike creatures dates to the myths and legends from at least about 2,700 years ago, said Adrienne Mayor, a research scholar in the Department of Classics in the School of Humanities and Sciences. These ancient myths are the subject of Mayor's latest book, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. "Our ability to imagine artificial intelligence goes back to the ancient times," said Mayor, who is also a 2018-19 fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford. "Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths."


A Hybrid GA-PSO Method for Evolving Architecture and Short Connections of Deep Convolutional Neural Networks

arXiv.org Artificial Intelligence

Image classification is a difficult machine learning task, to which Convolutional Neural Networks (CNNs) have been applied for over 20 years. In recent years, instead of the traditional approach of connecting each layer only to the next layer, shortcut connections have been proposed that also link a layer to layers further forward, which has been shown to facilitate the training of deep CNNs. However, there are various ways to build shortcut connections, and it is hard to design the best ones manually for a particular problem, especially given that designing the network architecture is already very challenging. In this paper, a hybrid evolutionary computation (EC) method is proposed to automatically evolve both the architecture of deep CNNs and the shortcut connections. The three major contributions of this work are: firstly, a new encoding strategy is proposed to encode a CNN, where the architecture and the shortcut connections are encoded separately; secondly, a hybrid two-level EC method, which combines particle swarm optimisation and genetic algorithms, is developed to search for the optimal CNNs; lastly, an adjustable learning rate is introduced for the fitness evaluations, which provides a better learning rate for the training process given a fixed number of epochs. The proposed algorithm is evaluated on three widely used image classification benchmark datasets and compared with 12 non-EC-based peer competitors and one EC-based competitor. The experimental results demonstrate that the proposed method outperforms all of the peer competitors in terms of classification accuracy.
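
The separate encoding of architecture and shortcut connections can be pictured with a toy genome like the one below: one list of layer genes and one binary vector that switches individual skip connections on or off. The field names and decoding here are illustrative assumptions, not the encoding used in the paper.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import List

@dataclass
class LayerGene:
    # One convolutional block: number of filters and kernel size (illustrative fields).
    filters: int
    kernel: int

@dataclass
class CNNGenome:
    layers: List[LayerGene]   # architecture part of the genome
    shortcuts: List[int]      # binary part: one bit per candidate skip connection

    def candidate_shortcuts(self):
        # Every forward pair (i, j) with j > i + 1 is a candidate skip connection.
        n = len(self.layers)
        return [(i, j) for i, j in combinations(range(n), 2) if j > i + 1]

    def decode(self):
        active = [pair for bit, pair in zip(self.shortcuts, self.candidate_shortcuts()) if bit]
        return {"blocks": [(g.filters, g.kernel) for g in self.layers],
                "skip_connections": active}

genome = CNNGenome(
    layers=[LayerGene(32, 3), LayerGene(64, 3), LayerGene(128, 3), LayerGene(128, 3)],
    shortcuts=[1, 0, 1],  # one bit for each of (0,2), (0,3), (1,3)
)
print(genome.decode())
```

In a two-level scheme of the kind described, one optimiser could search over the layer genes while the other searches over the shortcut bits; this split is what the separate encoding makes convenient.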


Lexicographically Ordered Multi-Objective Clustering

arXiv.org Artificial Intelligence

We introduce a rich model for multi-objective clustering with lexicographic ordering over objectives and a slack. The slack denotes the allowed multiplicative deviation from the optimal value of a higher-priority objective, in order to facilitate improvement in lower-priority objectives. We then propose an algorithm called Zeus to solve this class of problems, which is characterized by a makeshift function. The makeshift fine-tunes the clusters formed by the already-processed objectives so as to improve the clustering with respect to the unprocessed objectives, given the slack. We present makeshift functions for three different classes of objectives and analyze their solution guarantees. Finally, we empirically demonstrate the effectiveness of our approach on three applications using real-world data.
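
A minimal way to read the lexicographic-with-slack idea: process objectives in priority order, and after each one keep only the candidates whose value is within a multiplicative slack of the best value for that objective. The sketch below does this for minimization objectives over an explicit candidate set; the actual Zeus algorithm and its makeshift functions operate on clusterings rather than enumerated candidates.

```python
def lexicographic_with_slack(candidates, objectives, slack):
    """Filter candidates by objectives in priority order (all minimized).

    After each objective, retain candidates whose value is at most
    slack * (best value of that objective among the current survivors).
    """
    survivors = list(candidates)
    for obj in objectives:
        best = min(obj(c) for c in survivors)
        survivors = [c for c in survivors if obj(c) <= slack * best]
    return survivors

# Toy example: points scored on two prioritized objectives.
points = [(1.0, 9.0), (1.1, 2.0), (1.3, 1.0), (3.0, 0.5)]
objectives = [lambda p: p[0], lambda p: p[1]]   # p[0] has higher priority
print(lexicographic_with_slack(points, objectives, slack=1.2))
```

With slack 1.2, the candidate (1.1, 2.0) survives even though (1.0, 9.0) is optimal on the first objective, illustrating how a small sacrifice on the higher-priority objective buys a large gain on the lower-priority one.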


A Tandem Evolutionary Algorithm for Identifying Causal Rules from Complex Data

#artificialintelligence

We propose a new evolutionary approach for discovering causal rules in complex classification problems from batch data. Key aspects include (a) the use of a hypergeometric probability mass function as a principled fitness statistic that quantifies the probability that the observed association between a given clause and the target class is due to chance, taking into account the size of the dataset, the amount of missing data, and the distribution of outcome categories; (b) tandem age-layered evolutionary algorithms for evolving parsimonious archives of conjunctive clauses, and of disjunctions of these conjunctions, each of which has a probabilistically significant association with an outcome class; and (c) separate archive bins for clauses of different orders, with dynamically adjusted order-specific thresholds. The method is validated on majority-on and multiplexer benchmark problems exhibiting various combinations of heterogeneity, epistasis, overlap, noise in class associations, missing data, extraneous features, and imbalanced classes. In all synthetic epistatic benchmarks, we consistently recover the true causal rule sets used to generate the data. Finally, we discuss an application to a complex real-world survey dataset designed to inform possible ecohealth interventions for Chagas disease.
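
The hypergeometric fitness in (a) can be sketched concretely: given N records with K in the target class, a clause that matches n records, k of them in the target class, is scored by the probability of seeing at least k target-class records among n draws by chance. The use of scipy here, and the treatment of missing data (simply excluded from the counts), are assumptions for illustration, not the paper's exact procedure.

```python
from scipy.stats import hypergeom

def clause_fitness(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n).

    N: records with usable (non-missing) data, K: records in the target class,
    n: records matched by the clause, k: matched records in the target class.
    A small value means the clause-class association is unlikely to be chance.
    """
    return hypergeom.sf(k - 1, N, K, n)

# Toy example: 1000 usable records, 200 in the target class;
# a clause matches 50 records, 30 of which are in the target class.
print(clause_fitness(N=1000, K=200, n=50, k=30))
```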


Conservative Agency via Attainable Utility Preservation

arXiv.org Artificial Intelligence

Reward functions are often misspecified. An agent optimizing an incorrect reward function can change its environment in large, undesirable, and potentially irreversible ways. Work on impact measurement seeks a means of identifying (and thereby avoiding) large changes to the environment. We propose a novel impact measure which induces conservative, effective behavior across a range of situations. The approach attempts to preserve the attainable utility of auxiliary objectives. We evaluate our proposal on an array of benchmark tasks and show that it matches or outperforms relative reachability, the state-of-the-art in impact measurement.
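
The attainable-utility idea can be summarized in one line: penalize actions by how much they shift the agent's ability to optimize a set of auxiliary objectives, measured against doing nothing. The sketch below assumes Q-values for the auxiliary objectives are already available and uses a simple scaled absolute-difference penalty; the exact scaling used in the paper differs and is omitted here.

```python
def aup_reward(reward, q_aux, state, action, noop, lam=0.1):
    """Penalized reward in the spirit of attainable utility preservation.

    q_aux: list of functions q(state, action) giving the attainable value of
    each auxiliary objective. The penalty is the mean absolute change in those
    values relative to taking the no-op action. (Simplified sketch.)
    """
    penalty = sum(abs(q(state, action) - q(state, noop)) for q in q_aux) / len(q_aux)
    return reward - lam * penalty

# Toy example with two hand-made auxiliary Q-functions.
q_aux = [lambda s, a: 1.0 if a == "noop" else 0.2,
         lambda s, a: 0.5]
print(aup_reward(reward=1.0, q_aux=q_aux, state=None, action="push", noop="noop"))
```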


A Dictionary Based Generalization of Robust PCA

arXiv.org Machine Learning

We analyze the decomposition of a data matrix, assumed to be a superposition of a low-rank component and a component which is sparse in a known dictionary, using a convex demixing method. We provide a unified analysis, encompassing both undercomplete and overcomplete dictionary cases, and show that the constituent components can be successfully recovered under some relatively mild assumptions up to a certain global sparsity level. Further, we corroborate our theoretical results by presenting empirical evaluations in terms of phase transitions in rank and sparsity for various dictionary sizes. Exploiting the inherent structure of data for the recovery of relevant information is at the heart of data analysis, and a wide range of problems can be expressed in this form. Perhaps the most celebrated of these is principal component analysis (PCA) [1], which can be viewed as a special case of this decomposition; in the absence of the low-rank component, the problem instead reduces to that of sparse recovery [2-4]. See [5] and references therein for an overview of related works.
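
The convex demixing program referenced above typically takes the following form (notation assumed here, and the paper's exact constraints and regularization may differ): with data matrix M, low-rank component L, known dictionary R, and sparse coefficient matrix S,

\[
\min_{L,\,S}\ \|L\|_{*} + \lambda \,\|S\|_{1}
\quad \text{subject to} \quad M = L + R\,S ,
\]

where \(\|\cdot\|_{*}\) is the nuclear norm promoting low rank, \(\|\cdot\|_{1}\) promotes sparsity of the dictionary coefficients, and \(\lambda > 0\) balances the two terms.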


Evolutionary Algorithms are the New Deep Learning

#artificialintelligence

Deep learning (DL) has transformed much of AI and demonstrated how machine learning can make a difference in the real world. Its core technology is gradient descent, which has been used in neural networks since the 1980s. However, the massive expansion of available training data and compute has given it a new instantiation that significantly increased its power. Evolutionary computation (EC) is on the verge of a similar breakthrough. Importantly, however, EC addresses a different but equally far-reaching problem.


A Bill of Rights for the Age of Artificial Intelligence

#artificialintelligence

In 1950, Norbert Wiener's The Human Use of Human Beings was at the cutting edge of vision and speculation in proclaiming: But this was his book's denouement, and it has left us hanging now for 68 years, lacking not only prescriptions and proscriptions but even a well-articulated "problem statement." We have since seen similar warnings about the threat of our machines, even in the form of outreach to the masses, via films like Colossus: The Forbin Project (1970), The Terminator (1984), The Matrix (1999), and Ex Machina (2015). But now the time is ripe for a major update with fresh, new perspectives -- notably focused on generalizations of our "human" rights and our existential needs. Concern has tended to focus on "us versus them" (robots) or "gray goo" (nanotech) or "monocultures of clones" (bio). To extrapolate current trends: What if we could make or grow almost anything and engineer any level of safety and efficacy desired?