Genetic programming (a branch of evolutionary computation) creates generations of computer programs "using the principles of Darwinian natural selection and biologically inspired operations. The operations include reproduction, crossover (sexual recombination), mutation, and architecture-altering operations patterned after gene duplication and gene deletion in nature."
– Genetic Programming, Inc.
Last year, scientists created the first living machines by assembling cells from African clawed frogs into tiny robots. These "Xenobots" used sculpted cardiac cells to propel themselves along, push payloads, and even work collectively in swarms. Today, the same research team announced the creation of life forms capable of self-assembling a body from a single cell, according to a new study published in the journal Science Robotics. The new Xenobots can also move more quickly, navigate varying environments, and live longer than the first models, all while working in groups and healing themselves when damaged. Unlike the earlier Xenobots (version 1.0), millimeter-sized automatons built "top-down" through the manual placement of tissue, surgically shaping frog skin and adding cardiac cells to create motion, the new versions take shape "bottom-up" on their own.
Synthetic cells made by combining components of Mycoplasma bacteria with a chemically synthesised genome can grow and divide into cells of uniform shape and size, just like most natural bacterial cells. In 2016, researchers led by Craig Venter at the J. Craig Venter Institute in San Diego, California, announced that they had created synthetic "minimal" cells, named JCVI-syn3.0; the genome in each cell contained just 473 key genes thought to be essential for life. But on closer inspection of the dividing cells, Elizabeth Strychalski at the US National Institute of Standards and Technology and her colleagues noticed that they weren't splitting uniformly and evenly to produce identical daughter cells as most natural bacteria do.
Researchers have used advanced AI and large sets of genomic data to unveil how humans have adapted to recent diseases. The method could also be applied to new pathogens such as the coronavirus that causes COVID-19, helping identify which gene mutations may be associated with more severe cases of the disease. The study, by researchers from Imperial College London, the Middle East Technical University, Turkey, and the Università degli Studi di Bari Aldo Moro, Italy, is published today in a Special Issue of Molecular Ecology Resources, "Machine Learning techniques in Evolution and Ecology." Natural selection is the process by which beneficial gene mutations are preserved from generation to generation until they become dominant in our genomes, the catalog of all our genes. One thing that can drive natural selection is protection against pathogens.
We introduce DeLeNoX (Deep Learning Novelty Explorer), a system that autonomously creates artifacts in constrained spaces according to its own evolving interestingness criterion. DeLeNoX proceeds in alternating phases of exploration and transformation. In the exploration phases, a version of novelty search augmented with constraint handling searches for maximally diverse artifacts using a given distance function. In the transformation phases, a deep learning autoencoder learns to compress the variation between the found artifacts into a lower-dimensional space. The newly trained encoder is then used as the basis for a new distance function, transforming the criteria for the next exploration phase. In the current paper, we apply DeLeNoX to the creation of spaceships suitable for use in two-dimensional arcade-style computer games, a representative problem in procedural content generation in games. We also situate DeLeNoX in relation to the distinction between exploratory and transformational creativity, and in relation to Schmidhuber's theory of creativity through the drive for compression progress.
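The alternating exploration/transformation loop described above can be sketched in a few lines. In this illustrative sketch the deep autoencoder is stood in for by a one-dimensional linear compression (top singular vector of the found artifacts), and every function name, threshold, and parameter below is an assumption for illustration, not the authors' implementation:

```python
# Sketch of a DeLeNoX-style loop: novelty search explores with a distance
# function, then a learned compressor redefines that distance.
import numpy as np

rng = np.random.default_rng(0)

def novelty(x, archive, distance):
    """Mean distance from x to its (up to) 5 nearest archive members."""
    d = sorted(distance(x, a) for a in archive)
    return float(np.mean(d[:5]))

def explore(distance, steps=200, dim=8):
    """Novelty search: keep artifacts far (under `distance`) from the archive."""
    archive = [rng.normal(size=dim)]
    for _ in range(steps):
        parent = archive[rng.integers(len(archive))]
        child = parent + 0.3 * rng.normal(size=dim)   # mutate a parent
        if novelty(child, archive, distance) > 0.5:   # novel enough to keep
            archive.append(child)
    return archive

def transform(archive):
    """'Train' a compressor on the found artifacts; the top principal
    direction stands in for the autoencoder's learned encoder."""
    X = np.array(archive)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    w = vt[0]                                 # 1-D code direction
    return lambda a, b: abs(float((a - b) @ w))  # distance in code space

# Alternate phases: each exploration uses the distance learned last time.
distance = lambda a, b: float(np.linalg.norm(a - b))  # initial distance
for phase in range(2):
    archive = explore(distance)
    distance = transform(archive)

print(len(archive))
```

The key design point mirrored here is that the distance function, and hence what counts as "novel", is itself re-learned from whatever the previous exploration phase produced.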
This paper introduces SuSketch, a design tool for first-person shooter levels. SuSketch provides the designer with gameplay predictions for two competing players of specific character classes. The interface allows the designer to work side-by-side with an artificially intelligent creator and to receive varied types of feedback such as path information, predicted balance between players in a complete playthrough, or a predicted heatmap of the locations of player deaths. The system also proactively designs alternatives to the level and class pairing, and presents them to the designer as suggestions that improve the predicted balance of the game. SuSketch offers a new way of integrating machine learning into mixed-initiative co-creation tools, as a surrogate of human play trained on a large corpus of artificial playtraces. A user study with 16 game developers indicated that the tool was easy to use, but also highlighted a need to make SuSketch more accessible and more explainable.
While artificial intelligence has been applied to control players' decisions in board games for over half a century, little attention is given to games with no player competition. Pandemic is an exemplar collaborative board game where all players coordinate to overcome challenges posed by events occurring during the game's progression. This paper proposes an artificial agent which controls all players' actions and balances chances of winning versus risk of losing in this highly stochastic environment. The agent applies a Rolling Horizon Evolutionary Algorithm on an abstraction of the game-state that lowers the branching factor and simulates the game's stochasticity. Results show that the proposed algorithm can find winning strategies more consistently in different games of varying difficulty.

Traditional board games such as chess and backgammon, as well as recent card games such as Race for the Galaxy (Rio Grande, 2007) or digitized board games such as Hearthstone (Blizzard, 2014) [11, 18], focus on players competing to deplete another player's resources (pawns, hit points) or to accumulate more resources. Academic research in board-game-playing AI has of course moved beyond most pedestrian board games, applying a diverse set of algorithms for playing card games with millions of card combinations such as Magic: the Gathering (Wizards of the Coast, 1993), games of tactical card placement such as Lords of War (Black Box, 2012) and Carcassonne (Hans im Glück, 2000), card games of team-based competition such as Hanabi (Abacusspiele, 2010) or Codenames (Czech Games Edition, 2015), and many more.
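A Rolling Horizon Evolutionary Algorithm of the general kind used here can be sketched on a toy stochastic game: evolve short action sequences against a noisy forward model, execute only the first action of the best plan, then re-plan. The game, horizon, and operators below are illustrative assumptions, not the paper's Pandemic abstraction:

```python
# Minimal RHEA sketch: evolve action plans, execute one step, repeat.
import random

random.seed(1)

ACTIONS = [-1, 0, 1]            # toy action set
HORIZON = 6                     # length of each evolved plan
POP, GENS, ROLLOUTS = 10, 20, 4

def simulate(state, plan):
    """Noisy forward model; reward is closeness to a goal state of 10."""
    s = state
    for a in plan:
        s += a + random.choice([0, 0, 1])   # stochastic transition
    return -abs(10 - s)

def fitness(state, plan):
    """Average several rollouts to cope with the stochasticity."""
    return sum(simulate(state, plan) for _ in range(ROLLOUTS)) / ROLLOUTS

def mutate(plan):
    child = list(plan)
    child[random.randrange(HORIZON)] = random.choice(ACTIONS)
    return child

def rhea_action(state):
    pop = [[random.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda p: fitness(state, p), reverse=True)
        elite = pop[: POP // 2]                       # keep the best half
        pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]
    best = max(pop, key=lambda p: fitness(state, p))
    return best[0]              # execute only the first action, then re-plan

# Rolling execution: plan, act, observe the (noisy) outcome, re-plan.
state = 0
for _ in range(8):
    state += rhea_action(state) + random.choice([0, 0, 1])
print(state)
```

Re-planning every step is what makes the horizon "rolling": the evolved plan is a disposable lookahead, not a commitment.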
This work introduces Bilinear Classes, a new structural framework that permits generalization in reinforcement learning across a wide variety of settings through the use of function approximation. The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable and, notably, also includes new models, such as the Linear $Q^*/V^*$ model in which both the optimal $Q$-function and the optimal $V$-function are linear in some known feature space. Our main result provides an RL algorithm with polynomial sample complexity for Bilinear Classes; notably, this sample complexity is stated in terms of a reduction to the generalization error of an underlying supervised learning sub-problem. These bounds nearly match the best known sample complexity bounds for existing models. Furthermore, the framework extends to the infinite-dimensional (RKHS) setting: for the Linear $Q^*/V^*$ model, linear MDPs, and linear mixture MDPs, we provide sample complexities that have no explicit dependence on the feature dimension (which could be infinite) and instead depend only on information-theoretic quantities.
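Concretely, the Linear $Q^*/V^*$ condition says there exist known feature maps under which both optimal value functions are linear; in illustrative notation (the symbols below are mine, not necessarily the paper's):

```latex
Q^*(s,a) = \langle \phi(s,a),\, w^* \rangle
\quad\text{and}\quad
V^*(s) = \langle \psi(s),\, v^* \rangle,
```

where $\phi : \mathcal{S}\times\mathcal{A} \to \mathbb{R}^d$ and $\psi : \mathcal{S} \to \mathbb{R}^d$ are known and only the parameters $w^*, v^* \in \mathbb{R}^d$ are unknown. In the RKHS extension, $\mathbb{R}^d$ is replaced by a reproducing kernel Hilbert space, which is why the stated sample complexities cannot depend explicitly on $d$.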
One of the most important lessons from the success of deep learning is that learned representations tend to perform much better at a task than representations we design by hand. Yet evolution-of-evolvability algorithms, which aim to automatically learn good genetic representations, have received relatively little attention, perhaps because of the large amount of computational power they require. The recent Evolvability ES method allows direct selection for evolvability with little extra computation, but it can only be used on problems where evolvability and task performance are aligned. We propose Quality Evolvability ES, a method that simultaneously optimizes for task performance and evolvability, without this restriction. Quality Evolvability has similar motivation to Quality Diversity algorithms, but with some important differences: while Quality Diversity aims to find an archive of diverse, well-performing, but potentially genetically distant individuals, Quality Evolvability aims to find a single individual with a diverse and well-performing distribution of offspring. By doing so, Quality Evolvability is forced to discover more evolvable representations. We demonstrate on robotic locomotion control tasks that Quality Evolvability ES, like Quality Diversity methods, can learn faster than objective-based methods and can handle deceptive problems.
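The idea of scoring an individual by both the fitness and the behavioral spread of its offspring distribution can be sketched with a plain evolution-strategies update. Everything here (the toy task, the 1-D behavior descriptor, the way the two terms are combined) is an illustrative assumption, not the paper's algorithm:

```python
# ES sketch of a Quality-Evolvability-style objective: rate each sampled
# offspring by its own fitness plus how far its behavior lies from the
# offspring mean, then take the standard ES gradient step.
import numpy as np

rng = np.random.default_rng(2)

def task_fitness(theta):
    """Toy task: higher is better near the origin."""
    return -float(np.sum(theta ** 2))

def behavior(theta):
    """Toy 1-D behavior descriptor of an individual."""
    return float(theta[0])

def quality_evolvability_step(theta, sigma=0.1, n=100, lr=0.05):
    eps = rng.normal(size=(n, theta.size))
    offspring = theta + sigma * eps
    fit = np.array([task_fitness(o) for o in offspring])
    beh = np.array([behavior(o) for o in offspring])
    diversity = np.abs(beh - beh.mean())       # spread of offspring behaviors
    score = fit + diversity                    # quality + evolvability
    score = (score - score.mean()) / (score.std() + 1e-8)  # rank-ish normalize
    return theta + lr / (n * sigma) * eps.T @ score        # ES gradient step

theta = rng.normal(size=5)
for _ in range(200):
    theta = quality_evolvability_step(theta)
print(task_fitness(theta))
```

Because the update scores the offspring distribution rather than the parent alone, parameters that produce diverse, well-performing children are favored, which is the sense in which the method selects for evolvable representations.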
I could stridently insist that natural selection is the only way that complex life can evolve, but that's not strictly true. We can already design computers that can learn and reason and--almost--convince an observer that their behavior might be human. It's not unreasonable that in 100 or 200 years, our computer systems will be effectively sentient: human-like robots, similar to Star Trek's Commander Data. Alien civilizations that are considerably more advanced than us are likely already capable of such creations. The possibility--likelihood, even--of such robotic life has implications for our predictions about life on alien planets.