Differentiable Programming of Reaction-Diffusion Patterns

arXiv.org Artificial Intelligence

Reaction-Diffusion (RD) systems provide a computational framework that governs many pattern formation processes in nature. Current RD system design practices boil down to trial-and-error parameter search. We propose a differentiable optimization method for learning the RD system parameters to perform example-based texture synthesis on a 2D plane. We do this by representing the RD system as a variant of Neural Cellular Automata and using task-specific differentiable loss functions. RD systems generated by our method exhibit robust, non-trivial 'life-like' behavior.
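
The abstract does not specify the exact parameterisation, so the following is only a minimal sketch, assuming the classic Gray-Scott model and PyTorch: because every update is differentiable, the scalar RD parameters can be tuned by gradient descent against any texture loss. The parameter values and the mean-intensity loss below are placeholders, not the paper's task-specific losses.

# Minimal sketch (not the paper's architecture): a differentiable Gray-Scott
# reaction-diffusion step whose parameters can be optimised by gradient descent.
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel used for the diffusion term.
LAPLACIAN = torch.tensor([[0.05, 0.2, 0.05],
                          [0.2, -1.0, 0.2],
                          [0.05, 0.2, 0.05]]).view(1, 1, 3, 3)

def gray_scott_step(u, v, Du, Dv, feed, kill, dt=1.0):
    """One explicit-Euler update; u, v are (1, 1, H, W) concentration grids."""
    lap_u = F.conv2d(u, LAPLACIAN, padding=1)
    lap_v = F.conv2d(v, LAPLACIAN, padding=1)
    uvv = u * v * v
    u = u + dt * (Du * lap_u - uvv + feed * (1.0 - u))
    v = v + dt * (Dv * lap_v + uvv - (feed + kill) * v)
    return u, v

# Toy optimisation loop: roll the system out and nudge its parameters toward
# a target statistic (a stand-in for a real texture loss).
u = torch.ones(1, 1, 64, 64)
v = torch.zeros(1, 1, 64, 64)
v[..., 28:36, 28:36] = 0.5                     # seed a small square
params = {name: torch.tensor(val, requires_grad=True)
          for name, val in [("Du", 0.16), ("Dv", 0.08),
                            ("feed", 0.035), ("kill", 0.06)]}
opt = torch.optim.Adam(params.values(), lr=1e-3)

for _ in range(10):                            # a real run needs far more steps
    opt.zero_grad()
    uu, vv = u, v
    for _ in range(50):
        uu, vv = gray_scott_step(uu, vv, **params)
    loss = (vv.mean() - 0.25) ** 2             # placeholder texture loss
    loss.backward()
    opt.step()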


Clone Swarms: Learning to Predict and Control Multi-Robot Systems by Imitation

arXiv.org Artificial Intelligence

In this paper, we propose SwarmNet, a neural network architecture that can learn to predict and imitate the behavior of an observed swarm of agents in a centralized manner. Tested on artificially generated swarm motion data, the network achieves high levels of prediction accuracy and imitation authenticity. We compare our model to previous approaches for modelling interaction systems and show how modifying components of other models gradually approaches the performance of ours. Finally, we also discuss an extension of SwarmNet that can deal with nondeterministic, noisy, and uncertain environments, as often found in robotics applications.

Multi-Robot Systems (MRS) [1] describe groups of robotic agents that collectively perform complex tasks in a distributed and parallel manner through repeated interactions with each other and with the environment. Such systems have attracted considerable attention in recent years, with remarkable successes in a number of application domains, including defense, agriculture, logistics, disaster management, and entertainment. In particular, today's fast-paced online economy is largely fuelled by tens of thousands of warehouse robots that transport millions of items across fulfillment centers all over the world. Despite this progress, programming groups of robots to perform a joint task is still considered a complex, time-consuming, and extremely challenging endeavour. One prominent formalism for the specification of MRS is based on the identification of cost functions [2] governing the group behavior. However, this approach is not intuitive and requires a deep understanding of complex theoretical concepts across a number of mathematical fields, e.g., graph theory, manifold theory, and nonlinear optimization. In addition, the real-world ramifications of even small changes in a given cost function are extremely difficult to foresee.
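
SwarmNet's architecture is not detailed here, so the sketch below only illustrates the general idea of a centralized next-state predictor that aggregates pairwise interaction features across all agents (an interaction-network-style assumption on our part); PairwisePredictor and its dimensions are illustrative names, not the paper's.

# Minimal sketch (assumed architecture, not the paper's SwarmNet): predict each
# agent's next state from its own state plus aggregated pairwise messages.
import torch
import torch.nn as nn

class PairwisePredictor(nn.Module):
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        # Edge model: embeds each (sender, receiver) state pair.
        self.edge = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden))
        # Node model: maps a state plus aggregated messages to a state delta.
        self.node = nn.Sequential(nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, state_dim))

    def forward(self, states):
        # states: (N, state_dim), e.g. position and velocity of N agents.
        n = states.shape[0]
        senders = states.unsqueeze(0).expand(n, n, -1)      # senders[i, j] = agent j
        receivers = states.unsqueeze(1).expand(n, n, -1)    # receivers[i, j] = agent i
        messages = self.edge(torch.cat([senders, receivers], dim=-1))
        agg = messages.sum(dim=1)                            # sum over senders per receiver
        return states + self.node(torch.cat([states, agg], dim=-1))

# Usage: train on (state_t, state_{t+1}) pairs from recorded swarm trajectories.
model = PairwisePredictor()
next_states = model(torch.randn(8, 4))    # 8 agents, 4-D state each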


Generalization over different cellular automata rules learned by a deep feed-forward neural network

arXiv.org Artificial Intelligence

To test the generalization ability of a class of deep neural networks, we randomly generate a large number of different rule sets for 2-D cellular automata (CA), based on John Conway's Game of Life. Using these rules, we compute several trajectories for each CA instance. A deep convolutional encoder-decoder network with short- and long-range skip connections is trained on the generated CA trajectories to predict the next CA state given its previous states. Results show that the network is able to learn the rules of various complex cellular automata and generalize to unseen configurations. To some extent, the network also generalizes to rule sets and neighborhood sizes that were not seen at all during training.
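
The abstract does not say how the rule sets are sampled, so the following is a minimal sketch of one plausible data pipeline, assuming outer-totalistic "Life-like" rules (birth/survival sets over the Moore neighbourhood). The resulting (state, next state) pairs are the kind of supervised examples an encoder-decoder network would be trained on.

# Minimal sketch (assumed data pipeline, not the paper's exact setup): sample a
# random Life-like rule and roll out trajectories as next-state prediction data.
import numpy as np

rng = np.random.default_rng(0)

def random_rule():
    """Return random birth/survival neighbour counts; Game of Life is B{3}/S{2,3}."""
    birth = set(rng.choice(9, size=rng.integers(1, 4), replace=False))
    survive = set(rng.choice(9, size=rng.integers(1, 4), replace=False))
    return birth, survive

def step(grid, birth, survive):
    """One CA update on a toroidal grid using the 8-cell Moore neighbourhood."""
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(neighbours, list(birth))
    stays = (grid == 1) & np.isin(neighbours, list(survive))
    return (born | stays).astype(np.uint8)

# Build (state, next_state) training pairs for one randomly drawn rule.
birth, survive = random_rule()
grid = (rng.random((32, 32)) < 0.3).astype(np.uint8)
pairs = []
for _ in range(20):
    nxt = step(grid, birth, survive)
    pairs.append((grid, nxt))
    grid = nxt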


A Local Approach to Forward Model Learning: Results on the Game of Life Game

arXiv.org Artificial Intelligence

This paper investigates the effect of learning a forward model on the performance of a statistical forward planning agent. We transform Conway's Game of Life simulation into a single-player game where the objective can be either to preserve as much life as possible or to extinguish all life as quickly as possible. In order to learn the forward model of the game, we formulate the problem in a novel way that learns the local cell transition function by creating a set of supervised training data and predicting the next state of each cell in the grid based on its current state and immediate neighbours. Using this method we are able to harvest sufficient data to learn perfect forward models by observing only a few complete state transitions, using either a look-up table, a decision tree or a neural network. In contrast, learning the complete state transition function is a much harder task and our initial efforts to do this using deep convolutional auto-encoders were less successful. We also investigate the effects of imperfect learned models on prediction errors and game-playing performance, and show that even models with significant errors can provide good performance.
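
As a rough illustration of the local formulation described above, the sketch below harvests one (3x3 neighbourhood, next cell state) training row per cell from an observed transition and fits a decision tree; the function names, grid size, and reference Life step are illustrative, not taken from the paper.

# Minimal sketch of the local-model idea: learn the per-cell transition function
# from observed transitions instead of the full grid-to-grid mapping.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def life_step(grid):
    """Reference Game of Life step on a toroidal grid (used only to generate data)."""
    nbrs = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)

def harvest(grid, nxt):
    """Turn one observed transition into one training row per cell."""
    X, y = [], []
    h, w = grid.shape
    for i in range(h):
        for j in range(w):
            patch = [grid[(i + dy) % h, (j + dx) % w]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            X.append(patch)            # 9 features: the cell and its 8 neighbours
            y.append(nxt[i, j])        # target: that cell's next state
    return np.array(X), np.array(y)

# Even a single transition of a reasonably sized grid yields hundreds of local
# examples, which is why few complete transitions suffice for a perfect model.
rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) < 0.4).astype(np.uint8)
X, y = harvest(grid, life_step(grid))
model = DecisionTreeClassifier().fit(X, y)
print("training accuracy:", model.score(X, y))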


MPLP: Learning a Message Passing Learning Protocol

arXiv.org Machine Learning

We present a novel method for learning the weights of an artificial neural network, which we call a Message Passing Learning Protocol (MPLP). In MPLP, we abstract every operation occurring in an ANN as an independent agent. Each agent is responsible for ingesting incoming multidimensional messages from other agents, updating its internal state, and generating multidimensional messages to be passed on to neighbouring agents. We demonstrate the viability of MPLP, as opposed to traditional gradient-based approaches, on simple feed-forward neural networks, and present a framework capable of generalizing to non-traditional neural network architectures. MPLP is meta-learned using end-to-end gradient-based meta-optimisation. We further discuss the observed properties of MPLP and hypothesize about its applicability to various fields of deep learning.
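
The abstract leaves the concrete protocol open, so the following is only a conceptual sketch of the agent abstraction it describes: an internal state plus a shared update network that consumes incoming messages and emits outgoing ones. In the actual MPLP this update rule would itself be meta-learned end to end rather than hand-written; the names and dimensions below are placeholders.

# Conceptual sketch only (not the paper's protocol): every agent applies a shared
# update that maps (internal state, incoming message) -> (new state, outgoing message).
import torch
import torch.nn as nn

MSG_DIM, STATE_DIM = 8, 16

class AgentUpdate(nn.Module):
    """Shared update rule applied by every agent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + MSG_DIM, 32), nn.ReLU(),
                                 nn.Linear(32, STATE_DIM + MSG_DIM))

    def forward(self, state, incoming):
        out = self.net(torch.cat([state, incoming], dim=-1))
        new_state, outgoing = out[..., :STATE_DIM], out[..., STATE_DIM:]
        return new_state, outgoing

# A chain of agents passing messages left to right; states persist across calls.
update = AgentUpdate()
states = [torch.zeros(STATE_DIM) for _ in range(4)]
message = torch.randn(MSG_DIM)            # external input to the first agent
for i in range(len(states)):
    states[i], message = update(states[i], message)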