
GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
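To make the priming tactic concrete, here is a minimal sketch of building such a prompt. The prompt wording and the `complete` helper are illustrative assumptions, not the article's exact prompts or any particular API:

```python
# Sketch of "prompt priming": constrain the completion by writing the
# first words of the target output ourselves. `complete` is a hypothetical
# stand-in for any text-completion endpoint, not a real API call.

def build_primed_prompt(passage: str) -> str:
    return (
        "My second grader asked me what this passage means:\n"
        f'"""{passage}"""\n'
        "I rephrased it for him, in plain language a second grader can understand:\n"
        '"'  # the opening quote primes the model to begin the rephrasing itself
    )

prompt = build_primed_prompt("Photosynthesis converts light energy into chemical energy.")
# completion = complete(prompt)  # hypothetical call; append the model's text here
```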


Optimising Game Tactics for Football

arXiv.org Artificial Intelligence

In this paper we present a novel approach to optimising tactical and strategic decision making in football (soccer). We model the game of football as a multi-stage game composed of a Bayesian game, which models the pre-match decisions, and a stochastic game, which models the in-match state transitions and decisions. Using this formulation, we propose a method to predict the probability of game outcomes and the payoffs of team actions. Building upon this, we develop algorithms to optimise team formation and in-game tactics with different objectives. Empirical evaluation of our approach on real-world datasets from 760 matches shows that by using optimised tactics from our Bayesian and stochastic games, we can increase a team's chances of winning by up to 16.1% and 3.4% respectively.
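As a toy illustration of the pre-match (Bayesian) stage, one could choose the formation that maximises expected win probability under a belief about the opponent's formation. All formations, probabilities, and payoffs below are invented for the sketch and are not the paper's data or model:

```python
# Toy sketch of Bayesian pre-match formation choice (illustrative only).
# win_prob[ours][theirs] and the opponent belief are invented numbers.

win_prob = {
    "4-4-2": {"4-3-3": 0.42, "3-5-2": 0.51},
    "4-3-3": {"4-3-3": 0.47, "3-5-2": 0.44},
}
opponent_belief = {"4-3-3": 0.7, "3-5-2": 0.3}  # believed opponent formation distribution

def expected_win(ours: str) -> float:
    # Expected win probability against the believed opponent distribution.
    return sum(p * win_prob[ours][theirs] for theirs, p in opponent_belief.items())

best = max(win_prob, key=expected_win)
print(best, expected_win(best))  # picks "4-3-3" with expected win prob 0.461
```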


Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

arXiv.org Machine Learning

In many real-world settings, a team of agents must coordinate its behaviour while acting in a decentralised fashion. At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a mixing network that estimates joint action-values as a monotonic combination of per-agent values. We structurally enforce that the joint action-value is monotonic in the per-agent values through the use of non-negative weights in the mixing network, which guarantees consistency between the centralised and decentralised policies. To evaluate the performance of QMIX, we propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning. We evaluate QMIX on a challenging set of SMAC scenarios and show that it significantly outperforms existing multi-agent reinforcement learning methods.
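A minimal sketch of the monotonic mixing idea, assuming PyTorch: hypernetworks map the global state to mixing weights whose absolute value is taken, so the estimated joint action-value is non-decreasing in every per-agent value. This is a simplified illustration rather than the paper's full architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    # Sketch of a QMIX-style mixer: mixing weights are forced non-negative,
    # making q_tot monotonic in each agent's value.
    def __init__(self, n_agents: int, state_dim: int, hidden: int = 32):
        super().__init__()
        self.hidden = hidden
        self.w1 = nn.Linear(state_dim, n_agents * hidden)  # hypernetwork: layer-1 weights
        self.b1 = nn.Linear(state_dim, hidden)
        self.w2 = nn.Linear(state_dim, hidden)              # hypernetwork: layer-2 weights
        self.b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = self.w1(state).abs().view(b, -1, self.hidden)  # non-negative weights
        h = F.elu(torch.bmm(agent_qs.unsqueeze(1), w1) + self.b1(state).unsqueeze(1))
        w2 = self.w2(state).abs().view(b, self.hidden, 1)   # non-negative weights
        return torch.bmm(h, w2).squeeze(1) + self.b2(state)  # (batch, 1) joint action-value
```

Because every weight multiplying an agent's value is non-negative, maximising the joint value decomposes into each agent maximising its own value, which is the centralised/decentralised consistency the abstract refers to.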


30 AI people in Europe to follow on Twitter (Sifted)

#artificialintelligence

It feels like this man needs no introduction, but for anyone who doesn't know who Demis Hassabis is, here's the lowdown. He's the co-founder and chief executive of the London-headquartered DeepMind AI lab, which was acquired by Google in 2014 for £400m. Prior to DeepMind, Hassabis had his own computer games company called Elixir Studios, but his passion for games goes way back: he was a chess master at the age of 13 and at one time the second-highest-rated under-14 player in the world. Catherine Breslin is a machine learning scientist and consultant based in Cambridge.


Deep Learning Is Making Video Game Characters Move Like Real People

#artificialintelligence

Computer scientists from the University of Edinburgh in the U.K. and Adobe Research have come up with a novel solution to the problem of making the movements of video game characters look natural, using deep learning neural networks to help digital characters move more realistically. The team trained a neural network on a database of motions performed by a live actor on a soundstage, which they recorded and digitized. The network can adapt what it learned from the database to most scenarios or settings, so characters move in natural-looking ways. The network fills the gaps between a digital character's various poses and motions, intelligently and seamlessly stitching these elements together into a whole.


D-Point Trigonometric Path Planning based on Q-Learning in Uncertain Environments

arXiv.org Artificial Intelligence

Finding the optimal path for a robot moving from a start to a goal position through obstacles is still a challenging problem. This paper presents a novel path planning method, named D-point trigonometric, based on the Q-learning algorithm for dynamic and uncertain environments in which all the obstacles and the target are moving. We define new state, action, and reward functions for the Q-learning by which the agent can find the best action in every state to reach the goal along the most appropriate path. Moreover, experiments in Unity3D confirmed the high convergence speed, the high hit rate, and the low dependency on environmental parameters of the proposed method compared with a competing approach. Path planning has been considered a challenging concern in video games [1], transportation systems [2], and mobile robots [3][4]. Among the most important path planning issues are the dynamics and uncertainty of the environment, the smoothness and length of the path, obstacle avoidance, and the computational cost. In the last few decades, researchers have made numerous efforts to present new approaches to solve them [5][6][7][8]. Generally, most path planning approaches fall into one of the following categories [9][10][11]: (1) classical methods, comprising (a) computational geometry (CG), (b) probabilistic roadmaps (PRM), and (c) the potential fields method (PFM); and (2) heuristic and metaheuristic methods, comprising (a) soft computing and (b) hybrid algorithms. Since the complexity and execution time of CG methods were high [11], PRMs were proposed to reduce the search space using techniques like milestones [12].
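For readers unfamiliar with the underlying learner, a generic tabular Q-learning loop looks like the sketch below; the state encoding, actions, and reward here are placeholders, not the paper's D-point trigonometric definitions:

```python
import random
from collections import defaultdict

# Generic tabular Q-learning sketch (not the paper's D-point formulation).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = ["up", "down", "left", "right"]
Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    # Epsilon-greedy exploration over the action set.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning bootstrap: move toward reward + discounted best next value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```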


Learning-Based Video Game Development in MLP@UoM: An Overview

arXiv.org Artificial Intelligence

In general, video games not only prevail in entertainment but have also become an alternative methodology for knowledge learning in education, skill acquisition in vocational/military training, and assistance for medical treatment and health care in medicine. On the other hand, video games also provide an ideal test bed for AI research. To a large extent, however, video game development is still a laborious yet costly process, and there are many technical challenges ranging from game generation to intelligent agent creation. Unlike traditional methodologies, in the Machine Learning and Perception Lab at the University of Manchester (MLP@UoM), we advocate applying machine learning to different tasks in video game development to address several challenges systematically. In this paper, we overview the main progress made in MLP@UoM recently and offer an outlook on the future research directions in learning-based video game development arising from our work. INTRODUCTION: The video games industry has grown drastically since its inception and even surpassed the size of the film industry in 2004. Nowadays, the global revenue of the video game industry continues to rise, and the widespread availability of high-end graphics hardware has resulted in a demand for more complex video games. This in turn has increased the complexity of game development. From an academic perspective, video games also provide an ideal test bed, allowing research into automatic video game development and the testing of new AI algorithms in a complex yet well-structured environment with ground truth.


Topic Modeling with Wasserstein Autoencoders

arXiv.org Artificial Intelligence

We propose a novel neural topic model in the Wasserstein autoencoder (WAE) framework. Unlike existing variational autoencoder based models, we directly enforce a Dirichlet prior on the latent document-topic vectors. We exploit the structure of the latent space and apply a suitable kernel in minimizing the Maximum Mean Discrepancy (MMD) to perform distribution matching. We discover that MMD performs much better than the Generative Adversarial Network (GAN) in matching the high-dimensional Dirichlet distribution. We further discover that incorporating randomness in the encoder output during training leads to significantly more coherent topics. To measure the diversity of the produced topics, we propose a simple topic uniqueness metric. Together with the widely used coherence measure NPMI, we offer a more holistic evaluation of topic quality. Experiments on several real datasets show that our model produces significantly better topics than existing topic models.
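The distribution-matching step can be illustrated with a short sketch: draw samples from the Dirichlet prior, take the encoder's outputs, and penalise the (biased) MMD estimate between the two under some kernel. The RBF kernel below is a placeholder assumption; the paper applies a kernel suited to the Dirichlet latent space:

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd(encoded, prior_samples, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy.
    k_xx = rbf_kernel(encoded, encoded, sigma).mean()
    k_yy = rbf_kernel(prior_samples, prior_samples, sigma).mean()
    k_xy = rbf_kernel(encoded, prior_samples, sigma).mean()
    return k_xx + k_yy - 2 * k_xy

# Example: match encoder outputs to a Dirichlet prior over K topics.
K, batch = 10, 64
prior = torch.distributions.Dirichlet(torch.full((K,), 0.1)).sample((batch,))
encoded = torch.softmax(torch.randn(batch, K), dim=-1)  # stand-in encoder output
loss_match = mmd(encoded, prior)  # added to the reconstruction loss during training
```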



Design of Artificial Intelligence Agents for Games using Deep Reinforcement Learning

arXiv.org Artificial Intelligence

In order to perform a large variety of tasks and to achieve human-level performance in complex real-world environments, Artificial Intelligence (AI) agents must be able to learn from their past experiences and gain both knowledge and an accurate representation of their environment from raw sensory inputs. Traditionally, AI agents have suffered from difficulties in using only sensory inputs to obtain a good representation of their environment and then mapping this representation to an efficient control policy. Deep reinforcement learning algorithms have provided a solution to this issue. In this study, the performance of different conventional and novel deep reinforcement learning algorithms was analysed. The proposed method utilises two types of algorithms, one trained with a variant of Q-learning (DQN) and another trained with SARSA learning (DSN), to assess the feasibility of using direct feedback alignment, a novel biologically plausible method for back-propagating the error. These novel agents, alongside two similar agents trained with the conventional backpropagation algorithm, were tested using the OpenAI Gym toolkit on several classic control theory problems and Atari 2600 video games. The results of this investigation open the way to new, biologically inspired deep reinforcement learning algorithms and their implementation on neuromorphic hardware.
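The distinction between the two agent types lies in the bootstrap target: the Q-learning variant (DQN) bootstraps from the greedy next action, while the SARSA variant (DSN) bootstraps from the next action actually taken. A minimal sketch, assuming PyTorch and invented example tensors:

```python
import torch

GAMMA = 0.99  # discount factor (illustrative value)

def q_learning_target(reward, next_q, done):
    # DQN-style target: bootstrap from the greedy (max-value) next action.
    return reward + GAMMA * (1 - done) * next_q.max(dim=-1).values

def sarsa_target(reward, next_q, next_action, done):
    # SARSA-style target: bootstrap from the action actually taken next.
    chosen = next_q.gather(-1, next_action.unsqueeze(-1)).squeeze(-1)
    return reward + GAMMA * (1 - done) * chosen

# Illustrative batch of 4 transitions over 3 actions; all values are invented.
reward = torch.tensor([1.0, 0.0, 0.5, 0.0])
done = torch.tensor([0.0, 0.0, 1.0, 0.0])
next_q = torch.randn(4, 3)                   # stand-in network outputs
next_action = torch.tensor([0, 2, 1, 1])
print(q_learning_target(reward, next_q, done))
print(sarsa_target(reward, next_q, next_action, done))
```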