Collaborating Authors


Artificial Intelligence in the Creative Industries: A Review Artificial Intelligence

This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs) and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups related to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post-production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the 'creator', remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human centric -- where it is designed to augment, rather than replace, human creativity.

GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
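The tactic described above -- constraining the model by priming the first words of the target output -- can be sketched as a simple prompt-building step. This is an illustrative sketch only; the passage text and the primed opening are hypothetical placeholders, not from the original article, and no model API is called here.

```python
# Sketch of the priming tactic: frame the task, then supply the opening
# words of the desired completion so the model stays in summarization mode.

def build_summarization_prompt(passage: str, primed_start: str) -> str:
    """Pose the passage as a second grader's question, then prime the
    answer's first words; the model is expected to continue from them."""
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        "I rephrased it for him, in plain language a second grader can understand:\n\n"
        f'"{primed_start}'
    )

prompt = build_summarization_prompt(
    "The mitochondria is the powerhouse of the cell.",
    "It means that",
)
```

The prompt deliberately ends mid-quotation: the model's completion picks up right after the primed words, which is what keeps it from pivoting into other modes.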

Model-based Reinforcement Learning: A Survey Artificial Intelligence

Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is a key challenge in artificial intelligence. Two key approaches to this problem are reinforcement learning (RL) and planning. This paper presents a survey of the integration of both fields, better known as model-based reinforcement learning. Model-based RL has two main steps. First, we systematically cover approaches to dynamics model learning, including challenges like dealing with stochasticity, uncertainty, partial observability, and temporal abstraction. Second, we present a systematic categorization of planning-learning integration, including aspects like: where to start planning, what budgets to allocate to planning and real data collection, how to plan, and how to integrate planning in the learning and acting loop. After these two key sections, we also discuss the potential benefits of model-based RL, like enhanced data efficiency, targeted exploration, and improved stability. Throughout the survey, we also draw connections to several related RL fields, like hierarchical RL and transfer, and other research disciplines, like behavioural psychology. Altogether, the survey presents a broad conceptual overview of planning-learning combinations for MDP optimization.
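The two main steps named in the abstract -- learn a dynamics model, then plan with it -- can be sketched in a tiny tabular setting. This is a generic textbook illustration, not an algorithm from the survey: the model is estimated from transition counts, and planning is plain value iteration on the learned model.

```python
import numpy as np

# Step (1): estimate a dynamics model from observed (s, a, r, s') transitions.
n_states, n_actions, gamma = 3, 2, 0.9
counts = np.zeros((n_states, n_actions, n_states))
rewards = np.zeros((n_states, n_actions))

transitions = [(0, 0, 0.0, 1), (1, 0, 0.0, 2), (2, 1, 1.0, 2), (0, 1, 0.0, 0)]
for s, a, r, s2 in transitions:
    counts[s, a, s2] += 1
    rewards[s, a] = r

# Empirical P(s' | s, a); unvisited pairs stay all-zero.
P = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)

# Step (2): plan on the learned model with value iteration.
V = np.zeros(n_states)
for _ in range(100):
    V = (rewards + gamma * P @ V).max(axis=1)
```

State 2 loops onto itself with reward 1, so its value approaches 1/(1-gamma) = 10; upstream states inherit discounted shares of it, which is exactly the "plan on the learned model" loop the survey categorizes.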

Optimising Game Tactics for Football Artificial Intelligence

In this paper we present a novel approach to optimise tactical and strategic decision making in football (soccer). We model the game of football as a multi-stage game, made up of a Bayesian game to model the pre-match decisions and a stochastic game to model the in-match state transitions and decisions. Using this formulation, we propose a method to predict the probability of game outcomes and the payoffs of team actions. Building upon this, we develop algorithms to optimise team formation and in-game tactics with different objectives. Empirical evaluation of our approach on real-world datasets from 760 matches shows that by using optimised tactics from our Bayesian and stochastic games, we can increase a team's chances of winning by up to 16.1% and 3.4% respectively.
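The pre-match Bayesian-game idea can be illustrated with a toy expected-payoff calculation. The formations, belief distribution, and payoff numbers below are invented for illustration and are not from the paper: we pick the formation that maximizes expected win probability under a belief over the opponent's formation.

```python
import numpy as np

# Toy pre-match decision: belief over the opponent's formation, plus a
# payoff matrix payoff[i, j] = our win probability with formation i
# against opponent formation j (all numbers hypothetical).
formations = ["4-4-2", "4-3-3", "3-5-2"]
opponent_belief = np.array([0.5, 0.3, 0.2])
payoff = np.array([
    [0.40, 0.35, 0.50],
    [0.55, 0.45, 0.40],
    [0.45, 0.50, 0.30],
])

expected = payoff @ opponent_belief          # expected payoff per formation
best = formations[int(np.argmax(expected))]  # Bayes-optimal pre-match choice
```

In-match tactics would then be handled by the stochastic-game layer, re-optimising as the match state evolves; this sketch covers only the pre-match stage.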

Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning Machine Learning

In many real-world settings, a team of agents must coordinate its behaviour while acting in a decentralised fashion. At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a mixing network that estimates joint action-values as a monotonic combination of per-agent values. We structurally enforce that the joint-action value is monotonic in the per-agent values, through the use of non-negative weights in the mixing network, which guarantees consistency between the centralised and decentralised policies. To evaluate the performance of QMIX, we propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning. We evaluate QMIX on a challenging set of SMAC scenarios and show that it significantly outperforms existing multi-agent reinforcement learning methods.
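The structural constraint described above -- non-negative mixing weights guaranteeing that the joint value is monotonic in each agent's value -- can be shown in a few lines. This is a stripped-down sketch, not the paper's full architecture: QMIX generates the mixing weights from the global state via hypernetworks, whereas here they are fixed, and a ReLU stands in for the paper's activation (any monotone non-decreasing activation preserves the guarantee).

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, hidden = 3, 4

# Non-negativity enforced with np.abs, as QMIX does via an absolute-value
# operation on hypernetwork outputs (weights fixed here for illustration).
W1 = np.abs(rng.normal(size=(n_agents, hidden)))
b1 = rng.normal(size=hidden)
W2 = np.abs(rng.normal(size=(hidden, 1)))
b2 = rng.normal(size=1)

def mix(agent_qs: np.ndarray) -> float:
    """Monotonic mixing: non-negative weights + monotone activation."""
    h = np.maximum(agent_qs @ W1 + b1, 0.0)
    return float(h @ W2 + b2)

q = np.array([1.0, 2.0, 0.5])
q_up = q + np.array([0.1, 0.0, 0.0])  # raise one agent's value
assert mix(q_up) >= mix(q)            # joint value cannot decrease
```

Monotonicity is what makes decentralised execution consistent: each agent can greedily maximise its own value, and the induced joint action also maximises the mixed joint value.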

30 AI people in Europe to follow on Twitter Sifted


It feels like this man needs no introduction, but for anyone who doesn't know who Demis Hassabis is, here's the lowdown. He's the cofounder and chief executive of the London-headquartered DeepMind AI lab, which was acquired by Google in 2014 for £400m. Prior to DeepMind, Hassabis had his own computer games company called Elixir Studios, but his passion for games goes way back. He was a chess master at the age of 13 and, at one time, the second-highest-rated under-14 player in the world. Catherine Breslin is a machine learning scientist and consultant based in Cambridge.

Deep Learning Is Making Video Game Characters Move Like Real People


Computer scientists at the University of Edinburgh in the U.K. and Adobe Research have come up with a novel solution to the problem of making the movements of video game characters look natural, using deep learning neural networks to help digital characters move more realistically. The team trained a neural network on a database of motions by a live performer on a soundstage, which they recorded and digitized. The network can adapt what it learned from the database to most scenarios or settings, so characters move in natural-looking ways. The network fills the gaps between a digital character's various poses and motions, intelligently and seamlessly stitching these elements together into a whole.

D-Point Trigonometric Path Planning based on Q-Learning in Uncertain Environments Artificial Intelligence

Finding the optimum path for a robot moving from the start to the goal position through obstacles is still a challenging issue. This paper presents a novel path planning method, named D-point trigonometric, based on the Q-learning algorithm for dynamic and uncertain environments, in which all the obstacles and the target are moving. We define new state, action and reward functions for the Q-learning by which the agent can find the best action in every state to reach the goal along the most appropriate path. Moreover, experiments in Unity3D confirmed the high convergence speed, the high hit rate, and the low dependency on environmental parameters of the proposed method compared with a competing approach. Path planning has been considered a challenging concern in video games [1], transportation systems [2], and mobile robots [3][4]. Among the most important path planning issues we can cite the dynamics and uncertainty of the environment, the smoothness and length of the path, obstacle avoidance, and the computational cost. In the last few decades, researchers have made numerous efforts to present new approaches to solve them [5][6][7][8]. Generally, most path planning approaches fall into one of the following categories [9][10][11]: (1) classical methods, comprising (a) computational geometry (CG), (b) probabilistic roadmaps (PRM) and (c) the potential fields method (PFM); and (2) heuristic and metaheuristic methods, comprising (a) soft computing and (b) hybrid algorithms. Since the complexity and the execution time of CG methods were high [11], PRMs were proposed to reduce the search space using techniques like milestones [12].
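The Q-learning core that the paper builds on can be sketched in tabular form. This is the generic textbook update rule, not the paper's D-point trigonometric state/action/reward design: a toy agent on a 1-D line of five cells learns to walk right toward a goal cell.

```python
import random

random.seed(0)
n_cells, actions = 5, [-1, +1]   # move left / move right on a 1-D line
alpha, gamma, eps = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(n_cells) for a in actions}

for _ in range(300):             # episodes: start at cell 0, goal at cell 4
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_cells - 1)
        r = 1.0 if s2 == 4 else -0.01   # goal reward, small step penalty
        # Q-learning update: bootstrap from the best next-state value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

greedy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(4)]
```

After training, the greedy policy moves right from every non-goal cell. The paper's contribution lies in redefining the states, actions and rewards of this loop so it copes with moving obstacles and a moving target.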

Learning-Based Video Game Development in MLP@UoM: An Overview Artificial Intelligence

Ke Chen, Senior Member, IEEE, Department of Computer Science, The University of Manchester, Manchester M13 9PL, U.K. Email: Abstract -- In general, video games not only prevail in entertainment but have also become an alternative methodology for knowledge learning, skill acquisition and assistance for medical treatment as well as health care in education, vocational/military training and medicine. On the other hand, video games also provide an ideal test bed for AI research. To a large extent, however, video game development is still a laborious yet costly process, and there are many technical challenges ranging from game generation to intelligent agent creation. Unlike traditional methodologies, in the Machine Learning and Perception Lab at the University of Manchester (MLP@UoM), we advocate applying machine learning to different tasks in video game development to address several challenges systematically. In this paper, we overview the main progress made in MLP@UoM recently and offer an outlook on future research directions in learning-based video game development arising from our work. INTRODUCTION. The video games industry has grown drastically since its inception and even surpassed the size of the film industry in 2004. Nowadays, the global revenue of the video games industry continues to rise, and the widespread availability of high-end graphics hardware has resulted in demand for more complex video games. This in turn has increased the complexity of game development.
From an academic perspective, video games also provide an ideal test bed, which allows for researching into automatic video game development and testing new AI algorithms in such a complex yet well-structured environment with ground-truth.

Topic Modeling with Wasserstein Autoencoders Artificial Intelligence

We propose a novel neural topic model in the Wasserstein autoencoder (WAE) framework. Unlike existing variational autoencoder based models, we directly enforce a Dirichlet prior on the latent document-topic vectors. We exploit the structure of the latent space and apply a suitable kernel in minimizing the Maximum Mean Discrepancy (MMD) to perform distribution matching. We discover that MMD performs much better than the Generative Adversarial Network (GAN) in matching the high-dimensional Dirichlet distribution. We further discover that incorporating randomness in the encoder output during training leads to significantly more coherent topics. To measure the diversity of the produced topics, we propose a simple topic uniqueness metric. Together with the widely used coherence measure NPMI, we offer a more holistic evaluation of topic quality. Experiments on several real datasets show that our model produces significantly better topics than existing topic models.
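The distribution-matching ingredient mentioned above can be sketched with a biased RBF-kernel MMD estimator. This is illustrative only: the paper applies a kernel suited to the Dirichlet simplex, whereas the Gaussian data and RBF kernel below are generic stand-ins.

```python
import numpy as np

def rbf_mmd2(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimate: MMD^2 = mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return float(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. two from shifted distributions.
same = rbf_mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = rbf_mmd2(rng.normal(size=(200, 2)), rng.normal(loc=2.0, size=(200, 2)))
```

During WAE training, a term of this form penalizes the mismatch between the aggregated encoder output and samples from the prior, which is how the Dirichlet prior is enforced without an adversarial discriminator.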