Artificial intelligence beats eight world champions at bridge

The Guardian

An artificial intelligence has beaten eight world champions at bridge, a game in which human supremacy has resisted the march of the machines until now. The victory represents a new milestone for AI because in bridge players work with incomplete information and must react to the behaviour of several other players – a scenario far closer to human decision-making. By contrast, in chess and Go – both games in which AIs have already beaten human champions – a player has a single opponent at a time and both are in possession of all the information. "What we've seen represents a fundamentally important advance in the state of artificial intelligence systems," said Stephen Muggleton, a professor of machine learning at Imperial College London. French startup NukkAI announced the news of its AI's victory on Friday, at the end of a two-day tournament in Paris.

Working in Artificial Intelligence and Machine Learning at Electronic Arts and Bioware Presentation, March 25, 2022 (University of Alberta)


He has been involved in many areas of AI and ML at EA, particularly AI for game development and verification. He started out in game development but is now on the AI support team, which supports all of the company's teams.

Search in Imperfect Information Games Artificial Intelligence

From the very dawn of the field, search with value functions was a fundamental concept of computer games research. Turing's chess algorithm from 1950 was able to think two moves ahead, and Shannon's work on chess from 1950 includes an extensive section on evaluation functions to be used within a search. Samuel's checkers program from 1959 already combines search and value functions that are learned through self-play and bootstrapping. TD-Gammon improves upon those ideas and uses neural networks to learn those complex value functions -- only to be again used within search. The combination of decision-time search and value functions has been present in the remarkable milestones where computers bested their human counterparts in long-standing challenging games -- Deep Blue for chess and AlphaGo for Go. Until recently, this powerful framework of search aided by (learned) value functions has been limited to perfect information games. As many interesting problems do not provide the agent with perfect information about the environment, this was an unfortunate limitation. This thesis introduces the reader to sound search for imperfect information games.
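The combination described above -- shallow lookahead with a heuristic value function at the leaves -- can be sketched as a depth-limited negamax. The toy take-away game and all function names below are illustrative assumptions, not a reconstruction of any historical program.

```python
# A minimal sketch of depth-limited search with a heuristic value function,
# in the spirit of Turing's two-ply chess routine (toy game, illustrative only).

def negamax(state, depth, value_fn, moves_fn, apply_fn):
    """Negamax value of `state` for the player to move, cutting off
    at `depth` and scoring leaves with the heuristic `value_fn`."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return value_fn(state)
    return max(-negamax(apply_fn(state, m), depth - 1,
                        value_fn, moves_fn, apply_fn)
               for m in moves)

def best_move(state, depth, value_fn, moves_fn, apply_fn):
    """Pick the move whose resulting position scores best for the mover."""
    return max(moves_fn(state),
               key=lambda m: -negamax(apply_fn(state, m), depth - 1,
                                      value_fn, moves_fn, apply_fn))

# Toy game: a pile of tokens, take 1-3 per turn, taking the last token wins.
moves_fn = lambda pile: [m for m in (1, 2, 3) if m <= pile]
apply_fn = lambda pile, m: pile - m
# Heuristic: an empty pile means the player to move has just lost.
value_fn = lambda pile: -1.0 if pile == 0 else 0.0
```

Even two plies of lookahead are enough for this program to take all 3 tokens from a pile of 3 (an immediate win) and to leave a pile of 4 when starting from 5.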

Playing With, and Against, Computers

Communications of the ACM

Games have long been a fertile testing ground for the artificial intelligence community, and not just because of their accessibility to the popular imagination. Games also enable researchers to simulate different models of human intelligence, and to quantify performance. No surprise, then, that the 2016 victory of DeepMind's AlphaGo algorithm--developed by 2019 ACM Prize in Computing recipient David Silver, who leads the company's Reinforcement Learning Research Group--over world Go champion Lee Sedol generated excitement both within and outside of the computing community. As it turned out, that victory was only the beginning; subsequent iterations of the algorithm have been able to learn without any human data or prior knowledge except the rules of the game and, eventually, without even knowing the rules. Here, Silver talks about how the work evolved and what it means for the future of general-purpose AI.

Predicting Human Card Selection in Magic: The Gathering with Contextual Preference Ranking Artificial Intelligence

Drafting, i.e., the selection of a subset of items from a larger candidate set, is a key element of many games and related problems. It encompasses team formation in sports or e-sports, as well as deck selection in many modern card games. The key difficulty of drafting is that it is typically not sufficient to simply evaluate each item in a vacuum and to select the best items. The evaluation of an item depends on the context of the set of items that were already selected earlier, as the value of a set is not just the sum of the values of its members - it must include a notion of how well items go together. In this paper, we study drafting in the context of the card game Magic: The Gathering. We propose the use of a contextual preference network, which learns to compare two possible extensions of a given deck of cards. We demonstrate that the resulting network is better able to evaluate card decks in this game than previous attempts.
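The core idea -- scoring a candidate card in the context of the cards already drafted, then comparing two candidates -- can be sketched numerically. The bilinear architecture, dimensions, and random weights below are assumptions for illustration, not the paper's trained Siamese network.

```python
import numpy as np

# Sketch of contextual preference scoring for drafting: the deck context is
# the sum of its card embeddings, and a candidate card is scored together
# with that context, so a card's value depends on the deck so far.

rng = np.random.default_rng(0)
NUM_CARDS, DIM = 20, 8
card_emb = rng.normal(size=(NUM_CARDS, DIM))   # one (learned) vector per card
W = rng.normal(size=(DIM, DIM))                # context-card interaction weights

def deck_context(deck):
    """Represent the partially drafted deck as the sum of its card embeddings."""
    return card_emb[deck].sum(axis=0) if deck else np.zeros(DIM)

def score(deck, card):
    """How well `card` extends `deck`: a bilinear context-card interaction."""
    return deck_context(deck) @ W @ card_emb[card]

def prefer(deck, a, b):
    """Probability that extending `deck` with card `a` beats card `b`."""
    return 1.0 / (1.0 + np.exp(score(deck, b) - score(deck, a)))

deck = [0, 3, 7]
p = prefer(deck, 4, 9)
```

By construction the comparison is a proper preference: prefer(deck, a, b) and prefer(deck, b, a) sum to one, and swapping the deck changes which candidate is favoured.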

Artificial Intelligence System Able to Move Individual Molecules


A team of researchers at Electronic Arts has recently experimented with various artificial intelligence algorithms, including reinforcement learning models, to automate aspects of video game creation. The researchers hope that the AI models can save their developers and animators time on repetitive tasks like coding character movement. Designing a video game, particularly the large, triple-A video games made by major game companies, requires thousands of hours of work. As video game consoles, computers, and mobile devices become more powerful, video games themselves become more and more complex. Game developers are searching for ways to produce more game content with less effort; for example, they often use procedural generation algorithms to produce landscapes and environments.

Accelerating and Improving AlphaZero Using Population Based Training Artificial Intelligence

AlphaZero has been very successful in many games. Unfortunately, it still consumes a huge amount of computing resources, the majority of which is spent in self-play. Hyperparameter tuning exacerbates the training cost, since each hyperparameter configuration requires its own run, during which it generates its own self-play records. As a result, multiple runs are usually needed to cover different hyperparameter configurations. This paper proposes using population based training (PBT) to tune hyperparameters dynamically and improve playing strength during training. Another significant advantage is that this method requires only a single run, at a small additional time cost: the time for generating self-play records remains unchanged, even though the time for optimization increases relative to the standard AlphaZero training algorithm. In our experiments on 9x9 Go, the PBT method achieves a higher win rate than the baselines, each of which uses a fixed hyperparameter configuration and is trained individually. For 19x19 Go, PBT likewise improves playing strength: the PBT agent obtains up to a 74% win rate against ELF OpenGo, an open-source state-of-the-art AlphaZero program using a neural network of comparable capacity, compared to a 47% win rate for a saturated non-PBT agent under the same circumstances.
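PBT's dynamic tuning boils down to an exploit-and-perturb loop over a population of workers. The sketch below uses a stand-in quadratic objective in place of measured playing strength; the fitness function, population size, and quartile choices are illustrative assumptions, not the paper's AlphaZero setup.

```python
import random

# Toy sketch of population based training (PBT): several workers train in
# parallel; periodically the weakest copy hyperparameters from the strongest
# and perturb them, so tuning happens within a single run.

random.seed(0)

def fitness(lr):
    # Stand-in for measured playing strength: peaks at lr = 0.1.
    return -(lr - 0.1) ** 2

population = [{"lr": random.uniform(0.0, 1.0)} for _ in range(8)]
init_best = max(fitness(m["lr"]) for m in population)

for step in range(50):
    # Evaluate and rank the population (here fitness is immediate;
    # in PBT proper, each member would train between evaluations).
    population.sort(key=lambda m: fitness(m["lr"]), reverse=True)
    # Exploit: the bottom two members copy from the top two.
    for loser, winner in zip(population[-2:], population[:2]):
        loser["lr"] = winner["lr"]
        # Explore: perturb the inherited hyperparameter.
        loser["lr"] *= random.choice([0.8, 1.2])

best = max(population, key=lambda m: fitness(m["lr"]))
```

Because top-ranked members are never overwritten, the best fitness in the population can only improve over the run; only the weakest workers are replaced and perturbed.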

Artificial Intelligence Game Talk, University of Alberta, Hex and Chess


The University of Alberta created the first Computing Science department in Canada in 1964. It has a long tradition of research in AI (it is rated 3rd in the world in machine learning) and has also led the development of AI for strategy games -- among these checkers, chess, Go, and poker -- with results that can be commercialized in non-game applications as well. The evening's talks were given by Jonathan Schaeffer (computer chess) and Ryan Hayward (the strategy game Hex).

AI Holds the Better Hand

Communications of the ACM

Although games of skill like Go and chess have long been touchstones for intelligence, programmers have gotten steadily better at crafting programs that can now beat even the best human opponents. Only recently, however, has artificial intelligence (AI) begun to successfully challenge humans in the much more popular (and lucrative) game of poker. Part of what makes poker difficult is that the luck of the draw in this card game introduces an intrinsic randomness (although randomness is also an element of games like backgammon, at which software has beaten humans for decades). More important, though, is that in the games where computers previously have triumphed, players have "perfect information" about the state of the play up until that point. "Randomness is not nearly as hard a problem," said Michael Bowling of the University of Alberta in Canada.

A Bot Backed by Elon Musk Has Made an AI Breakthrough in Video Game World


Artificial-intelligence research group OpenAI said it created software capable of beating teams of five skilled human players in the video game Dota 2, a milestone in computer science. The achievement puts San Francisco-based OpenAI, whose backers include billionaire Elon Musk, ahead of other artificial-intelligence researchers in developing software that can master complex games combining fast, real-time action, longer-term strategy, imperfect information and team play. The ability to learn these kinds of video games at human or super-human levels is important for the advancement of AI because they more closely approximate the uncertainties and complexity of the real world than games such as chess, which IBM's software mastered in the late 1990s, or Go, which was conquered in 2016 with software created by DeepMind, the London-based AI company owned by Alphabet Inc. Dota 2 is a multiplayer science-fiction fantasy video game created by Bellevue, Washington-based Valve Corp. Each team is assigned a base on opposing ends of a map that can only be learned through exploration. Each player controls a separate character with unique powers and weapons.