Towards Strategic Kriegspiel Play with Opponent Modeling

AAAI Conferences

Kriegspiel, or partially observable chess, is appealing to the AI community due to its similarity to real-world applications in which a decision maker is not the only agent changing the environment. This paper applies the framework of Interactive POMDPs to design a competent Kriegspiel player. The novel element, compared to existing approaches, is to model the opponent as a competent player and to predict its likely moves. The moves of our own player can then be computed based on these predictions. The problem is challenging because there are many possible world states the agent has to keep track of.
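
To make the interactive-modeling idea concrete, here is a minimal Python sketch of a level-1 belief update: each hypothetical world state is pushed through a predicted move distribution for a competent (noisily rational) opponent, then reweighted by how well the result explains the player's partial observation. The toy domain and every name in it (`apply_move`, `value`, `obs_likelihood`) are hypothetical stand-ins, not code from the paper.

```python
import math
from collections import defaultdict

# Toy stand-in for the Kriegspiel domain: states and moves are opaque
# tokens, and apply_move, value, and obs_likelihood would come from a
# real game engine. All names here are illustrative, not the paper's.

def apply_move(state, move):
    return state + move                          # toy transition

def value(state):
    return -abs(state)                           # toy heuristic value

def opponent_policy(state, moves=(-1, 1), temp=1.0):
    """Level-1 opponent model: a competent opponent plays stronger
    moves more often (softmax over the value of the resulting state)."""
    scores = {m: math.exp(value(apply_move(state, m)) / temp) for m in moves}
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

def obs_likelihood(state, observation):
    """Probability of our partial observation given a world state."""
    return 1.0 if state % 2 == observation else 0.1   # toy sensor model

def update_belief(belief, observation):
    """I-POMDP-flavoured belief update: push each hypothetical world
    state through the predicted opponent move distribution, then
    reweight by how well the result explains our observation."""
    new_belief = defaultdict(float)
    for state, prob in belief.items():
        for move, p_move in opponent_policy(state).items():
            nxt = apply_move(state, move)
            new_belief[nxt] += prob * p_move * obs_likelihood(nxt, observation)
    z = sum(new_belief.values())
    return {s: w / z for s, w in new_belief.items()}

belief = {0: 0.5, 3: 0.5}        # uncertainty over two candidate world states
print(update_belief(belief, observation=1))
```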


CAPIR: Collaborative Action Planning with Intention Recognition

AAAI Conferences

We apply decision-theoretic techniques to construct non-player characters that are able to assist a human player in collaborative games. The method is based on solving Markov decision processes, which can be difficult when the game state is described by many variables. To scale to more complex games, the method allows decomposition of a game task into subtasks, each of which can be modelled by a Markov decision process. Intention recognition is used to infer the subtask that the human is currently performing, allowing the helper to assist with the correct task. Experiments show that the method can be effective, giving near-human-level performance when assisting a human player in a collaborative game.
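
The intention-recognition step can be read as a Bayesian filter over subtask MDPs, as the following sketch shows: the human is assumed to act noisily rationally with respect to the Q-values of whichever subtask they are pursuing, and each observed action sharpens a posterior over subtasks. The Q tables, actions, and states below are invented toy values, not CAPIR's.

```python
import math

# Minimal sketch of the intention-recognition idea: each subtask is a
# solved MDP whose Q-values are assumed given (hard-coded toys here).
# The helper keeps a posterior over subtasks and updates it from the
# human's observed actions. Names and numbers are illustrative.

ACTIONS = ("left", "right")

# Hypothetical Q(state, action) tables for two subtasks.
Q = {
    "fetch_key": {("s0", "left"): 1.0, ("s0", "right"): 0.2},
    "open_door": {("s0", "left"): 0.1, ("s0", "right"): 0.9},
}

def action_likelihood(subtask, state, action, temp=0.5):
    """P(action | state, subtask): a noisily-rational human acting
    greedily w.r.t. the subtask's Q-values (softmax)."""
    scores = {a: math.exp(Q[subtask][(state, a)] / temp) for a in ACTIONS}
    return scores[action] / sum(scores.values())

def update_posterior(posterior, state, action):
    """Bayesian update of P(subtask) after observing one human action."""
    post = {t: p * action_likelihood(t, state, action)
            for t, p in posterior.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

posterior = {"fetch_key": 0.5, "open_door": 0.5}   # uniform prior
posterior = update_posterior(posterior, "s0", "right")
helper_target = max(posterior, key=posterior.get)  # assist the likely subtask
print(posterior, "->", helper_target)
```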


Artificial intelligence goes deep to beat humans at poker

#artificialintelligence

Machines are finally getting the best of humans at poker. Two artificial intelligence (AI) programs have proven they "know when to hold'em, and when to fold'em," recently beating human professional card players for the first time at the popular poker game of Texas Hold'em. And this week the team behind one of those AIs, known as DeepStack, has divulged some of the secrets to its success--a triumph that could one day lead to AIs that perform tasks ranging from beefing up airline security to simplifying business negotiations. AIs have long dominated games such as chess, and last year one conquered Go, but they have made relatively lousy poker players. With DeepStack, researchers have broken that losing streak by combining new algorithms with deep machine learning, a form of computer science that in some ways mimics the human brain, allowing machines to teach themselves.


Time to Fold, Humans: Poker-Playing AI Beats Pros at Texas Hold'em

#artificialintelligence

It is no mystery why poker is such a popular pastime: the dynamic card game produces drama in spades as players are locked in a complicated tango of acting and reacting that becomes increasingly tense with each escalating bet. The same elements that make poker so entertaining have also created a complex problem for artificial intelligence (AI). A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold'em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on the use of artificial neural networks that researchers trained ahead of time to develop poker intuition. During play, DeepStack uses its poker smarts to break down a complicated game into smaller, more manageable pieces that it can then work through on the fly.
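
A rough illustration of the "smaller, more manageable pieces" idea: search only a few actions deep and let a pre-trained network estimate the value of everything beyond the horizon. DeepStack's actual method, continual re-solving with counterfactual regret minimization over hand ranges, is far more sophisticated; every function below (`value_net`, `legal_actions`, `step`) is a hypothetical stub.

```python
# Toy illustration only: DeepStack's published algorithm is far more
# involved than this depth-limited negamax. value_net, legal_actions,
# and step are hypothetical stubs, not the real system.

def value_net(state):
    """Stand-in for the pre-trained 'intuition' network: estimates the
    expected payoff of a state for the player to act."""
    return float(state % 5) - 2.0                # toy evaluation

def legal_actions(state):
    return ("fold", "call", "raise")

def step(state, action):
    return state * 3 + ("fold", "call", "raise").index(action) + 1

def lookahead(state, depth):
    """Depth-limited search: solve the small subgame on the fly and
    consult the learned evaluator at the depth horizon."""
    if depth == 0:
        return value_net(state)
    # Negamax-style: the opponent's best reply is our worst case.
    return max(-lookahead(step(state, a), depth - 1)
               for a in legal_actions(state))

def choose_action(state, depth=2):
    return max(legal_actions(state),
               key=lambda a: -lookahead(step(state, a), depth - 1))

print(choose_action(7))
```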


An Automated Model-Based Adaptive Architecture in Modern Games

AAAI Conferences

This paper proposes an automatic model-based approach that enables adaptive decision making in modern virtual games. It builds upon the Integrated MDP and POMDP Learning AgeNT (IMPLANT) architecture, which has been shown to provide plausible adaptive decision making in modern games but suffers from a highly time-consuming manual model-specification problem. By incorporating an automated prioritized-sweeping-based model builder for the MDP and using the Tactical Agent Personality for the POMDP, this paper aims to resolve that problem. Empirical proof of concept is given through an implementation in a modern game scenario, in which the enhanced IMPLANT agent exhibits superior adaptation performance over the old IMPLANT agent while eliminating manual model specification and still maintaining plausible speeds.
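
For readers unfamiliar with the model-building technique, here is a minimal sketch of prioritized sweeping: the agent learns a transition model from experience counts and propagates value changes backwards through predecessor states, largest corrections first, so that no MDP model needs to be specified by hand. The class below is illustrative only and is not the IMPLANT implementation.

```python
import heapq
from collections import defaultdict

# Illustrative sketch of prioritized sweeping, not the IMPLANT code:
# the agent learns transition counts and rewards from experience and
# propagates value changes backwards, largest corrections first, so no
# hand-specified MDP model is required.

class PrioritizedSweeping:
    def __init__(self, gamma=0.95, theta=1e-3, n_sweeps=10):
        self.gamma, self.theta, self.n_sweeps = gamma, theta, n_sweeps
        self.counts = defaultdict(lambda: defaultdict(int))  # (s,a) -> {s': n}
        self.reward = {}                                     # (s,a) -> last r
        self.V = defaultdict(float)                          # learned values
        self.pred = defaultdict(set)                         # s' -> {(s,a)}

    def observe(self, s, a, r, s2):
        """Fold one experience tuple into the learned model, then sweep."""
        self.counts[(s, a)][s2] += 1
        self.reward[(s, a)] = r          # last-reward estimate, for brevity
        self.pred[s2].add((s, a))
        self._sweep(s)

    def _backup(self, s):
        """One-step Bellman backup using the learned empirical model."""
        vals = []
        for (s0, a), succ in self.counts.items():
            if s0 != s:
                continue
            n = sum(succ.values())
            exp_v = sum(c / n * self.V[s2] for s2, c in succ.items())
            vals.append(self.reward[(s0, a)] + self.gamma * exp_v)
        return max(vals) if vals else self.V[s]

    def _sweep(self, s):
        """Propagate value changes to predecessors, biggest first."""
        pq = [(-abs(self._backup(s) - self.V[s]), s)]
        for _ in range(self.n_sweeps):
            if not pq:
                break
            neg_p, s = heapq.heappop(pq)
            if -neg_p < self.theta:
                break
            self.V[s] = self._backup(s)
            for sp, a in self.pred[s]:
                p = abs(self._backup(sp) - self.V[sp])
                if p > self.theta:
                    heapq.heappush(pq, (-p, sp))

agent = PrioritizedSweeping()
agent.observe(s=0, a="go", r=1.0, s2=1)   # toy experience tuple
print(dict(agent.V))
```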