Time to Fold, Humans: Poker-Playing AI Beats Pros at Texas Hold'em

#artificialintelligence

It is no mystery why poker is such a popular pastime: the dynamic card game produces drama in spades as players are locked in a complicated tango of acting and reacting that becomes increasingly tense with each escalating bet. The same elements that make poker so entertaining have also created a complex problem for artificial intelligence (AI). A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold'em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on artificial neural networks trained ahead of time to give it a form of poker intuition. During play, DeepStack uses that intuition to break a complicated game down into smaller, more manageable pieces that it can work through on the fly.
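To make the "smaller, more manageable pieces" idea concrete, here is a minimal, hypothetical sketch: search only a few actions deep, then let a pre-trained estimator stand in for everything beyond that frontier. This is not the published DeepStack algorithm (which uses continual re-solving over counterfactual values computed by deep value networks); the toy game, the `State` fields, and the `fake_intuition` function are all invented for illustration.

```python
# Sketch: depth-limited lookahead with a learned "intuition" at the frontier.
# Everything here is a simplified stand-in, not the actual DeepStack method.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class State:
    pot: int
    to_act: int        # 0 = our player, 1 = opponent
    history: tuple     # actions taken so far, e.g. ("bet", "call")


def legal_actions(state: State) -> List[str]:
    # Toy action set; a real HUNL state would enumerate bet sizes, etc.
    return ["fold", "call", "bet"]


def apply_action(state: State, action: str) -> State:
    added = {"fold": 0, "call": 1, "bet": 2}[action]
    return State(pot=state.pot + added,
                 to_act=1 - state.to_act,
                 history=state.history + (action,))


def depth_limited_value(state: State,
                        depth: int,
                        intuition: Callable[[State], float]) -> float:
    """Value of `state` for player 0, searching at most `depth` plies."""
    if state.history and state.history[-1] == "fold":
        # Terminal: the player who folded forfeits the pot to the other player.
        return float(state.pot) if state.to_act == 0 else -float(state.pot)
    if depth == 0:
        # Frontier reached: fall back on the trained estimator ("intuition").
        return intuition(state)
    values = [depth_limited_value(apply_action(state, a), depth - 1, intuition)
              for a in legal_actions(state)]
    # Player 0 maximizes its value; the opponent minimizes it (zero-sum).
    return max(values) if state.to_act == 0 else min(values)


# Stand-in for a neural network trained offline on many poker situations.
def fake_intuition(state: State) -> float:
    return 0.1 * state.pot


root = State(pot=3, to_act=0, history=())
print(depth_limited_value(root, depth=4, intuition=fake_intuition))
```

The design point the sketch captures is that the search never has to reach the end of the game: the pre-trained evaluator bounds how much of the tree must be expanded at decision time.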


Computer's defeat of professional poker players represents 'paradigm shift' in AI, say scientists

#artificialintelligence

In a feat reminiscent of the controversial victory by supercomputer 'Deep Blue' over world chess champion Garry Kasparov, a computer program has managed to beat a string of professional poker players at the game. DeepStack, as it was called, defeated 10 out of 11 players who took part in a total of 3,000 games as part of a scientific study into artificial intelligence. The 11th player also lost, but by a margin that the researchers decided was not large enough to be statistically significant. This is not the first time a computer has won at poker. Libratus, a program developed by Carnegie Mellon University academics, won $1.76m (£1.4m) from professionals in January, for example.
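For a rough sense of what "not statistically significant" means here, the basic check is whether the program's average winnings per game are reliably above zero given the noise across games. The study's own analysis is more involved (poker results are extremely high-variance and the authors used variance-reduction techniques not shown here); the function and the simulated per-game results below are purely illustrative.

```python
# Sketch: is a measured win rate distinguishable from zero?
# A simple normal-approximation test over hypothetical per-game results.

import math
import random


def win_rate_is_significant(results, z_threshold=1.96):
    """Return the mean win rate, its z-score, and whether it differs from
    zero at roughly the 95% level under a normal approximation."""
    n = len(results)
    mean = sum(results) / n
    var = sum((x - mean) ** 2 for x in results) / (n - 1)
    std_err = math.sqrt(var / n)
    z = mean / std_err
    return mean, z, abs(z) > z_threshold


# Hypothetical per-game results (positive = the program won chips that game).
random.seed(0)
games = [random.gauss(5, 200) for _ in range(3000)]
mean, z, significant = win_rate_is_significant(games)
print(f"mean win rate: {mean:.1f}, z = {z:.2f}, significant: {significant}")
```

A small average edge can disappear into the noise over a few thousand games, which is why one opponent's loss was not counted as a statistically meaningful result.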


Socially Consistent Characters in Player-Specific Stories

AAAI Conferences

In the context of interactive, virtual experiences, the use of personality models to maintain consistent character behaviour is becoming more widespread in both industry and academia. Most current techniques, however, are limited in one of three ways: either they overly restrict user actions, have a high cost for creating varied content, or rely on a representation that prohibits conveying complex content to the user.  Toward addressing these issues, we introduce Socially Consistent Role Passing, a mechanism for ensuring consistent character behaviour that leverages the design of PaSSAGE, an existing system for generating adaptive, interactive stories.  While results from previous human user studies have shown that PaSSAGE improves the enjoyment of players with little gaming experience, we present results from a new study showing that PaSSAGE's adaptive stories, augmented with Socially Consistent Role Passing, improve the enjoyment of all players versus a set of fixed-structure alternatives.


Playing Minecraft can boost your creativity levels if you CHOOSE to play the game, say scientists

Daily Mail - Science & tech

Playing video games like Minecraft may help to get your child's creative juices flowing, new research suggests. Video games that foster creative freedom can increase creativity under certain conditions, according to a study from Iowa State University (ISU). Their experiment compared the effect of playing Minecraft, with or without instruction, to watching a TV show or playing a race car video game. Those given the freedom to play Minecraft without instruction were most creative, experts found.


Dynamic Adaptation and Opponent Exploitation in Computer Poker

AAAI Conferences

As a classic example of an imperfect-information game, Heads-Up No-Limit Texas Hold'em (HUNL) has been studied extensively in recent years. While state-of-the-art approaches based on Nash equilibrium have been successful, they lack the ability to model and exploit opponents effectively. This paper presents an evolutionary approach to discover opponent models based on Long Short-Term Memory neural networks and Pattern Recognition Trees. Experimental results showed that poker agents built with this method can adapt to opponents they have never seen in training and exploit weak strategies far more effectively than Slumbot 2017, one of the cutting-edge Nash-equilibrium-based poker agents. In addition, agents evolved through playing against relatively weak rule-based opponents tied statistically with Slumbot in heads-up matches. Thus, the proposed approach is a promising new direction for building high-performance adaptive agents in HUNL and other imperfect-information games.
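As a loose illustration of the evolutionary loop the abstract describes, the sketch below evolves agents by mutating the fittest candidates, where fitness is average winnings against a fixed, weak rule-based opponent. It deliberately abstracts away the paper's actual components (LSTM opponent models and Pattern Recognition Trees): the agents here are plain parameter vectors over a toy one-street game, and every name and constant is hypothetical.

```python
# Sketch: evolve poker-like agents against a weak rule-based opponent.
# Toy stand-in for the evolutionary approach; not the paper's architecture.

import random

ACTIONS = ["fold", "call", "raise"]


def agent_action(params, hand_strength):
    # Tiny linear "policy": score each action from hand strength, pick the best.
    scores = [params[i] * hand_strength + params[i + 3] for i in range(3)]
    return ACTIONS[scores.index(max(scores))]


def rule_based_opponent(hand_strength):
    # Weak fixed strategy for the agent to exploit: calls almost everything.
    return "raise" if hand_strength > 0.9 else "call"


def play_hand(params, rng):
    ours, theirs = rng.random(), rng.random()
    action = agent_action(params, ours)
    if action == "fold":
        return -1.0                       # forfeit the blind
    opp = rule_based_opponent(theirs)
    pot = 2.0 + (2.0 if action == "raise" else 0.0) + (2.0 if opp == "raise" else 0.0)
    return pot / 2 if ours > theirs else -pot / 2


def fitness(params, hands, rng):
    return sum(play_hand(params, rng) for _ in range(hands)) / hands


def evolve(generations=30, population=20, hands=500, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(6)] for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: fitness(p, hands, rng), reverse=True)
        elites = scored[: population // 4]
        # Refill the population with mutated copies of the elite candidates.
        pop = elites + [[w + rng.gauss(0, 0.2) for w in rng.choice(elites)]
                        for _ in range(population - len(elites))]
    return max(pop, key=lambda p: fitness(p, hands, rng))


best = evolve()
print("evolved win rate per hand:", round(fitness(best, 2000, random.Random(7)), 3))
```

The point of the loop is the one made in the abstract: agents improve by repeatedly playing an exploitable opponent and keeping whatever parameter changes win more, rather than by solving for an equilibrium.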