poker


The Deck Is Not Rigged: Poker and the Limits of AI

#artificialintelligence

Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player -- or much of a poker fan, in fact -- but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely -- a view shared years later by Sandholm in his research with artificial intelligence.

WHAT I LEFT OUT is a recurring feature in which book authors are invited to share anecdotes and narratives that, for whatever reason, did not make it into their final manuscripts. In this installment, Maria Konnikova shares a story that was left out of "The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win" (Penguin Press).

"Poker is the main benchmark and challenge program for games of imperfect information," Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh.


Facebook develops AI algorithm that learns to play poker on the fly

#artificialintelligence

Facebook researchers have developed a general AI framework called Recursive Belief-based Learning (ReBeL) that they say achieves better-than-human performance in heads-up, no-limit Texas hold'em poker while using less domain knowledge than any prior poker AI. They assert that ReBeL is a step toward developing universal techniques for multi-agent interactions -- in other words, general algorithms that can be deployed in large-scale, multi-agent settings. Potential applications run the gamut from auctions, negotiations, and cybersecurity to self-driving cars and trucks. Combining reinforcement learning with search at AI model training and test time has led to a number of advances. Reinforcement learning is where agents learn to achieve goals by maximizing rewards, while search is the process of navigating from a start to a goal state.
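
To make the "reinforcement learning plus search" pattern concrete, here is a minimal sketch of the general idea: a depth-limited lookahead that falls back on a learned value estimate at the frontier. This is not Facebook's ReBeL -- the toy game (a take-1-to-3 Nim variant) and the stand-in value function are illustrative assumptions.

```python
def value_estimate(stones: int) -> float:
    """Stand-in for a learned value network. In this Nim variant (take 1-3
    stones per turn, taking the last stone wins), positions with
    stones % 4 == 0 are losses for the player to move."""
    return -1.0 if stones % 4 == 0 else 1.0

def negamax(stones: int, depth: int) -> float:
    """Depth-limited search: exact where we can look ahead, learned elsewhere."""
    if stones == 0:
        return -1.0                      # opponent took the last stone; we lost
    if depth == 0:
        return value_estimate(stones)    # learned estimate replaces deeper search
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    print(negamax(10, depth=4))          # 10 % 4 != 0, so the mover wins: 1.0
```

ReBeL's actual contribution is to make this pattern work under imperfect information, where the analogous search runs over public belief states rather than raw game states.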


Bot can beat humans in multiplayer hidden-role games

#artificialintelligence

MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret. Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world's first bot that can beat professionals in multiplayer poker. DeepMind's AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag.


Chess grandmaster Garry Kasparov predicts AI will disrupt 96 percent of all jobs

#artificialintelligence

IBM's Deep Blue wasn't supposed to defeat chess grandmaster Garry Kasparov when the two had their 1997 rematch. Computer experts of the time said machines would never beat us at strategy games because human ingenuity would always triumph over brute-force analysis. After Kasparov's loss, the experts didn't miss a beat: they said chess was too easy and postulated that machines would never beat us at Go. Go champion Lee Sedol's loss against DeepMind's AlphaGo proved them wrong there. Then the experts said AI would never beat us at games where strategy could be overcome by human creativity, such as poker.


From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization

arXiv.org Machine Learning

In this paper we investigate the Follow the Regularized Leader dynamics in sequential imperfect information games (IIG). We generalize existing results of Poincaré recurrence from normal-form games to zero-sum two-player imperfect information games and other sequential game settings. We then investigate how adapting the reward (by adding a regularization term) of the game can give strong convergence guarantees in monotone games. We continue by showing how this reward adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium. Finally, we show how these insights can be directly used to build state-of-the-art model-free algorithms for zero-sum two-player IIG.
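
For readers unfamiliar with Follow the Regularized Leader (FTRL), here is a minimal self-play sketch in a zero-sum normal-form game, rock-paper-scissors; with an entropic regularizer, FTRL reduces to the familiar multiplicative-weights update. The payoff matrix and learning rate are illustrative choices, not taken from the paper, and the sketch shows only the baseline dynamics: as the abstract notes, the last iterates of plain FTRL cycle, while the time averages approach the uniform Nash equilibrium; the paper's reward regularization is what makes the iterates themselves converge.

```python
import numpy as np

A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)  # row player's payoffs (rock-paper-scissors)

eta = 0.1                                # learning rate (illustrative)
gx, gy = np.zeros(3), np.zeros(3)        # cumulative payoff vectors
avg_x, avg_y = np.zeros(3), np.zeros(3)
T = 20000

def ftrl(g):
    """Entropy-regularized FTRL: play the softmax of cumulative payoffs."""
    z = np.exp(eta * (g - g.max()))      # subtract max for numerical stability
    return z / z.sum()

for _ in range(T):
    x, y = ftrl(gx), ftrl(gy)
    gx += A @ y                          # row player's expected payoff per action
    gy += -A.T @ x                       # column player plays the negated game
    avg_x += x
    avg_y += y

print(avg_x / T, avg_y / T)              # both approach (1/3, 1/3, 1/3)
```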


Geek of the Week: Trupanion's David Jaw uses machine learning to help facilitate better pet care

#artificialintelligence

Plenty of people have a pet project that they are drawn to or consider themselves particularly good at. For David Jaw, who leads the data science department at Trupanion in Seattle, the projects actually are about pets. Jaw, GeekWire's latest Geek of the Week, uses artificial intelligence and machine learning to help automate medical insurance claims for pets, streamlining the process and removing the worry about what's covered and what's not. Born and raised in a suburb near Toronto, Jaw moved with his family to Albuquerque, N.M., when he was 13 years old. He stayed there through college, where he studied mechanical engineering, pursuing a childhood dream of designing airplanes and spaceships.


Explained: The Artificial Intelligence Race is an Arms Race

#artificialintelligence

Most chess computers play a purely mathematical strategy in a game yet to be solved. They are raw calculators and look like it too. AlphaZero, at least in style, appears to play every bit like a human: it makes long-term positional plays as if it can visualize the board, spectacular piece sacrifices that no conventional engine would pull off, and exploitative exchanges so complex they would make an ordinary computer cringe, if it were able. In short, AlphaZero is a genuine intelligence.


Fictitious Play Outperforms Counterfactual Regret Minimization

arXiv.org Artificial Intelligence

In two-player zero-sum games a Nash equilibrium strategy is guaranteed to win (or tie) in expectation against any opposing strategy by the minimax theorem. In games with more than two players there can be multiple equilibria with different values to the players, and following one has no performance guarantee; however, it was shown that a Nash equilibrium strategy defeated a variety of agents submitted for a class project in a 3-player imperfect-information game, Kuhn poker [13]. This demonstrates that Nash equilibrium strategies can be successful in practice despite the fact that they do not have a performance guarantee. While Nash equilibrium can be computed in polynomial time for two-player zero-sum games, it is PPAD-hard to compute for nonzero-sum games and games with 3 or more agents, and it is widely believed that no efficient algorithms exist [8, 9]. Counterfactual regret minimization (CFR) is an iterative self-play procedure that has been proven to converge to Nash equilibrium in two-player zero-sum games [28].
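
As a reference point for the first algorithm in the title, here is a minimal fictitious-play sketch in a two-player zero-sum matrix game (again rock-paper-scissors for concreteness): each player best-responds to the opponent's empirical mixture of past plays, and in zero-sum games the empirical frequencies converge to a Nash equilibrium. This illustrates only the textbook procedure, not the paper's experiments or its poker benchmarks.

```python
import numpy as np

A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)     # row player's payoffs (rock-paper-scissors)

counts_x = np.ones(3)                       # row player's action counts (uniform start)
counts_y = np.ones(3)                       # column player's action counts

for _ in range(20000):
    x_hat = counts_x / counts_x.sum()       # empirical mixture of row's past plays
    y_hat = counts_y / counts_y.sum()       # empirical mixture of column's past plays
    counts_x[np.argmax(A @ y_hat)] += 1     # row best-responds to column's history
    counts_y[np.argmax(-A.T @ x_hat)] += 1  # column best-responds to row's history

print(counts_x / counts_x.sum(), counts_y / counts_y.sum())  # near (1/3, 1/3, 1/3)
```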


Game (Theory) for AI? An Illustrated Guide for Everyone

#artificialintelligence

I want to start off with a quick question: can you recognize the two personalities in the image below? I'm certain you got one right. For most of us who were math enthusiasts from an early age, the movie "A Beautiful Mind" is inextricably embedded in our memory. In it, Russell Crowe plays John Nash, a Nobel Prize winner in economics (and the person on the left-hand side above). You will likely remember the iconic scene, often summed up as "Don't go after the blonde": "….the best outcome would come when everyone in the group is doing what's best for himself and the group."
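
Since the excerpt turns on Nash's idea, a minimal sketch may help: a strategy profile is a Nash equilibrium when no player can gain by deviating unilaterally. The prisoner's dilemma payoffs below are a standard textbook choice, not taken from the article, and they also show that the equilibrium need not be the best outcome for the group.

```python
# payoffs[(row_action, col_action)] = (row payoff, col payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(r, c):
    """True when no unilateral deviation improves either player's payoff."""
    ok_row = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in actions)
    ok_col = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in actions)
    return ok_row and ok_col

for r in actions:
    for c in actions:
        if is_nash(r, c):
            print(r, c)                 # -> defect defect, worse for both than (3, 3)
```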


Artificial intelligence conquers StarCraft II in 'unimaginably unusual' AI breakthrough

#artificialintelligence

A major artificial intelligence milestone has been passed after an AI algorithm was able to defeat some of the world's best players at the real-time strategy game StarCraft II. Researchers at leading AI firm DeepMind developed a programme called AlphaStar capable of reaching the top eSport league for the popular video game, ranking among the top 0.2 per cent of all human players. A paper detailing the achievement, published in the scientific journal Nature, reveals how a technique called reinforcement learning allowed the algorithm to essentially teach itself effective strategies and counter-strategies. "The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge," said David Silver, a principal research scientist at DeepMind.