Facebook develops AI algorithm that learns to play poker on the fly

#artificialintelligence

Facebook researchers have developed a general AI framework called Recursive Belief-based Learning (ReBeL) that they say achieves better-than-human performance in heads-up, no-limit Texas hold'em poker while using less domain knowledge than any prior poker AI. They assert that ReBeL is a step toward developing universal techniques for multi-agent interactions -- in other words, general algorithms that can be deployed in large-scale, multi-agent settings. Potential applications run the gamut from auctions, negotiations, and cybersecurity to self-driving cars and trucks. Combining reinforcement learning with search at AI model training and test time has led to a number of advances. Reinforcement learning is where agents learn to achieve goals by maximizing rewards, while search is the process of navigating from a start to a goal state.
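
The paragraph above sketches the general recipe of pairing reinforcement learning with search. The toy example below is only a minimal illustration of that recipe, not ReBeL: it learns a value function for a made-up line-world game from rewards, then uses a one-step lookahead over those learned values at decision time. The game, GOAL, and step function are assumptions introduced purely for the illustration.

    # Toy illustration (not ReBeL): combine a learned value function
    # ("reinforcement learning": improving estimates from rewards) with a
    # one-step lookahead at decision time ("search": navigating toward a goal).
    # The game is a simple line world: start at state 0, reach GOAL for +1 reward.
    import random

    GOAL = 3
    ACTIONS = [-1, +1]                        # step left or step right
    values = {s: 0.0 for s in range(GOAL + 1)}

    def step(state, action):
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else 0.0
        return nxt, reward, nxt == GOAL

    # Training: learn state values with TD(0) updates while exploring randomly.
    for episode in range(2000):
        state = 0
        done = False
        while not done:
            nxt, reward, done = step(state, random.choice(ACTIONS))
            values[state] += 0.05 * (reward + 0.9 * values[nxt] - values[state])
            state = nxt

    # Test time: a one-step lookahead "search" guided by the learned values.
    state = 0
    for _ in range(10):                       # bounded walk from the start state
        if state == GOAL:
            break
        action = max(ACTIONS,
                     key=lambda a: step(state, a)[1] + 0.9 * values[step(state, a)[0]])
        state, _, _ = step(state, action)
        print("moved to state", state)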


On Strategy Stitching in Large Extensive Form Multiplayer Games

Neural Information Processing Systems

Computing a good strategy in a large extensive form game often demands an extraordinary amount of computer memory, necessitating the use of abstraction to reduce the game size. Typically, strategies from abstract games perform better in the real game as the granularity of abstraction is increased. This paper investigates two techniques for stitching a base strategy in a coarse abstraction of the full game tree, to expert strategies in fine abstractions of smaller subtrees. We provide a general framework for creating static experts, an approach that generalizes some previous strategy stitching efforts. In addition, we show that static experts can create strong agents for both 2-player and 3-player Leduc and Limit Texas Hold'em poker, and that a specific class of static experts can be preferred among a number of alternatives.
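
As a rough sketch of the stitching idea, the snippet below routes decisions to an expert strategy whenever the current history falls inside that expert's subtree and otherwise falls back to the base strategy from the coarse abstraction. The StitchedStrategy class, the string-valued histories, and the toy strategies are illustrative assumptions, not the paper's actual construction.

    # Illustrative sketch of strategy stitching (not the paper's implementation):
    # a base strategy covers the whole coarsely abstracted game, while expert
    # strategies override it inside the subtrees they were solved for.

    class StitchedStrategy:
        def __init__(self, base_strategy, experts):
            # experts: list of (subtree_test, expert_strategy) pairs, where
            # subtree_test(history) says whether the history lies in that subtree.
            self.base_strategy = base_strategy
            self.experts = experts

        def action_probabilities(self, history):
            for in_subtree, expert in self.experts:
                if in_subtree(history):
                    return expert(history)      # fine-grained expert takes over
            return self.base_strategy(history)  # otherwise play the base strategy

    # Hypothetical usage: the base strategy always calls, while an expert solved
    # for histories that begin with a raise plays more aggressively.
    base = lambda history: {"fold": 0.0, "call": 1.0, "raise": 0.0}
    raise_expert = lambda history: {"fold": 0.1, "call": 0.4, "raise": 0.5}
    stitched = StitchedStrategy(base, [(lambda h: h.startswith("raise"), raise_expert)])

    print(stitched.action_probabilities("check-check"))   # base strategy
    print(stitched.action_probabilities("raise-call"))    # expert strategy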


Facebook AI just beat professional poker players in a major artificial intelligence breakthrough

#artificialintelligence

Facebook has achieved a major milestone in artificial intelligence (AI) thanks to one of its systems beating professional poker players at six-player no-limit Texas hold'em. The Pluribus AI defeated renowned players including Darren Elias, who holds the record for most World Poker Tour titles. Beating poker pros has been a major challenge for AI researchers, as the best players must be skilled at bluffing and hard to predict. "Playing a six-player game rather than head-to-head requires fundamental changes in how the AI develops its playing strategy," said Noam Brown, a research scientist at Facebook AI. "We're elated with its performance and believe some of Pluribus's playing strategies might even change the way pros play the game." The breakthrough comes more than three years after an AI algorithm developed by Google-owned DeepMind helped a computer beat a human champion at the notoriously complicated board game Go for the first time.


Artificial Intelligence Masters The Game of Poker – What Does That Mean For Humans?

#artificialintelligence

While AI had some success beating humans at other games such as chess and Go (deterministic games with no hidden information), winning at poker proved more challenging because it requires strategy, intuition, and reasoning based on hidden information. Despite the challenges, artificial intelligence can now play, and win, poker. Artificial intelligence systems including DeepStack and Libratus paved the way for Pluribus, the AI that beat five other players in six-player Texas Hold'em, the most popular version of poker. This feat goes beyond games: it means artificial intelligence can now be extended to help solve some of the world's most challenging problems.


Superhuman AI for multiplayer poker

#artificialintelligence

Computer programs have shown superiority over humans in two-player games such as chess, Go, and heads-up, no-limit Texas hold'em poker. However, poker games usually include six players--a much trickier challenge for artificial intelligence than the two-player variant. Brown and Sandholm developed a program, dubbed Pluribus, that learned how to play six-player no-limit Texas hold'em by playing against five copies of itself (see the Perspective by Blair and Saffidine). When pitted against five elite professional poker players, or with five copies of Pluribus playing against one professional, the computer performed significantly better than humans over the course of 10,000 hands of poker. Science, this issue p. 885; see also p. 864
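
The summary notes that Pluribus learned by playing against copies of itself. The sketch below shows self-play in its simplest form, on rock-paper-scissors rather than poker: a regret-matching learner repeatedly plays against a copy that follows the same strategy, and its average strategy drifts toward the equilibrium. This is only an illustration of the self-play idea, not Pluribus's training procedure.

    # Minimal self-play sketch (not Pluribus itself): two copies of the same
    # regret-matching learner play rock-paper-scissors against each other, and
    # the average strategy approaches the uniform equilibrium.
    import random

    ACTIONS = ["rock", "paper", "scissors"]

    def payoff(a, b):
        """+1 if a beats b, -1 if it loses, 0 on a tie."""
        beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
        return 0 if a == b else (1 if beats[a] == b else -1)

    def strategy_from_regrets(regrets):
        positive = [max(r, 0.0) for r in regrets]
        total = sum(positive)
        return [p / total for p in positive] if total > 0 else [1 / 3] * 3

    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]

    for t in range(20000):
        strategy = strategy_from_regrets(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
        # The learner and its "copy" both sample from the same current strategy.
        me = random.choices(ACTIONS, weights=strategy)[0]
        opponent = random.choices(ACTIONS, weights=strategy)[0]
        # Accumulate regret: how much better each alternative would have done.
        for i, alt in enumerate(ACTIONS):
            regrets[i] += payoff(alt, opponent) - payoff(me, opponent)

    average = [s / sum(strategy_sum) for s in strategy_sum]
    print("average self-play strategy:", dict(zip(ACTIONS, average)))  # roughly 1/3 each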


Facebook and CMU's 'superhuman' poker AI beats human pros

#artificialintelligence

AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook's AI lab and Carnegie Mellon University, has bested some of the world's top players in a series of games of six-person no-limit Texas Hold'em poker. Over 12 days and 10,000 hands, the AI system named Pluribus faced off against 12 pros in two different settings. In one, the AI played alongside five human players; in the other, five versions of the AI played with one human player (the computer programs were unable to collaborate in this scenario). Pluribus won an average of $5 per hand with hourly winnings of around $1,000 -- a "decisive margin of victory," according to the researchers.
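
As a quick consistency check on the reported figures (an inference from the article's own numbers, not something it states), $1,000 per hour at $5 per hand implies a pace of roughly 200 hands per hour:

    # Back-of-the-envelope check of the reported figures; the implied pace of
    # play is inferred, not stated in the article.
    winnings_per_hand = 5        # dollars, as reported
    winnings_per_hour = 1000     # dollars, as reported
    hands = 10000                # total hands in the experiment

    hands_per_hour = winnings_per_hour / winnings_per_hand
    print("implied hands per hour:", hands_per_hour)          # 200.0
    print("implied hours of play:", hands / hands_per_hour)   # 50.0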


AI program beats pros in six-player poker in world first - Taipei Times

#artificialintelligence

Artificial intelligence (AI) programs have bested humans in checkers, chess, go and two-player poker, but multiplayer poker was always believed to be a bigger ask. Researchers at Carnegie Mellon University, working with Facebook's AI initiative, on Thursday announced that their program defeated a group of top professionals in six-player no-limit Texas Hold'em. The program, Pluribus, and its big wins were described in the US journal Science. "Pluribus achieved superhuman performance at multiplayer poker, which is a recognized milestone in artificial intelligence and in game theory," Carnegie Mellon computer science professor Tuomas Sandholm said. Sandholm worked with Noam Brown, who is working at Facebook AI while completing his doctorate at the Pittsburgh-based university.


Superhuman AI for multiplayer poker

#artificialintelligence

In recent years there have been great strides in artificial intelligence (AI), with games often serving as challenge problems, benchmarks, and milestones for progress. Poker has served for decades as such a challenge problem. Past successes in such benchmarks, including poker, have been limited to two-player games. However, poker in particular is traditionally played with more than two players. Multiplayer games present fundamental additional issues beyond those in two-player games, and multiplayer poker is a recognized AI milestone. In this paper we present Pluribus, an AI that we show is stronger than top human professionals in six-player no-limit Texas hold'em poker, the most popular form of poker played by humans. Poker has served as a challenge problem for the fields of artificial intelligence (AI) and game theory for decades (1). In fact, the foundational papers on game theory used poker to illustrate their concepts (2, 3). The reason for this choice is simple: no other popular recreational game captures the challenges of hidden information as effectively and as elegantly as poker. Although poker has been useful as a benchmark for new AI and game-theoretic techniques, the challenge of hidden information in strategic settings is not limited to recreational games.
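
To make the hidden-information point concrete, the example below enumerates the information sets of Kuhn poker, a standard three-card toy game from the game-theory literature: the acting player cannot tell apart deals that differ only in the opponent's hidden card, which is precisely the difficulty absent from perfect-information games such as chess and Go. The example is illustrative and is not drawn from the paper.

    # Illustrative example (not from the paper): hidden information in Kuhn poker,
    # a three-card toy game. The acting player sees only their own card, so
    # several distinct deals collapse into a single information set.
    from itertools import permutations

    CARDS = ["J", "Q", "K"]

    # All possible deals: (player 1's card, player 2's card), without repetition.
    deals = list(permutations(CARDS, 2))

    # Player 1's information set at the first decision is just their own card.
    info_sets = {}
    for p1_card, p2_card in deals:
        info_sets.setdefault(p1_card, []).append((p1_card, p2_card))

    for card, consistent_deals in sorted(info_sets.items()):
        print(f"Player 1 holds {card}: cannot distinguish deals {consistent_deals}")
    # In chess or Go the full state is visible, so every "information set" would
    # contain exactly one state; in poker a strategy must do well against all
    # deals the player cannot tell apart.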