Even now, all you have to do is Google search 'AI 2017' to find headlines like these: '2017 laid the foundation for faster, smarter AI in 2018' and 'All the creepy, crazy and amazing things that happened in AI in 2017'. AI took the tech industry by storm. Swarm AI correctly predicted TIME's Person of the Year to be Donald Trump, AI moved into the household through the Amazon Echo and Google Home, and Google DeepMind's AlphaGo Zero conquered the 2,000-year-old board game Go through machine learning. If you didn't already know: AlphaGo Zero taught itself the game from scratch without human guidance, using reinforcement learning to surpass the version that defeated world champion Lee Sedol and become the best Go player in the world. In 2017, the poker bot Libratus was the first to beat four top human players at no-limit Texas Hold'em, and American technology company Nvidia created AI that could mimic facial features, handwriting, and voice, producing 'celebrities' that don't even exist. Not everyone was impressed, though; one commenter described it as 'the iceberg that would later reveal the all-conquering and all-powerful force reckoned to control our entire lives – otherwise known as artificial intelligence.'
Computing a good strategy in a large extensive form game often demands an extraordinary amount of computer memory, necessitating the use of abstraction to reduce the game size. Typically, strategies from abstract games perform better in the real game as the granularity of abstraction is increased. This paper investigates two techniques for stitching a base strategy in a coarse abstraction of the full game tree, to expert strategies in fine abstractions of smaller subtrees. We provide a general framework for creating static experts, an approach that generalizes some previous strategy stitching efforts. In addition, we show that static experts can create strong agents for both 2-player and 3-player Leduc and Limit Texas Hold'em poker, and that a specific class of static experts can be preferred among a number of alternatives.
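The abstraction idea in the abstract above can be illustrated with a toy hand-bucketing sketch. Everything here (the crude strength heuristic, the function names, the bucket count) is an illustrative assumption, not the paper's actual method; the point is only that abstraction maps many distinct information states to a few coarse buckets, so a solver works on a much smaller game.

```python
# Toy card abstraction: bucket 2-card hold'em starting hands by a crude
# strength estimate. All names and the heuristic are illustrative only.
from itertools import combinations

RANKS = "23456789TJQKA"

def hand_strength(card_a, card_b):
    """Crude heuristic strength in [0, 1] for a 2-card hand (ranks only)."""
    a, b = sorted((RANKS.index(card_a), RANKS.index(card_b)), reverse=True)
    pair_bonus = 0.5 if a == b else 0.0
    return min(1.0, (a + b) / (2 * 12) + pair_bonus)

def bucket(strength, n_buckets=5):
    """Map a strength in [0, 1] to one of n_buckets coarse buckets."""
    return min(n_buckets - 1, int(strength * n_buckets))

# 78 unpaired rank combinations plus 13 pairs = 91 distinct hands,
# which the abstraction collapses into just a handful of buckets.
hands = list(combinations(RANKS, 2)) + [(r, r) for r in RANKS]
buckets = {bucket(hand_strength(a, b)) for a, b in hands}
print(len(hands), "hands ->", len(buckets), "buckets")
```

Finer granularity (more buckets, or board-aware strength estimates) shrinks the gap between the abstract game and the real one, which is the trade-off the paper's base-strategy-plus-experts stitching is designed to exploit.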
AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook's AI lab and Carnegie Mellon University, has bested some of the world's top players in a series of games of six-person no-limit Texas Hold'em poker. Over 12 days and 10,000 hands, the AI system named Pluribus faced off against 12 pros in two different settings. In one, the AI played alongside five human players; in the other, five versions of the AI played with one human player (the computer programs were unable to collaborate in this scenario). Pluribus won an average of $5 per hand with hourly winnings of around $1,000 -- a "decisive margin of victory," according to the researchers.
As Mr. Elias realized, Pluribus knew when to bluff, when to call someone else's bluff and when to vary its behavior so that other players couldn't pinpoint its strategy. "It does all the things the best players in the world do," said Mr. Elias, 32, who has won a record four titles on the World Poker Tour. "And it does a few things humans have a hard time doing." Experts believe the techniques that drive this and similar systems could be used in Wall Street trading, auctions, political negotiations and cybersecurity, activities that, like poker, involve hidden information. "You don't always know the state of the real world," said Noam Brown, the Facebook researcher who oversaw the Pluribus project.
Artificial intelligence has finally cracked the biggest challenge in poker: beating top professionals in six-player no-limit Texas Hold'Em, the most popular variant of the game. Over 20,000 hands of online poker, the AI beat fifteen of the world's top poker players, each of whom has won more than $1 million USD playing the game professionally. The AI, called Pluribus, was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of Pluribus played against one professional – and did better than the pros in both. Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm at Carnegie Mellon University in the US. It is an improvement on their previous poker-playing AI, called Libratus, which in 2017 outplayed professionals at Heads-Up Texas Hold'Em, a variant of the game that pits two players head to head.
During one experiment, the poker bot Pluribus played against five professional players. In artificial intelligence, it's a milestone when a computer program can beat top players at a game like chess. But a game like poker, specifically six-player Texas Hold'em, has been too tough for a machine to master -- until now. Researchers say they have designed a bot called Pluribus capable of taking on poker professionals in the most popular form of poker and winning.
It knows when to hold 'em and when to fold 'em. And, unlike in the old Kenny Rogers ballad, it didn't need a grizzled cowboy gambler to teach it a trick or two. A poker bot has beaten a table full of pros at six-player, no-limit Texas Hold'em, the version of the game used by most tournaments, over the course of 10,000 hands of play. To master poker at this level, the A.I. learned entirely by playing millions of hands against itself, with no guidance from human card sharks. Among the players the bot, which is called Pluribus, beat were four-time World Poker Tour champion Darren Elias as well as World Series of Poker Main Event champions Chris "Jesus" Ferguson and Greg Merson.
In 2017, a poker bot called Libratus made headlines when it roundly defeated four top human players at no-limit Texas Hold'Em. Now, Libratus's technology is being adapted to take on opponents of a different kind--in service of the US military. Libratus--Latin for balanced--was created by researchers from Carnegie Mellon University to test ideas for automated decision-making based on game theory. Early last year, the professor who led the project, Tuomas Sandholm, founded a startup called Strategy Robot to adapt his lab's game-playing technology for government use, such as in war games and simulations used to explore military strategy and planning. Late in August, public records show, the company received a two-year contract of up to $10 million with the US Army.
As the great Kenny Rogers once said, a good gambler has to know when to hold 'em and know when to fold 'em. At the Rivers Casino in Pittsburgh this week, a computer program called Libratus may finally prove that computers can do this better than any human card player. Libratus is playing thousands of games of heads-up, or two-player, no-limit Texas hold'em against several expert professional poker players. Now a little more than halfway through the 20-day contest, Libratus is up by almost $800,000 against its human opponents. So victory, while far from guaranteed, may well be in the cards.