The most popular variant of poker today, played in casinos and seen on television, is no-limit Texas hold'em. This game and a smaller variant, limit Texas hold'em, have been used as a testbed for artificial intelligence research since 1997. Since 2006, the Annual Computer Poker Competition has let researchers, programmers, and poker players pit their poker programs against one another, revealing which artificial intelligence techniques work best in practice. The competition has driven significant advances in fields such as computational game theory, producing algorithms that can find optimal strategies for games six orders of magnitude larger than was possible with earlier techniques.
But in a new scientific study published on Monday, scientists said we're not paying nearly enough attention to the "prelude" to global extinction -- that is, the dwindling population sizes and ranges of existing species that can be a warning sign of a bigger extinction event to come. In their paper, Dirzo, Ceballos, and Stanford professor Paul Ehrlich suggested that billions of animal populations that once roamed the Earth are gone. A separate 2016 study by the World Wildlife Fund said global populations of vertebrates declined by 58 percent between 1970 and 2012. The authors of Monday's paper said their research shows "Earth's sixth mass extinction has proceeded further than most assume."
It's there you'll find the professors who solved the game of checkers, beat a top human player in the game of Go and used cutting-edge artificial intelligence to outsmart a handful of professional poker players for the very first time. Richard Sutton is a pioneer in a branch of artificial intelligence research known as reinforcement learning -- the computer science equivalent of treat-training a dog, except in this case the dog is an algorithm that's been incentivized to behave in a certain way. U of A computing science professors and artificial intelligence researchers Sutton, Michael Bowling and Patrick Pilarski are working with Google's DeepMind to open the AI company's first research lab outside the U.K., in Edmonton. Last week, Google's AI subsidiary DeepMind announced it was opening its first international office in Edmonton, where Sutton -- alongside professors Bowling and Pilarski -- will work part-time.
Over the past three weeks, an AI poker bot called Libratus has played thousands of games of heads-up, no-limit Texas hold'em against a cadre of top professional players at Rivers Casino in Pittsburgh. Poker requires reasoning and intelligence that have proven difficult for machines to imitate. Artificial intelligence has never beaten top players at a game with as much hidden information as no-limit Texas hold'em. Still, given the progress machine learning is currently making, and the fact that other AI poker bots are also being developed, that seemingly impossible challenge may not remain impossible for long.
Participants in this year's edition of the poker extravaganza will see two changes: no firm "shot clock" and the return of the tradition of crowning the tournament's main event champion in July. Buy-ins for the 74-event tournament, which runs through July 22 at the Rio All-Suite Hotel and Casino, range from $333 to $111,111.
A previous version of the bot defeated several top professional players in a tournament held at a Pittsburgh casino over several weeks this January. A new and improved version of the CMU bot -- called Lengpudashi, which means "cold poker master" in Chinese -- defeated a team made up of poker-playing AI researchers at the Hainan event. Around the same time that CMU's poker bot won in Pittsburgh, another research team, made up of academics from Canada and the Czech Republic, developed a poker-playing algorithm that also defeated several professional players. A related event will involve pairing human players with AlphaGo to explore opportunities for collaborative play.
The University of Alberta's Computer Poker Research Group created DeepStack, an artificial intelligence program that defeated professional human poker players at heads-up, no-limit Texas hold'em. Apart from being the first win of its kind, it bears significance for applications ranging from making better medical treatment recommendations to developing improved strategic defense planning, according to the paper "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker," published in Science. In a similar milestone, on May 11, 1997, Deep Blue, an IBM computer, outsmarted the world chess champion over six games -- the computer had two wins, the champion won one game, and there were three draws. The AI program was pitted against "a pool of professional poker players recruited by the International Federation of Poker."
Doug Polk, one of the world's best poker players, shoveled egg whites into his mouth with a plastic fork and slurped unsweetened oatmeal from a paper cup, 13 days into the oddest tournament he has ever entered. His opponent, Claudico, did not struggle with fatigue, mental breakdown or hunger, despite...
A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold'em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on artificial neural networks that the researchers trained ahead of time to develop poker intuition. Twenty years ago, game-playing AI had a breakthrough when IBM's chess-playing supercomputer Deep Blue defeated World Chess Champion Garry Kasparov. That training allowed DeepStack's neural networks (complex networks of computations that can "learn" over time) to develop general poker intuition that it could apply even in situations it had never encountered before.
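The idea of training a network ahead of time so it can generalize to unseen situations can be illustrated with a minimal sketch. This is not DeepStack's actual architecture or training pipeline; the feature encoding, the toy "value" target, and the network size below are all invented for illustration, showing only the general pattern of supervised pre-training followed by evaluation on situations the network never saw.

```python
import numpy as np

# Hypothetical minimal sketch: fit a tiny feed-forward network that maps
# simplified "poker situation" features to an estimated value, mimicking
# (in spirit only) offline training of a value network. Everything here --
# features, target function, layer sizes -- is an invented toy example.

rng = np.random.default_rng(0)

def make_situations(n):
    # Toy features: [own hand strength, pot size, fraction of stack committed]
    x = rng.random((n, 3))
    # Invented target: stronger hands and bigger pots -> higher value
    y = (x[:, 0] - 0.5) * (1.0 + x[:, 1]) - 0.1 * x[:, 2]
    return x, y[:, None]

# One hidden layer; trained by full-batch gradient descent on squared error.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

X, Y = make_situations(2000)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    P = H @ W2 + b2                       # predicted value
    err = P - Y
    # Backpropagate through the two layers
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)      # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate on freshly generated situations never seen during training:
# a low error here is the toy analogue of "intuition" that generalizes.
Xt, Yt = make_situations(200)
mse = float(np.mean((np.tanh(Xt @ W1 + b1) @ W2 + b2 - Yt) ** 2))
print(f"held-out MSE: {mse:.4f}")
```

The held-out evaluation is the point of the sketch: the network is scored only on situations it was never trained on, standing in for DeepStack applying its learned intuition to novel game states.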