poker


AI: The pattern is not in the data, it's in the machine

ZDNet

A neural network transforms input, the circles on the left, to output, on the right. How that happens is a transformation of weights, center, which we often confuse for patterns in the data itself. It's a commonplace of artificial intelligence to say that machine learning, which depends on vast amounts of data, functions by finding patterns in data. The phrase, "finding patterns in data," in fact, has been a staple phrase of things such as data mining and knowledge discovery for years now, and it has been assumed that machine learning, and its deep learning variant especially, are just continuing the tradition of finding such patterns. AI programs do, indeed, result in patterns, but, just as "The fault, dear Brutus, lies not in our stars but in ourselves," the fact of those patterns is not something in the data, it is what the AI program makes of the data.
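The article's point can be made concrete with a hypothetical toy experiment (not from the article itself): train the same linear model twice on the same data with redundant features, starting from different random initializations. Both runs fit the data perfectly, yet end up with different weights, showing that the "pattern" is a product of the training procedure, not something residing uniquely in the data.

```python
import random

# Toy sketch: same data, two random initializations, two different
# learned weight vectors that fit the data equally well. The "pattern"
# is produced by the machine, not read off the data.

def train_linear(data, seed, lr=0.1, epochs=500):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]
    for _ in range(epochs):
        for (x0, x1), y in data:
            pred = w[0] * x0 + w[1] * x1
            err = pred - y
            # Plain stochastic gradient descent on squared error.
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
    return w

# Redundant features (x1 == x0): every weight vector with
# w[0] + w[1] == 2 reproduces the labels exactly.
data = [((x, x), 2 * x) for x in (0.5, 1.0, 1.5)]
w_a = train_linear(data, seed=1)
w_b = train_linear(data, seed=2)
```

Both runs drive `w[0] + w[1]` to 2, but the split between the two weights depends on the initialization, so `w_a` and `w_b` disagree despite fitting identical data.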


Three former DeepMinders are developing A.I. to pick stocks and crypto

#artificialintelligence

Three former DeepMind employees are trying to train a machine to spot and invest in company stocks and cryptocurrencies before they rise. Martin Schmid, Rudolf Kadlec and Matej Moravcik left Alphabet-owned DeepMind in January to set up EquiLibre Technologies, relocating from Edmonton in Canada to Prague in the Czech Republic in the process. The trio all used to work at IBM and in 2017 they developed an AI called DeepStack. It became the first AI capable of beating professional poker players at heads-up no-limit Texas hold'em poker. Now they're looking to apply some of these concepts to financial markets.


Online Poker - Bitcoin Z Texas Holdem Poker

#artificialintelligence

Online poker with the #1 Bitcoin Z poker game. Play poker online 24/7 with the official Bitcoin Z Poker game! Texas hold'em, and more poker games coming! Bitcoin Z Texas Holdem is the only place where players can win cryptomoney. Play now!


Fast Algorithms for Poker Require Modelling it as a Sequential Bayesian Game

arXiv.org Artificial Intelligence

Many recent results in imperfect information games were only formulated for, or evaluated on, poker and poker-like games such as liar's dice. We argue that sequential Bayesian games constitute a natural class of games for generalizing these results. In particular, this model allows for an elegant formulation of the counterfactual regret minimization algorithm, called public-state CFR (PS-CFR), which naturally lends itself to an efficient implementation. Empirically, solving a poker subgame with 10^7 states by public-state CFR takes 3 minutes and 700 MB while a comparable version of vanilla CFR takes 5.5 hours and 20 GB. Additionally, the public-state formulation of CFR opens up the possibility for exploiting domain-specific assumptions, leading to a quadratic reduction in asymptotic complexity (and a further empirical speedup) over vanilla CFR in poker and other domains. Overall, this suggests that the ability to represent poker as a sequential Bayesian game played a key role in the success of CFR-based methods. Finally, we extend public-state CFR to general extensive-form games, arguing that this extension enjoys some - but not all - of the benefits of the version for sequential Bayesian games.
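The core mechanism behind CFR-style algorithms like the PS-CFR described above is regret matching: play each action in proportion to its positive cumulative regret, and average strategies over time. The following toy sketch (self-play rock-paper-scissors, not the paper's public-state CFR implementation) illustrates that update; the average strategies converge toward the Nash equilibrium of (1/3, 1/3, 1/3).

```python
import random

NUM_ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: payoff to the player choosing a against b.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def get_strategy(regrets):
    # Regret matching: probability proportional to positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / NUM_ACTIONS] * NUM_ACTIONS

def train(iterations=100_000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * NUM_ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * NUM_ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [get_strategy(r) for r in regrets]
        actions = [rng.choices(range(NUM_ACTIONS), weights=s)[0]
                   for s in strategies]
        for p in range(2):
            mine, theirs = actions[p], actions[1 - p]
            played = PAYOFF[mine][theirs]
            # Regret: how much better each alternative would have done
            # against the opponent's actual action.
            for a in range(NUM_ACTIONS):
                regrets[p][a] += PAYOFF[a][theirs] - played
                strategy_sum[p][a] += strategies[p][a]
    # The time-averaged strategy is what converges to equilibrium.
    return [[x / sum(s) for x in s] for s in strategy_sum]

avg_strategies = train()
```

Full CFR applies this same regret-matching step at every information set of a sequential game; the paper's public-state formulation groups those updates by public state for efficiency.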


DeepMind makes bet on AI system that can play poker, chess, Go, and more

#artificialintelligence

DeepMind, the AI lab backed by Google parent company Alphabet, has long invested in game-playing AI systems. It's the lab's philosophy that games, while lacking an obvious commercial application, are uniquely challenging tests of cognitive and reasoning capabilities. This makes them useful benchmarks of AI progress. In recent decades, games have given rise to the kind of self-learning AI that powers computer vision, self-driving cars, and natural language processing. In a continuation of its work, DeepMind has created a system called Player of Games, which the company first revealed in a research paper published on the preprint server arXiv.org this week.


Player of Games

arXiv.org Artificial Intelligence

Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games -- an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect information game that illustrates the value of guided search, learning, and game-theoretic reasoning.


Human strategic decision making in parametrized games

arXiv.org Artificial Intelligence

Strong algorithms have been developed for game classes with many elements of complexity. For example, algorithms were recently able to defeat human professional players in 2-player [16, 3] and 6-player no-limit Texas hold'em [4]. These games have imperfect information, sequential actions, very large state spaces, and the latter has more than two players (solving multiplayer games is more challenging than two-player zero-sum games from a complexity-theoretic perspective). However, these algorithms all require an extremely large amount of computational resources for offline and/or online computations and for optimizing neural network hyperparameters. The algorithms also have a further limitation in that they are using all these resources just to solve for one very specific version of the game (e.g., Libratus and DeepStack assumed that all players start the hand with 200 times the big blind, and Pluribus assumed that all players start the hand with 100 times the big blind).


ELO System for Skat and Other Games of Chance

arXiv.org Artificial Intelligence

Assessing the skill level of players to predict the outcome and to rank the players in a longer series of games is of critical importance for tournament play. Despite weaknesses, such as a continuous rating inflation observed as the playing body steadily grows, the ELO ranking system, named after its creator Arpad Elo, has proven to be a reliable method for calculating the relative skill levels of players in zero-sum games. The evaluation of player strength in trick-taking card games like Skat or Bridge, however, is not obvious. Firstly, these are incomplete information, partially observable games with more than one player, where opponent strength should influence the scoring as it does in existing ELO systems. Secondly, they are games of both skill and chance, so that besides the playing strength, the outcome of a game also depends on the deal. Last but not least, there are internationally established scoring systems in which players are accustomed to being evaluated, and with which an ELO system should align. Based on a tournament scoring system, we propose a new ELO system for Skat to overcome these weaknesses.
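For reference, the classic two-player ELO update the abstract builds on can be sketched as follows. This is the standard logistic-model formulation only, not the Skat-specific system the paper proposes; the K-factor value is illustrative.

```python
def expected_score(rating_a, rating_b):
    # Logistic model: expected score of A against B. A 400-point
    # rating gap corresponds to 10:1 odds.
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, score_a, k=20.0):
    # score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Equal ratings, A wins: A gains k/2 points and B loses k/2.
new_a, new_b = elo_update(1500.0, 1500.0, 1.0)
```

The update is zero-sum (the winner's gain equals the loser's loss), which is precisely the property that breaks down for games of chance with more than two players, motivating the paper's Skat-specific redesign.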