backgammon


Quantifying Skill and Chance: A Unified Framework for the Geometry of Games

Silver, David H.

arXiv.org Artificial Intelligence

We introduce a quantitative framework for separating skill and chance in games by modeling them as complementary sources of control over stochastic decision trees. We define the Skill-Luck Index S(G) in [-1, 1] by decomposing game outcomes into skill leverage K and luck leverage L. Applying this to 30 games reveals a continuum from pure chance (coin toss, S = -1) through mixed domains such as backgammon (S = 0, Sigma = 1.20) to pure skill (chess, S = +1, Sigma = 0). Poker exhibits moderate skill dominance (S = 0.33) with K = 0.40 +/- 0.03 and Sigma = 0.80. We further introduce volatility Sigma to quantify outcome uncertainty over successive turns. The framework extends to general stochastic decision systems, enabling principled comparisons of player influence, game balance, and predictive stability, with applications to game design, AI evaluation, and risk assessment.
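The abstract does not state the formula for S(G). One simple definition consistent with every value it reports is the normalized contrast S = (K - L)/(K + L); the sketch below assumes that form, which may differ from the paper's actual definition.

```python
def skill_luck_index(k: float, l: float) -> float:
    """Skill-Luck Index S in [-1, 1] from skill leverage k and luck
    leverage l. The normalized contrast (k - l) / (k + l) is an assumed
    form; it reproduces the endpoints and the poker value reported in
    the abstract, but the paper's exact definition may differ."""
    if k == 0 and l == 0:
        raise ValueError("at least one leverage must be nonzero")
    return (k - l) / (k + l)

print(skill_luck_index(0.0, 1.0))              # coin toss, pure chance: -1.0
print(skill_luck_index(1.0, 0.0))              # chess, pure skill: 1.0
print(skill_luck_index(0.5, 0.5))              # backgammon, balanced: 0.0
print(round(skill_luck_index(0.40, 0.20), 2))  # poker: 0.33
```

With the reported poker values K = 0.40 and S = 0.33, this form implies a luck leverage of about 0.20, which is the figure used above.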


A General Retrieval-Augmented Generation Framework for Multimodal Case-Based Reasoning Applications

Marom, Ofir

arXiv.org Artificial Intelligence

Case-based reasoning (CBR) is an experience-based approach to problem solving, where a repository of solved cases is adapted to solve new cases. Recent research shows that Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) can support the Retrieve and Reuse stages of the CBR pipeline by retrieving similar cases and using them as additional context to an LLM query. Most studies have focused on text-only applications, however, in many real-world problems the components of a case are multimodal. In this paper we present MCBR-RAG, a general RAG framework for multimodal CBR applications. The MCBR-RAG framework converts non-text case components into text-based representations, allowing it to: 1) learn application-specific latent representations that can be indexed for retrieval, and 2) enrich the query provided to the LLM by incorporating all case components for better context. We demonstrate MCBR-RAG's effectiveness through experiments conducted on a simplified Math-24 application and a more complex Backgammon application. Our empirical results show that MCBR-RAG improves generation quality compared to a baseline LLM with no contextual information provided.
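The Retrieve stage of this pipeline can be sketched with a toy bag-of-words similarity. This is a minimal illustration, not the paper's implementation: MCBR-RAG learns application-specific latent representations, and the function names and Math-24 cases below are hypothetical.

```python
import math
from collections import Counter

def to_text(case: dict) -> str:
    """Flatten a case into one text string. Non-text components are
    assumed to already have text renderings (e.g. a board serialized as
    a string); the real framework learns richer representations."""
    return " ".join(str(v) for v in case.values())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: dict, case_base: list[dict], k: int = 2) -> list[dict]:
    """Return the k solved cases most similar to the query, to be
    included as additional context in the LLM prompt (the Reuse stage)."""
    q = Counter(to_text(query).lower().split())
    return sorted(case_base,
                  key=lambda c: cosine(q, Counter(to_text(c).lower().split())),
                  reverse=True)[:k]

# Toy Math-24-style case base (hypothetical data):
cases = [
    {"numbers": "4 6 8 8", "solution": "(8 - 4) * 8 - 8 = 24"},
    {"numbers": "1 2 3 4", "solution": "1 * 2 * 3 * 4 = 24"},
]
context = retrieve({"numbers": "4 6 8 8"}, cases, k=1)
prompt = f"Similar solved cases: {context}\nSolve: 4 6 8 8"
```

The enriched prompt then carries both the new case and the retrieved precedent, mirroring the Retrieve-then-Reuse flow the abstract describes.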


Games of Knightian Uncertainty as AGI testbeds

Samothrakis, Spyridon, Soemers, Dennis J. N. J., Machlanski, Damian

arXiv.org Artificial Intelligence

Arguably, for the late 20th and early 21st centuries, games have been seen as the drosophila of AI. Games are a set of exciting testbeds, whose solutions (in terms of identifying optimal players) would lead to machines that possess some form of general intelligence, or at the very least help us gain insights toward building intelligent machines. Following impressive successes in traditional board games like Go, Chess, and Poker, but also video games like the Atari 2600 collection, it is clear that this promise has not materialized. Games have been attacked successfully, but we are nowhere near AGI developments (or, as harsher critics might say, useful AI developments!). In this short vision paper, we argue that for game research to become relevant again to the AGI pathway, we need to be able to address Knightian uncertainty in the context of games, i.e. agents need to be able to adapt to rapid changes in game rules on the fly, with no warning, no previous data, and no model access.
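A minimal toy version of such a testbed, assuming the simplest possible rule change (the scoring rule flips at a turn hidden from the agent), might look like this; all names here are hypothetical:

```python
class ShiftingRulesGame:
    """Toy testbed with a rule change the agent is never told about:
    the scoring rule flips at a hidden turn, so past data and any fixed
    model of the game become stale without warning."""

    def __init__(self, flip_turn: int):
        self.flip_turn = flip_turn  # hidden from the agent
        self.turn = 0

    def reward(self, action: int) -> int:
        self.turn += 1
        # Before the flip, action 1 scores; afterwards, action 0 does.
        good = 1 if self.turn < self.flip_turn else 0
        return 1 if action == good else 0


def adaptive_agent(game: ShiftingRulesGame, turns: int) -> int:
    """Greedy agent: keeps its action while it scores, switches otherwise."""
    action, total = 1, 0
    for _ in range(turns):
        r = game.reward(action)
        total += r
        if r == 0:  # the rules may have changed: adapt on the fly
            action = 1 - action
    return total


print(adaptive_agent(ShiftingRulesGame(flip_turn=5), 10))  # scores 9/10
```

An agent evaluated this way is scored on how quickly it recovers after the unannounced change, rather than on mastery of a fixed rule set.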


'I craved a bite-size experience': Ben Brode on the making of Marvel Snap

The Guardian

There's a lot that is surprising about Marvel Snap, the new free-to-play digital card game from one of the minds behind Hearthstone (and the money behind TikTok). A match takes just five minutes. Both players play their cards at the same time. Perhaps the biggest surprise, as the game launches its second monthly season, is that it's really, really good. I spoke to Ben Brode, the co-founder of Snap's developer Second Dinner, about what Snap is, how the team set out to fix the problems of existing trading card games, and where they're going from here. The rules of Marvel Snap are endearingly simple, especially compared with the complexity typical of card games.


Artificial intelligence can learn to play a complex war game

#artificialintelligence

In the world of game theory, we refer to games such as Catan, Risk, and Civilization 6 as large-scale strategy games. The defining trait of these games is their massive number of components and the ways those components interact. Games often give players the option to compete against the computer. These computer players are called artificial intelligences (AIs). The purpose of these AIs is to give players an equal challenge.


Temporal difference learning and TD-Gammon

AITopics Original Links

Complex board games are a natural testing ground for machine learning and artificial intelligence. They are based on experience; they are attractive; and they do not have the safety requirements that sometimes block the use of heuristic methods. Despite recent advances, computer chess seems not to be a success of machine learning as such, because of its reliance on brute force search rather than "intelligent" approaches. This paper presents an interesting example of an opposite situation, the game-learning program TD-Gammon. TD-Gammon is a neural network that trains itself to play backgammon by playing against itself and learning from the outcome.
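The temporal-difference principle behind TD-Gammon can be illustrated on a toy problem. This is a tabular TD(0) sketch on a standard 5-state random walk, not Tesauro's neural-network setup, and the constants are illustrative:

```python
import random

def td0_random_walk(episodes: int = 5000, alpha: float = 0.1, seed: int = 0):
    """Tabular TD(0) on a 5-state random walk (states 1..5; terminals 0
    and 6; reward 1 only on reaching 6). TD-Gammon applied the same
    temporal-difference update, but with a neural network evaluating
    backgammon positions generated by self-play instead of this table."""
    rng = random.Random(seed)
    v = [0.0] * 7  # terminal values v[0] and v[6] stay fixed at 0
    for _ in range(episodes):
        s = 3  # every episode starts in the middle state
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))
            reward = 1.0 if s2 == 6 else 0.0
            v[s] += alpha * (reward + v[s2] - v[s])  # TD(0) error update
            s = s2
    return v[1:6]  # true values: 1/6, 2/6, 3/6, 4/6, 5/6
```

Each update nudges a state's value toward the reward plus the value of the state that follows it, which is exactly the "learning from the outcome" that the summary above describes, applied one step at a time.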


Building an AI that Can Beat You at Your Own Game – Towards Data Science

#artificialintelligence

The full instructions are here, and a sample game is here. AIs are now better than humans at Backgammon, Checkers, Chess, Othello, and Go. See Andrey Kurenkov's "A 'Brief' History of Game AI Up to AlphaGo" for a more in-depth timeline. In 2017, Michael Tucker, Nikhil Prabala, and I set out to create PAI, the world's first AI for Pathwayz. The AIs for Othello and Backgammon were especially relevant to our development of PAI. Othello, like Pathwayz, is a relatively young game -- at least compared to the ancient Backgammon, Checkers, Chess, and Go.


What You Need to Know About Machine Learning - Part 2 - Phrasee

#artificialintelligence

Note: If you have already read part 1 of this series, you are already well on your way to becoming a machine learning expert. If not, you should read it now. When considering machine learning as a concept, it is important to remember that it is a complex field. One that's rife with categories and subcategories, with yet more subcategories being added by the day. To delve too deeply into all of these would be to curse you, dear reader, to several torturous hours of maths and more maths until you would simply give up and decide to watch YouTube videos about X-rays of objects found in people's butts.


A Conversation with Christos Papadimitriou

AITopics Original Links

Christos Papadimitriou, the C. Lester Hogan Professor of Electrical Engineering and Computer Science at the University of California at Berkeley, is this year's recipient of the Katayanagi Prize for Research Excellence. Carnegie Mellon University has cited Dr. Papadimitriou as "an internationally recognized expert on the theory of algorithms and complexity, and its applications to databases, optimization, artificial intelligence, networks and game theory." We recently spoke with Papadimitriou; among other topics, we delved into the underpinnings of science, the economics of the programming market, the mysterious complexity of the Web, quantum computing, and the computer scientist as popular novelist. Next month, we talk with Dr. Erik Demaine, recipient of this year's Katayanagi Emerging Leadership Prize. CP: I didn't know I had been nominated. She mentioned the previous winner, so I thought someone else had won the prize and that I was invited to speak at the ceremony. I replied, "Yeah, okay, let me think about it, give me a week..." She wrote back in astonishment, thinking I was not accepting the prize!


Before AlphaGo there was TD-Gammon -- Jim Fleming

#artificialintelligence

Check out the GitHub repo for an implementation of TD-Gammon with TensorFlow. A few weeks ago AlphaGo won a historic tournament playing the game of Go against Lee Sedol, one of the top Go players in the world. Many people have compared AlphaGo to Deep Blue, which won a series of famous chess matches against Garry Kasparov, but a different comparison may be made for the game of backgammon. Before DeepMind tackled playing Atari games or built AlphaGo, there was TD-Gammon, the first algorithm to reach an expert level of play in backgammon. Gerald Tesauro published his paper in 1992 describing TD-Gammon as a neural network trained with reinforcement learning.