Collaborating Authors


On limitations of learning algorithms in competitive environments Artificial Intelligence

Playing human games such as chess and Go has long been considered a major benchmark of human capabilities. Computer programs have become robust chess players and, since the late 1990s, have been able to beat even the best human chess champions; for a long time, though, computers were unable to beat expert Go players -- the game of Go proved especially difficult for computers. However, in 2016, a new program called AlphaGo finally won a victory over a human Go champion, only to be beaten by its subsequent versions (AlphaGo Zero and AlphaZero). AlphaZero proceeded to beat the best computers and humans in chess, shogi and Go, including all its predecessors from the Alpha family [1]. Core to AlphaZero's success is its use of a deep neural network, trained through reinforcement learning, as a powerful heuristic to guide a tree search algorithm (specifically Monte Carlo Tree Search). The recent successes of machine learning are good reason to consider the limitations of learning algorithms and, in a broader sense, the limitations of AI. In the context of a particular competition (or 'game'), a natural question to ask is whether an absolute winner AI might exist -- one that, given sufficient resources, will always achieve the best possible outcome.
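The abstract's point about the network guiding the tree search can be made concrete. A minimal sketch of the AlphaZero-style selection rule (often called PUCT), where the network's policy prior P(s,a) biases exploration and Q(s,a) is the running average of simulation values; the constant and the numbers below are illustrative, not taken from the paper:

```python
import math

def puct_score(q, p, n_parent, n_child, c_puct=1.5):
    """Score used to pick which child move to descend into during search.

    q        -- mean value of simulations through this move so far
    p        -- the neural network's prior probability for this move
    n_parent -- visit count of the parent position
    n_child  -- visit count of this move
    """
    return q + c_puct * p * math.sqrt(n_parent) / (1 + n_child)

# At each node the search picks the move maximizing this score: an
# unvisited move with a strong prior can outrank a visited move with a
# mediocre averaged value, which is how the network steers the search.
print(puct_score(q=0.0, p=0.6, n_parent=100, n_child=0))   # 9.0
print(puct_score(q=0.2, p=0.1, n_parent=100, n_child=20))  # ~0.271
```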

A Novel Machine Learning Method for Preference Identification Artificial Intelligence

Human preference or taste within any domain is usually a difficult thing to identify or predict with high probability. In the domain of chess problem composition, the same is true. Traditional machine learning approaches tend to focus on the ability of computers to process massive amounts of data and continuously adjust 'weights' within an artificial neural network to better distinguish between, say, two groups of objects. With chess compositions, by contrast, there is no clear distinction between what constitutes one and what does not, even less so between a good one and a poor one. We propose a computational method that is able to learn from existing databases of 'liked' and 'disliked' compositions such that a new and unseen collection can be sorted with increased probability of matching a solver's preferences. The method uses a simple 'change factor' relating to the Forsyth-Edwards Notation (FEN) of each composition's starting position, coupled with repeated statistical analysis of sample pairs from both databases. Tested using the author's own collections of computer-generated chess problems, the experimental results showed that the method was able to sort a new and unseen collection of compositions such that, on average, over 70% of the preferred compositions were in the top half of the collection. This saves significant time and energy on the part of solvers, as they are likely to find more of what they like sooner. The method may even be applicable to other domains such as image processing because it does not rely on any chess-specific rules but rather just a sufficient and quantifiable 'change' in representation from one object to the next.
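The abstract does not publish the exact 'change factor' formula, so the following is only a hypothetical sketch of what a quantifiable FEN-based change measure could look like: the fraction of the 64 squares whose contents differ between two positions.

```python
def board_squares(fen: str) -> list:
    """Expand the board field of a FEN string into a 64-square list."""
    board_field = fen.split()[0]
    squares = []
    for ch in board_field:
        if ch == "/":
            continue                          # rank separator
        if ch.isdigit():
            squares.extend(["."] * int(ch))   # run of empty squares
        else:
            squares.append(ch)                # piece letter
    return squares

def change_factor(fen_a: str, fen_b: str) -> float:
    """Fraction of the 64 squares that differ between two positions."""
    a, b = board_squares(fen_a), board_squares(fen_b)
    return sum(x != y for x, y in zip(a, b)) / 64.0

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
after_e4 = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e4 0 1"
print(change_factor(start, after_e4))  # 2/64: the pawn left e2, landed on e4
```

Because the measure only compares representations square by square, it carries no chess-specific knowledge, which is the property the abstract credits for possible transfer to other domains.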

Creating A Chess AI using Deep Learning


When Garry Kasparov was dethroned by IBM's Deep Blue chess algorithm, the algorithm did not use machine learning, at least not in the way that we define machine learning today. This article aims to create a successful chess AI using neural networks, a newer form of machine learning algorithm. Using a chess dataset with over 20,000 instances (contact at for dataset), the neural network should output a move when given a chess board. These libraries are the prerequisites to create the program: os and pandas to access the dataset, python-chess as an "instant" chess board to test the neural network, and NumPy to perform matrix manipulation.
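A minimal sketch of that setup: python-chess supplies the board, and NumPy turns a position into a matrix a neural network can consume. The 12-plane encoding below is one common choice for illustration, not necessarily the article's exact representation.

```python
import chess
import numpy as np

def board_to_tensor(board: chess.Board) -> np.ndarray:
    """Encode a position as a 12x8x8 tensor: one plane per piece type per color."""
    tensor = np.zeros((12, 8, 8), dtype=np.float32)
    for square, piece in board.piece_map().items():
        # planes 0-5: white pawn..king, planes 6-11: black pawn..king
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        tensor[plane, square // 8, square % 8] = 1.0
    return tensor

board = chess.Board()                # the "instant" chess board
board.push_san("e4")                 # play a move to test it
x = board_to_tensor(board)
print(x.shape, int(x.sum()))        # (12, 8, 8) 32 -- all 32 pieces encoded
```

From here, a network trained on the dataset would map tensors like `x` to a move; python-chess's `board.legal_moves` can then filter the network's output down to legal candidates.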

Finite Group Equivariant Neural Networks for Games Machine Learning

Games such as Go, chess and checkers have multiple equivalent game states, i.e. multiple board positions where symmetrical and opposite moves should be made. These equivalences are not exploited by current state-of-the-art neural agents, which instead must relearn similar information, thereby wasting computing time. Group equivariant CNNs in existing work create networks which can exploit symmetries to improve learning; however, they lack the expressiveness to correctly reflect the move embeddings necessary for games. We introduce Finite Group Neural Networks (FGNNs), a method for creating agents with an innate understanding of these board positions. FGNNs are shown to improve the performance of networks playing checkers (draughts), and can be easily adapted to other games and learning problems. Additionally, FGNNs can be created from existing network architectures. These include, for the first time, those with skip connections and arbitrary layer types. We demonstrate that an equivariant version of U-Net (FGNN-U-Net) outperforms the unmodified network in image segmentation.
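The equivalences the abstract describes are easy to enumerate. This is an illustrative sketch (not the FGNN architecture itself) of the eight symmetries of a square board, the dihedral group D4 of four rotations with an optional reflection; an equivariant network must map each transformed board to the correspondingly transformed move distribution rather than relearn each orientation separately.

```python
import numpy as np

def d4_orbit(board: np.ndarray) -> list:
    """Return all 8 dihedral-group transforms of a square board array."""
    orbit = []
    for k in range(4):
        rotated = np.rot90(board, k)   # rotate by k * 90 degrees
        orbit.append(rotated)
        orbit.append(np.fliplr(rotated))  # the reflected counterpart
    return orbit

board = np.arange(9).reshape(3, 3)   # toy 3x3 "board" with distinct squares
orbit = d4_orbit(board)
print(len(orbit))                    # 8 equivalent states per position
```

A plain CNN treats those eight arrays as eight unrelated inputs; exploiting the orbit structure is what saves the relearning the abstract refers to.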

Taming the AI beast


What King Kong, in all its remakes, can teach us about AI strategy for business. Why you need a diversity of thinking, including sceptics, and why learning about AI and its implications is a survival hint for the adventure. I am old enough to remember when the 1976 version was the NEW King Kong and we all marveled at the advances in special effects since the original 1933 classic. Of course, the new, new version (15 years old already) takes another technological leap forward. What I find captivating about King Kong, however, is not its special effects.

GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.

An imprisoned bishop Highly Evolved Leela vs Mighty Stockfish TCEC Season 17 Rd 34


FIDE CM Kingscrusher goes over a game featuring an imprisoned bishop: Highly Evolved Leela vs Mighty Stockfish, TCEC Season 17 Rd 34. Kingscrusher goes over amazing games of chess every day, with a recent focus on chess champions such as Magnus Carlsen, and on games by neural networks, which are opening up new concepts for how chess could be played more effectively. The game qualities Kingscrusher looks for are generally amazing games with some awesome or astonishing features to them. Many brilliant games are played every year in chess, and this channel helps to find and explain them in a clear way. There are classic games, crushing and dynamic games. There are exceptionally elegant games.

Weighing the Trade-Offs of Explainable AI


In 1997, IBM supercomputer Deep Blue made a move against chess champion Garry Kasparov that left him stunned. The computer's choice to sacrifice one of its pieces seemed so inexplicable to Kasparov that he assumed it was a sign of the machine's superior intelligence. Shaken, he went on to resign his series against the computer, even though he had the upper hand. Fifteen years later, however, one of Deep Blue's designers revealed that the fateful move wasn't the sign of advanced machine intelligence -- it was the result of a bug. Today, no human can beat a computer at chess, but the story still underscores just how easy it is to blindly trust AI when you don't know what's going on.

Why The Retirement Of Lee Se-Dol, Former 'Go' Champion, Is A Sign Of Things To Come


South Korean professional Go player Lee Se-Dol after the match against Google's artificial intelligence program AlphaGo, on March 10, 2016 in Seoul, South Korea. In May 1997, IBM's Deep Blue supercomputer defeated the reigning world chess champion, Garry Kasparov, in an official match under tournament conditions. Fast forward to 2011: IBM extended its work in machine learning, natural language processing, and information retrieval to build Watson, a system capable of defeating two highly decorated Jeopardy champions, Brad Rutter and Ken Jennings. The progress of gaming innovation in the field of artificial intelligence was swift, but it wasn't until the introduction of Google DeepMind's AlphaGo in 2016 that things started to change dramatically. The AlphaGo supercomputer tackled the notion that Go, an ancient Chinese board game invented thousands of years ago, was unsolvable due to the near-limitless combinations of moves that a player can execute.

Artificial Intelligence, Deep Learning, and How it Applies to Entertainment


In 1955, computer scientist John McCarthy coined the term artificial intelligence. Just five years before, English mathematician Alan Turing had posed the question, "Can machines think?" Turing proposed a test: could a computer be built that is indistinguishable from a human? This test, often referred to as the Turing Test, has sparked the imagination of AI researchers ever since and has been a key idea in the field. In the late 1990s, artificial intelligence made its mark again, when IBM's Deep Blue beat the world chess champion Garry Kasparov. Since then, advances in computing power and data accumulation have led to a proliferation of new technologies driven by artificial intelligence.