Chess is one of the world's most popular games. Its popularity and complexity make it an interesting research domain for artificial intelligence. The number of possible games that can unfold from the initial position is larger than the number of atoms in the observable universe! Chess-playing machines have been the subject of human interest for hundreds of years, but only in the last few decades have they been able to compete with (and beat) the world champions. Chess programs now have their own tournaments.
Championship tournaments for computer chess engines moved from onsite competition to online well before many human tournaments made the move last year in response to the COVID-19 pandemic. In recent years the Top Chess Engine Championship (TCEC), which has been played virtually since 2010, has become the unofficial world computer chess championship.
Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett's Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman's equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work on artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research.
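To make the Bellman formulation concrete: treating a position $s$ as a state, a legal move $a$ as an action, and $V(s)$ as the probability of eventually winning from $s$, the optimality condition can be sketched (under a standard Markov-decision-process framing, not necessarily the paper's exact notation) as

$$V(s) = \max_{a \in \mathcal{A}(s)} \; \mathbb{E}\left[\, V(s') \mid s, a \,\right],$$

where $\mathcal{A}(s)$ is the set of legal moves in $s$, $s'$ is the position reached after the opponent's reply, and terminal positions are assigned $V = 1$ for a win and $V = 0$ otherwise.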
Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory, there must be a solution, a right procedure in any position. ---John von Neumann. The advent of neural-network-based chess engines, such as AlphaZero, LCZero, and Stockfish 14 NNUE, provides us with the ability to study optimal play. AI chess algorithms rely on pattern matching, efficient search, and data-centric methods rather than hand-crafted rules. Together with an objective function based on maximizing the probability of winning, we can now see what optimal play and strategies look like. One caveat is the black-box nature of these algorithms and the lack of insight into the features that are learned empirically from self-play.
The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade including a remarkably wide array of applications, having already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet, AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
Unlike tic-tac-toe or checkers, in which optimal play leads to a draw, it is not known whether optimal play in chess ends in a win for White, a win for Black, or a draw. But if, after White's first move, Black plays two moves in a row, White then plays two moves in a row, and strict alternation resumes thereafter, play is more balanced, because White no longer always ties or leads in the count of moves made. Symbolically, Balanced Alternation gives the following move sequence: after White's (W) initial move, first Black (B) and then White each have two moves in a row (BBWW), followed by the alternating sequence, beginning with W, which altogether can be written as WB/BW/WB/WB/WB... (the slashes separate alternating pairs of moves). Except for the reversal of the 3rd and 4th moves from WB to BW, this is the standard chess sequence. Because Balanced Alternation lies between the standard sequence, which favors White, and a comparable sequence that favors Black, it is highly likely to produce a draw with optimal play, rendering chess fairer. This conclusion is supported by a computer analysis of chess openings and how they would play out under Balanced Alternation.
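The reordering described above is small enough to sketch in code. A minimal illustration (the function names are ours, not taken from the study):

```python
def standard_sequence(n):
    """First n moves of ordinary chess: W, B, W, B, ..."""
    return ["W" if i % 2 == 0 else "B" for i in range(n)]

def balanced_alternation(n):
    """First n moves under Balanced Alternation: identical to the
    standard sequence except that the 3rd and 4th moves are swapped,
    yielding W, B, B, W, W, B, W, B, ..."""
    seq = standard_sequence(n)
    if n >= 4:
        seq[2], seq[3] = seq[3], seq[2]  # reverse the WB pair to BW
    return seq

print("".join(balanced_alternation(8)))  # WBBWWBWB
```

The single swap makes explicit why the two schedules agree everywhere except at moves three and four.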
On the other side was a new program called AlphaZero (the "zero" meaning no human knowledge in the loop), a chess engine in some ways very much weaker than Stockfish--powering through just 1/100th as many moves per second as its opponent. The AI engine won the match (winning 28 games and drawing the rest) with dazzling sacrifices, risky moves, and a beautiful style that was completely new to the world of computer chess. British chess grandmaster Matthew Sadler and mathematician and chessmaster Natasha Regan are still piecing together how AlphaZero's strategy works in their new book, Game Changer. We're breaking open two moves in just one of the games to show the aggressive style, what it does, and what humans can learn from our new chess champion. By move 42, AlphaZero has sacrificed even more pawns, and is marching another poor, disposable sucker toward oblivion.
I have come to the personal conclusion that while all artists are not chess players, all chess players are artists. Originally called Chaturanga, the game was set on an 8x8 Ashtāpada board and had two fundamental features that still distinguish the game today: different pieces subject to different rules of movement, and the presence of a single king piece whose fate determines the outcome. But it was not until the 15th century, with the introduction of the queen piece and the popularization of various other rules, that the game developed into the form we know today. The emergence of international chess competition in the late 19th century meant that the game took on a new geopolitical importance.
The term Artificial Intelligence was coined about 70 years ago, when thinking machines were still the stuff of fantasy fiction, and for roughly the next four decades not much moved. Then, in 1997, like a bolt from the blue, IBM's Deep Blue defeated world chess champion Garry Kasparov 3½–2½ in a six-game series. Since then, machines have beaten humans at far more complex games: Go, Poker, Dota 2. Computing power grew over a trillion times in the last 50 years. Can you name any industry or trend that has evolved by this order of magnitude? The computer that helped navigate Apollo 11's moon landing had the power of two Nintendo consoles. You have a lot more power in your smartphone today.
If you're planning on teaching a computer to play chess, it is often helpful to start with the foundational building block of the AI: the chess board representation. This is the component that keeps track of the state of the game and provides the basis for further position evaluation. There are a number of different programming languages, libraries, and software applications which are considered good for building computer chess programs. Python is usually the most loved language among data scientists, but I decided to write my own chess board representation in PHP from scratch since I spotted an opportunity to do something new in the PHP community. It'd be nice if a chess library like python-chess could be available in PHP too, I thought.
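To illustrate what such a board-representation component does, here is a minimal Python sketch (the article's own code is PHP, and all class and method names below are illustrative, not the author's library or python-chess's API):

```python
# Piece placement in FEN order: rank 8 first; uppercase = White.
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"

class Board:
    """Tracks piece placement on an 8x8 grid; no legality checking."""

    def __init__(self, fen=START_FEN):
        self.squares = []
        for rank in fen.split("/"):
            row = []
            for ch in rank:
                if ch.isdigit():
                    row.extend([None] * int(ch))  # run of empty squares
                else:
                    row.append(ch)
            self.squares.append(row)

    def _index(self, square):
        """Convert algebraic coordinates like 'e2' to (row, col)."""
        file = ord(square[0]) - ord("a")
        rank = 8 - int(square[1])  # FEN lists rank 8 first
        return rank, file

    def piece_at(self, square):
        r, f = self._index(square)
        return self.squares[r][f]

    def move(self, frm, to):
        """Relocate a piece; legality and capture rules are omitted."""
        fr, ff = self._index(frm)
        tr, tf = self._index(to)
        self.squares[tr][tf] = self.squares[fr][ff]
        self.squares[fr][ff] = None

board = Board()
board.move("e2", "e4")
print(board.piece_at("e4"))  # P
```

Even this stripped-down version shows the two jobs the article assigns to a board representation: holding the game state and answering the positional queries that an evaluator would build on.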
Since IBM's Deep Blue defeated World Chess Champion Garry Kasparov in their 1997 match, chess engines have increased dramatically in strength and understanding. Today, the best chess engines are an almost incomprehensible 1,000 Elo points stronger than Deep Blue was at that time. A quick Google search for terms such as "Magnus Carlsen versus Stockfish" turns up numerous threads asking if humans can compete against today's top chess engines. The broad consensus seems to be that the very best humans might secure a few draws with the white pieces, but in general, they would lose the vast majority of games and would have no hope of winning any. I see no reason to disagree with this consensus. Despite the clear superiority of engines, there ARE positions which chess engines don't (and possibly can't) understand that are quite comprehensible for human players.