An artificial intelligence has beaten eight world champions at bridge, a game in which human supremacy had resisted the march of the machines until now. The victory represents a new milestone for AI because bridge players work with incomplete information and must react to the behaviour of several other players – a scenario far closer to human decision-making. In chess and Go, by contrast – both games in which AIs have already beaten human champions – a player faces a single opponent at a time, and both players have complete information. "What we've seen represents a fundamentally important advance in the state of artificial intelligence systems," said Stephen Muggleton, a professor of machine learning at Imperial College London. French startup NukkAI announced the news of its AI's victory on Friday, at the end of a two-day tournament in Paris.
Remember how, in 2017, Elon Musk said that artificial intelligence would replace humanity within five years? While working on artificial intelligence for Tesla cars, he concluded that society had approached the moment when artificial intelligence could become significantly smarter than people. "People should not underestimate the power of the computer," Musk said. "This is pride and an obvious mistake." He should know what he's talking about, as one of the early investors in DeepMind, the Google subsidiary that developed AI capable of beating humans at Go and chess. AI is already very good at many "human" tasks -- diagnosing diseases, translating languages, and serving customers.
That isn't what happened, of course. Indeed, when we look back now, 25 years later, we can see that Deep Blue's victory wasn't so much a triumph of AI but a kind of death knell. It was a high-water mark for old-school computer intelligence, the laborious handcrafting of endless lines of code, which would soon be eclipsed by a rival form of AI: the neural net--in particular, the technique known as "deep learning." For all the weight it threw around, Deep Blue was the lumbering dinosaur about to be killed by an asteroid; neural nets were the little mammals that would survive and transform the planet. Yet even today, deep into a world chock-full of everyday AI, computer scientists are still arguing whether machines will ever truly "think."
Modern chess is the culmination of centuries of experience, as well as an evolutionary sequence of rule adjustments from its inception in the 6th century to the modern rules we know today [17]. While classical chess still captivates the minds of millions of players worldwide, the game is anything but static. Many variants have been proposed and played over the years by enthusiasts and theorists [8, 20]. They continue the evolutionary cycle by altering the board, piece placement, or the rules--offering players "something subtle, sparkling, or amusing which cannot be done in ordinary chess" [1]. Technological progress is the new driver of the evolutionary cycle. Chess engines increase in strength, and players have access to millions of computer games and volumes of opening theory.
Hongliang Xin, an associate professor of chemical engineering in the College of Engineering, and his collaborators have devised a new artificial intelligence framework that can accelerate discovery of materials for important technologies, such as fuel cells and carbon capture devices. Titled "Infusing theory into deep learning for interpretable reactivity prediction," their paper in the journal Nature Communications details a new approach called TinNet--short for theory-infused neural network--that combines machine-learning algorithms and theories for identifying new catalysts. Catalysts are materials that trigger or speed up chemical reactions. TinNet is based on deep learning, a subfield of machine learning that uses algorithms to mimic how human brains work. The 1997 victory of IBM's Deep Blue computer over world chess champion Garry Kasparov was one of the first high-profile advances in machine intelligence.
Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett's Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman's equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work for artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research.
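The Bellman recursion for win probability mentioned in the abstract can be sketched in a generic form (this is the standard formulation from dynamic programming, not necessarily the paper's exact statement):

```latex
V(s) =
\begin{cases}
  1 & \text{if } s \text{ is a won terminal position,} \\
  0 & \text{if } s \text{ is a drawn or lost terminal position,} \\
  \max_{a \in A(s)} \sum_{s'} P(s' \mid s, a)\, V(s') & \text{otherwise,}
\end{cases}
```

where $V(s)$ is the probability of eventually winning from state $s$ under an optimal policy, $A(s)$ is the set of legal moves, and $P(s' \mid s, a)$ models the (possibly stochastic) opponent's reply. Maximizing $V$ optimizes win probability directly, rather than a heuristic evaluation score.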
The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade, including a remarkably wide array of applications that have already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both times after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn from the past, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
Human preference or taste within any domain is usually a difficult thing to identify or predict with high probability. In the domain of chess problem composition, the same is true. Traditional machine learning approaches tend to focus on the ability of computers to process massive amounts of data and continuously adjust 'weights' within an artificial neural network to better distinguish between, say, two groups of objects. With chess compositions, by contrast, there is no clear distinction between what constitutes one and what does not, much less between a good one and a poor one. We propose a computational method that is able to learn from existing databases of 'liked' and 'disliked' compositions such that a new and unseen collection can be sorted with increased probability of matching a solver's preferences. The method uses a simple 'change factor' relating to the Forsyth-Edwards Notation (FEN) of each composition's starting position, coupled with repeated statistical analysis of sample pairs from both databases. Tested using the author's own collections of computer-generated chess problems, the experimental results showed that the method was able to sort a new and unseen collection of compositions such that, on average, over 70% of the preferred compositions were in the top half of the collection. This saves significant time and energy on the part of solvers, as they are likely to find more of what they like sooner. The method may even be applicable to other domains such as image processing, because it does not rely on any chess-specific rules but rather just a sufficient and quantifiable 'change' in representation from one object to the next.
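The abstract does not define the 'change factor' precisely, but one natural reading is a square-by-square difference measure between FEN board representations. The sketch below is a hypothetical illustration of that idea, not the paper's actual method: it expands the board field of a FEN string and counts how many squares differ between two positions.

```python
# Hypothetical sketch of a FEN-based "change factor" between two positions.
# The paper's exact definition is not given in the abstract; this is a
# plausible stand-in: the number of squares whose contents differ.

def expand_fen_board(fen: str) -> str:
    """Expand the board field of a FEN string into 64 characters,
    with digits replaced by that many '.' (empty-square) markers."""
    board = fen.split()[0]  # first FEN field is piece placement
    squares = []
    for ch in board:
        if ch.isdigit():
            squares.append('.' * int(ch))
        elif ch != '/':
            squares.append(ch)
    return ''.join(squares)

def change_factor(fen_a: str, fen_b: str) -> int:
    """Count squares whose contents differ between two positions."""
    a, b = expand_fen_board(fen_a), expand_fen_board(fen_b)
    return sum(x != y for x, y in zip(a, b))

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
after_e4 = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1"
print(change_factor(start, after_e4))  # 2: e2 emptied, e4 occupied
```

Such a per-position scalar could then feed the repeated statistical comparison of sample pairs from the 'liked' and 'disliked' databases that the abstract describes.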