A new signature table technique is described together with an improved book learning procedure which is thought to be much superior to the linear polynomial method described earlier. Full use is made of the so-called "alpha-beta" pruning and several forms of forward pruning to restrict the spread of the move tree and to permit the program to look ahead to a much greater depth than it otherwise could do. While still unable to outplay checker masters, the program's playing ability has been greatly improved. Annual Review in Automatic Programming, Volume 6, Part 1, 1969, pp. 1-36. See also "Some Studies in Machine Learning Using the Game of Checkers," IBM Journal of Research and Development 11, No. 6, 1967, p. 601.
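The alpha-beta pruning the abstract refers to can be illustrated with a minimal sketch (a generic tree search, not Samuel's actual checkers program; the `children` and `value` callbacks are assumptions standing in for move generation and the signature-table evaluation):

```python
def alpha_beta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta cutoffs.

    children(node) -> list of successor positions.
    value(node)    -> static score of a position (illustrative stand-in
                      for an evaluation function such as Samuel's
                      signature tables).
    """
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta,
                                        False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:   # beta cutoff: the minimizing opponent
                break           # will never allow this line
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alpha_beta(child, depth - 1, alpha, beta,
                                        True, children, value))
            beta = min(beta, best)
            if beta <= alpha:   # alpha cutoff
                break
        return best
```

Because whole subtrees are cut off once a bound is exceeded, the same effort reaches a much greater depth than plain minimax, which is exactly the gain the abstract describes.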
In this paper we present our approach to improving the traditional alpha-beta search process for strategic board games by modifying the method in two ways: 1) forgoing the evaluation of leaf nodes that are not terminal states and 2) employing a utility table that stores the utility for subsets of board configurations. We concentrate our efforts on the game of Connect Four. Our results show a significant speedup, and the framework relaxes common agent assumptions in game search. In addition, it allows game designers to easily modify the agent's strategy by changing the goal from dominance to interaction.
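The two modifications can be sketched on a toy game (take-away Nim with moves of 1-3 stones, chosen here only for brevity; the paper itself works on Connect Four, and how it keys its utility table over subsets of configurations is not specified in the abstract): non-terminal nodes are never scored heuristically, since search always runs to a terminal state, and a utility table caches the result for each configuration encountered:

```python
utility_table = {}  # configuration -> utility for the player to move

def solve(stones):
    """Negamax utility (+1 win, -1 loss) for the player to move.

    Only terminal states (stones == 0) receive a utility directly;
    every other node is expanded, never estimated -- mirroring the
    paper's first modification. The table mirrors the second.
    """
    if stones == 0:
        return -1  # terminal: the player to move took no last stone and loses
    if stones in utility_table:
        return utility_table[stones]
    best = max(-solve(stones - take) for take in (1, 2, 3) if take <= stones)
    utility_table[stones] = best
    return best
```

The table turns repeated positions into dictionary lookups, which is one plausible source of the speedup the abstract reports.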
We like our machines to feel human, even if they don't look it. The pulsing on and off of the power light on an Apple computer when it is "sleeping" is reassuring. Even the red light of HAL in 2001: A Space Odyssey gave an assurance that the machine was alive, rather than a faceless menace. One of the pioneers of computing, Alan Turing, was amongst the first to address the challenge of artificial intelligence and gives his name to the Turing test for a "machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human." Learning from our mistakes makes us human.
The game of Go played between a DeepMind computer program and a human champion created an existential crisis of sorts for Marcus du Sautoy, a mathematician and professor at Oxford University. "I've always compared doing mathematics to playing the game of Go," he says, and Go is not supposed to be a game that a computer can easily play because it requires intuition and creativity. So when du Sautoy saw DeepMind's AlphaGo beat Lee Sedol, he thought that there had been a sea change in artificial intelligence that would impact other creative realms. He set out to investigate the role that AI can play in helping us understand creativity, and ended up writing The Creativity Code: Art and Innovation in the Age of AI (Harvard University Press). The Verge spoke to du Sautoy about different types of creativity, AI helping humans become more creative (instead of replacing them), and the creative fields where artificial intelligence struggles most.
Microsoft researchers have created an artificial intelligence-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities. The team from Maluuba, a Canadian deep learning startup acquired by Microsoft earlier this year, used a branch of AI called reinforcement learning to play the Atari 2600 version of Ms. Pac-Man perfectly. Using that method, the team achieved the maximum score possible of 999,990. Doina Precup, an associate professor of computer science at McGill University in Montreal, said that's a significant achievement among AI researchers, who have been using various video games to test their systems but have found Ms. Pac-Man among the most difficult to crack. But Precup said she was impressed not just with what the researchers achieved but with how they achieved it.
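The divide-and-conquer idea, as described, splits the problem among many small learners whose preferences are combined into one decision. A heavily simplified sketch of just the combination step (the per-object value tables here are made-up numbers, not learned ones, and the real system trains each sub-agent with reinforcement learning):

```python
def aggregate_q(sub_q_values, actions):
    """Combine per-object action preferences into one greedy choice.

    sub_q_values: list of dicts, one per sub-agent (e.g. one per pellet
                  or ghost), mapping each action to that agent's score.
    Returns the action with the highest summed score across agents.
    """
    totals = {a: sum(q[a] for q in sub_q_values) for a in actions}
    return max(totals, key=totals.get)
```

For example, a sub-agent tracking a nearby ghost may strongly penalize moving up while a pellet-seeking sub-agent mildly prefers it; summing the scores lets the full agent trade these concerns off, which is the flavor of decomposition the article describes.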