Besides being exactly a year before India conducted its second nuclear tests, it is the day that IBM's Deep Blue computer made Garry Kasparov, the human world champion, concede defeat in under 20 moves in the sixth game of their match. Reflecting 20 years later in his book "Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins", Kasparov writes that even if he had won, it was only a matter of time before computers started winning. The supporters of artificial intelligence in the so-called hard AI camp were delighted, proclaiming a day, not too far in the future, when machines would be able to replicate the human decision-making process. In contrast, soft AI proponents believe that intelligence cannot be created artificially; it can at best be simulated at an appropriate level of detail to create solutions for some human decision-making problems.
Computers have long been good at carrying out assigned tasks but terrible at learning things on their own. Hence all the excitement around "neural networks," an artificial intelligence technique that loosely mimics the structure of the human brain and allows machines to learn from data on their own. Tech giants are using neural networks to do some impressive things. Microsoft uses them to power real-time translation in Skype. Google's artificial intelligence learned to play Atari video games and then mastered the ancient game of Go, with its AlphaGo program beating the human champion Lee Sedol 4 to 1.
It has been more than 20 years since IBM's Deep Blue won its first game against world chess champion Garry Kasparov, marking the first time an artificial intelligence machine defeated a reigning champion under tournament conditions. Deep Blue went on to lose that 1996 match 2-4, but won the May 1997 rematch 3.5-2.5. Fourteen years later, AI made its television debut in grand style, when IBM's Watson took down a pair of former "Jeopardy!" champions. In milliseconds, the machine culled the most probable answer to each clue from more than 200 million pages of content, including the full text of Wikipedia. Now, Google's AI system, AlphaGo, is making cognitive computing history.
In this paper we present an approach to improving the traditional alpha-beta search process for strategic board games by modifying the method in two ways: 1) forgoing the evaluation of leaf nodes that are not terminal states and 2) employing a utility table that stores the utility for subsets of board configurations. We concentrate our efforts on the game of Connect Four. Our results show a significant speedup, as well as a framework that relaxes common agent assumptions in game search. In addition, it allows game designers to easily modify the agent's strategy by changing the goal from dominance to interaction.
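To make the baseline concrete, the following is a minimal sketch of classic alpha-beta search over a toy game tree of nested tuples (integers are terminal utilities), with a table that caches terminal-state utilities. The tree, the function names, and the table's use here are illustrative assumptions, not the paper's implementation: the paper's utility table covers subsets of board configurations, and only terminal utilities are cached here because cached interior values under a narrowed alpha-beta window are bounds rather than exact utilities.

```python
import math

def alphabeta(node, alpha, beta, maximizing, table):
    """Alpha-beta over a toy tree; ints are terminal states.
    `table` caches terminal utilities (a simplified stand-in for
    the utility table described in the paper)."""
    if isinstance(node, int):              # terminal state
        if node not in table:
            table[node] = node             # stand-in for a real evaluation
        return table[node]
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, table))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: opponent avoids this line
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, table))
        beta = min(beta, value)
        if alpha >= beta:                  # alpha cutoff
            break
    return value

tree = ((3, 5), (6, 9), (1, 2))            # depth-2 toy tree
print(alphabeta(tree, -math.inf, math.inf, True, {}))  # prints 6
```

On the toy tree above, the third branch is cut off after its first leaf (1), since the maximizer already has a guaranteed 6; the paper's two modifications would further avoid evaluating non-terminal leaves and reuse stored utilities across searches.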
A new signature table technique is described, together with an improved book learning procedure, which is thought to be much superior to the linear polynomial method described earlier. Full use is made of the so-called "alpha-beta" pruning and several forms of forward pruning to restrict the spread of the move tree and to permit the program to look ahead to a much greater depth than it otherwise could. While still unable to outplay checker masters, the program's playing ability has been greatly improved. (A. L. Samuel, "Some Studies in Machine Learning Using the Game of Checkers. II," IBM Journal of Research and Development 11, No. 6, 1967, p. 601; reprinted in Annual Review in Automatic Programming, Volume 6, Part 1, 1969, pp. 1-36.)
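Forward pruning, unlike alpha-beta, discards moves before searching them, trading completeness for depth. A minimal sketch of the idea, under assumed names and made-up move data (not Samuel's actual procedure), is to keep only the k most promising moves under a cheap static heuristic before recursing:

```python
def forward_prune(moves, heuristic, k=2):
    """Keep only the k moves scoring highest under a cheap static
    heuristic; the discarded moves are never searched at all.
    Names and data here are illustrative, not Samuel's scheme."""
    return sorted(moves, key=heuristic, reverse=True)[:k]

# Made-up (gain, risk) move tuples; score = gain - risk.
moves = [(0, 1), (3, 4), (2, 2), (5, 0)]
kept = forward_prune(moves, heuristic=lambda m: m[0] - m[1], k=2)
print(kept)  # prints [(5, 0), (2, 2)]
```

The risk, of course, is that a cheap heuristic occasionally prunes the objectively best move, which is why forward pruning improves depth but can cost accuracy.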