Providing a formal definition of intelligence is an intimidating task; in fact, no common agreement on the topic has been reached so far. Since the beginning of human history, different definitions of intelligence have been proposed, varying with historical period and culture. For example, in a society in which language and communication skills play an important role, an individual endowed with those skills might be regarded as "more intelligent" than others. Meanwhile, in a society that values numerical skills most highly, different individuals might be regarded as "more intelligent".
Since the beginnings of artificial intelligence, researchers have sought to test the intelligence of machine systems by having them play games against humans. It is often thought that among the hallmarks of human intelligence are the abilities to think creatively, consider various possibilities, and keep a long-term goal in mind while making short-term decisions. If computers can play difficult games as well as humans, then surely they can handle even more complicated tasks. From early checkers-playing bots developed in the 1950s to today's deep learning-powered bots that can beat even the best players in the world at games like chess, Go and DOTA, the idea of machines that can find solutions to puzzles is as old as AI itself, if not older. As such, it makes sense that one of the core patterns of AI that organizations develop is the goal-driven systems pattern.
In this paper we introduce a new algorithm for updating the parameters of a heuristic evaluation function: the heuristic is adjusted towards the values computed by an alpha-beta search. Our algorithm differs from previous approaches to learning from search, such as Samuel's checkers player and the TD-Leaf algorithm, in two key ways. First, we update all nodes in the search tree, rather than a single node. Second, we use the outcome of a deep search, instead of the outcome of a subsequent search, as the training signal for the evaluation function. We implemented our algorithm in the chess program Meep, using a linear heuristic function.
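The update described above can be sketched in a few lines. The feature extractor, the toy one-hot positions, and the learning rate below are illustrative assumptions for a minimal linear heuristic, not the paper's actual implementation; the key idea shown is that every searched node is nudged toward the value the deep search assigned to it.

```python
import numpy as np

def update_weights(weights, nodes, features, search_values, lr=0.5):
    """For every node in the search tree, move the linear evaluation
    w . phi(node) toward the value the deep search assigned to that node."""
    w = np.asarray(weights, dtype=float).copy()
    for node in nodes:
        phi = np.asarray(features(node), dtype=float)
        error = search_values[node] - w @ phi  # search value minus current eval
        w += lr * error * phi                  # one gradient step per node
    return w

# Toy example: two "positions" with one-hot features and target search values.
features = lambda n: [1.0, 0.0] if n == "a" else [0.0, 1.0]
targets = {"a": 1.0, "b": -1.0}
w = update_weights([0.0, 0.0], ["a", "b"], features, targets, lr=0.5)
# Each weight has moved halfway from 0 toward its node's search value.
```

With a learning rate of 0.5 each component moves halfway toward its target in a single pass, which makes the direction of the update easy to verify by hand.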
In 1962 Arthur Samuel shocked the world. He built a computer that could challenge the then-reigning checkers champion, Robert Nealey. The machine won, but it wasn't the triumph alone that grabbed headlines. It was the software behind the victory that would change the world. Rather than programming the roughly 500 quintillion potential checkerboard scenarios into his computer, he instructed the device to react based on games it had played in the past.
One of the pioneers of artificial intelligence, the economist Herbert Simon, said in the 1950s that "in the visible future, the range of problems that machines can handle will match that of the human mind." At the time it did not seem such a naive forecast: a computer had already been made to play checkers and learn from its own mistakes. But Simon died in 2001 without having witnessed the technology that had seemed so close. We might think that if AI can already surpass us in very complex domains (such as playing Go), or show skills we have never had (such as determining a person's sex from a photo of the interior of their eye), then it should be easy for it to copy our most ordinary skills, the small day-to-day actions we usually carry out unconsciously. However, these skills (tying a shoelace, moving with agility on two legs, walking down the street without colliding with anyone while thinking about something else, etc.) are not simple, precisely because they are an intrinsic part of who we are: as any physiotherapist could remind us, the ability to walk is not easy to teach even to humans.
This is Samuel's seminal paper, originally published in 1959, in which he sets out to build a program that can learn to play the game of checkers. Checkers is an extremely complex game - it has roughly 500 billion billion possible positions - so a brute-force-only approach to solving it is not satisfactory. Samuel's program was based on Claude Shannon's minimax strategy for finding the best move from a given position. In the paper he describes how a machine could look ahead "by evaluating the resulting board positions much as a human player might do".
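The minimax look-ahead that Shannon proposed and Samuel built on can be sketched as follows. The nested-dict tree representation and the trivial evaluation function are illustrative assumptions, not Samuel's checkers code; the point is how the search alternates between maximizing and minimizing players and scores the resulting positions.

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Return the minimax value of `node`, searching `depth` plies ahead."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)  # score the resulting board, as a human might
    if maximizing:
        return max(minimax(c, depth - 1, False, children, evaluate) for c in kids)
    return min(minimax(c, depth - 1, True, children, evaluate) for c in kids)

# Toy two-ply tree: interior nodes are strings, leaves are numeric scores.
tree = {"root": ["l", "r"], "l": [3, 5], "r": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, int) else 0
best = minimax("root", 2, True, children, evaluate)
# Maximizer compares the minimizer's replies: min(3, 5) = 3 vs min(2, 9) = 2.
```

In the toy tree the maximizer chooses the left branch, since its worst-case reply (3) beats the right branch's worst case (2).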
When you think of AI or machine learning you may conjure up images of AlphaZero, or even some science fiction reference such as HAL-9000 from 2001: A Space Odyssey. However, the true forefather, who set the stage for all of this, was the great Arthur Samuel. Samuel was a computer scientist, visionary, and pioneer who wrote the first checkers program for the IBM 701 in the early 1950s. His program, "Samuel's Checkers Program", was first shown to the general public on TV on February 24th, 1956, and the impact was so powerful that IBM stock went up 15 points overnight (a huge jump at that time). This program also helped set the stage for all the modern chess programs we have come to know so well, with features like look-ahead, an evaluation function, and a minimax search that would later be developed into alpha-beta pruning.
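Alpha-beta pruning, the refinement of minimax mentioned above, can be sketched like this. The toy tree and evaluation below are illustrative assumptions, not Samuel's program; the sketch shows the key trick, cutting off branches the opponent would never allow.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta cutoffs: alpha/beta bound the achievable value."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for c in kids:
            value = max(value, alphabeta(c, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # opponent will never let play reach this branch
        return value
    value = float("inf")
    for c in kids:
        value = min(value, alphabeta(c, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # we already have a better option elsewhere
    return value

# Same toy tree as a plain minimax would search, but with pruning enabled.
tree = {"root": ["l", "r"], "l": [3, 5], "r": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, int) else 0
best = alphabeta("root", 2, float("-inf"), float("inf"), True, children, evaluate)
```

On this tree the search returns the same value as full minimax, but never evaluates the leaf 9: once the right branch's minimizer finds 2, which is worse than the 3 already guaranteed on the left, the rest of that branch is cut off.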
The evolution of artificial intelligence (AI) has tracked the complexity of the languages available for development. In 1959, Arthur Samuel developed a self-learning checkers program at IBM on an IBM 701 computer using the machine's native instructions (quite a feat, given that the program implemented search trees and alpha-beta pruning). Today, AI is developed in a variety of languages, from Lisp to Python to R. This article explores the languages that evolved for AI and machine learning. The programming languages used to build AI and machine learning applications vary: each application has its own constraints and requirements, and some languages are better suited than others to particular problem domains.