IBM's Deep Blue wasn't supposed to defeat chess grandmaster Garry Kasparov when the two had their 1997 rematch. Computer experts of the time said machines would never beat us at strategy games because human ingenuity would always triumph over brute-force analysis. After Kasparov's loss, the experts didn't miss a beat: they said chess was too easy and postulated that machines would never beat us at Go. Champion Lee Sedol's 2016 loss to DeepMind's AlphaGo proved them wrong there. Then the experts said AI would never beat us at games where strategy could be overcome by human creativity, such as poker.
Garry Kasparov dominated chess until he was beaten by an IBM supercomputer called Deep Blue in 1997. The event made "man loses to computer" headlines the world over. Kasparov recently returned to the ballroom of the New York hotel where he was defeated, this time for a debate with AI experts. Wired's Will Knight was there for a revealing interview with perhaps the greatest human chess player the world has ever known. "I was the first knowledge worker whose job was threatened by a machine," says Kasparov, something he foresees coming for us all.
It was a war of titans you've likely never heard of. One year ago, two of the world's strongest and most radically different chess engines fought a pitched, 100-game battle to decide the future of computer chess. On one side was Stockfish 8. This world-champion program approaches chess the way dynamite handles a boulder: with sheer force, churning through 60 million potential moves per second. Of these millions of moves, Stockfish picks what it sees as the very best one, with "best" defined by a complex, hand-tuned evaluation function co-designed by computer scientists and chess grandmasters.
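The brute-force approach described above can be sketched as depth-limited minimax search with alpha-beta pruning. The sketch below is illustrative, not Stockfish's actual code: positions are abstracted to either a numeric leaf (the hand-tuned evaluation score) or a list of child positions, and a real engine layers many refinements on top (move ordering, transposition tables, specialized pruning heuristics).

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Depth-limited minimax with alpha-beta pruning.

    node -- a numeric leaf (heuristic evaluation) or a list of child nodes
    """
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: return the hand-tuned evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # cutoff: the minimizing side avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# The engine then plays the root move leading to the best-scoring subtree.
tree = [[3, 5], [6, 9], [1, 2]]  # toy 2-ply game tree
best = max(range(len(tree)),
           key=lambda i: alphabeta(tree[i], 1,
                                   float("-inf"), float("inf"), False))
```

On this toy tree the walk from the root is a maximizing choice over minimizing replies, so the middle branch (whose worst-case leaf is 6) is selected.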
Garry Kasparov is perhaps the greatest chess player in history. For almost two decades after becoming world champion in 1985, he dominated the game with a ferocious style of play and an equally ferocious swagger. Outside the chess world, however, Kasparov is best known for losing to a machine. In 1997, at the height of his powers, Kasparov was crushed and cowed by an IBM supercomputer called Deep Blue. The loss sent shock waves across the world, and seemed to herald a new era of machine mastery over man.
Before IBM's Deep Blue computer program defeated world champion Garry Kasparov in chess in 1997, many AI pundits believed that machines would never possess the creativity required to rival humans at the game. Years ago, Marvin Minsky coined the phrase "suitcase words" to refer to terms that have a multitude of different meanings packed into them. He gave as examples words like consciousness, morality and creativity. "Artificial intelligence" is a suitcase word. Commentators today use the phrase to mean many different things in many different contexts.
In this paper we introduce a new algorithm for updating the parameters of a heuristic evaluation function, by adjusting the heuristic towards the values computed by an alpha-beta search. Our algorithm differs from previous approaches to learning from search, such as Samuel's checkers player and the TD-Leaf algorithm, in two key ways. First, we update all nodes in the search tree, rather than a single node. Second, we use the outcome of a deep search, instead of the outcome of a subsequent search, as the training signal for the evaluation function. We implemented our algorithm in the chess program Meep, using a linear heuristic function.
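The update rule the abstract describes can be sketched as follows, under simplifying assumptions: a linear evaluation w . phi(s), and a search routine (not shown) that supplies, for every node in its tree, the minimax value backed up from the deep search. The function name, the feature map, and the learning rate are illustrative placeholders, not Meep's actual implementation.

```python
import numpy as np

def treestrap_update(w, tree_nodes, phi, alpha=0.001):
    """One pass of the update: nudge a linear heuristic w . phi(s) toward
    the deep-search value at every node of the search tree.

    tree_nodes -- list of (state, search_value) pairs, one per tree node
    phi        -- feature map from a state to a numpy vector
    """
    for state, target in tree_nodes:
        features = phi(state)
        error = target - np.dot(w, features)  # search value minus current heuristic
        w = w + alpha * error * features      # gradient step on the squared error
    return w
```

Because every node in the tree contributes an update, a single deep search yields many training examples, which is the first of the two differences the abstract highlights.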
The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking online gamers (e.g. MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to the ranking itself, 'scores' for each object are also of interest. In this paper, we propose a novel iterative rank aggregation algorithm for discovering scores for objects from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects, with edges present between two objects if they are compared; the scores turn out to be the stationary probabilities of this random walk.
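The random-walk construction can be sketched as below. This is an illustrative implementation in the spirit of the abstract, not a faithful reproduction of the paper's algorithm: from pairwise win counts we build a Markov chain on the objects in which the walk moves from i to j with probability proportional to the fraction of their comparisons that j won, and the stationary distribution of that chain serves as the score vector.

```python
import numpy as np

def random_walk_scores(wins, n_iter=1000):
    """Scores from pairwise comparisons via a random walk.

    wins -- matrix where wins[i][j] counts how often object i beat object j
    """
    n = wins.shape[0]
    comps = wins + wins.T                      # total comparisons per pair
    frac = np.divide(wins, comps, out=np.zeros_like(wins, dtype=float),
                     where=comps > 0)          # frac[i][j]: share of i's wins over j
    d = max(1, int((comps > 0).sum(axis=1).max()))  # max comparison degree
    P = frac.T / d                             # move i -> j in proportion to j's wins over i
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # self-loops keep rows stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):                    # power iteration to the stationary distribution
        pi = pi @ P
    return pi
```

Intuitively, the walk drifts toward objects that win their comparisons, so stronger objects accumulate more stationary probability and hence higher scores.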
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Machine learning, the subset of artificial intelligence that teaches computers to perform tasks through examples and experience, is a hot area of research and development. Many of the applications we use daily rely on machine learning algorithms, including AI assistants, web search and machine translation. Your social media news feed is powered by a machine learning algorithm. The recommended videos you see on YouTube and Netflix are the result of a machine learning model.
Is today's schooling preparing your child to be a creator? Fluid intelligence is the ability to solve problems one has never faced before. We believe this is the single most important ability that will make a huge difference in the life of any child. Thinking skills such as decision making, problem solving and logical reasoning are what help build fluid intelligence. We specialize in working with children from a very young age to develop their fluid intelligence for a lifelong, lasting impact.
I've watched lots of companies attempt to deploy machine learning -- some succeed wildly, and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and expectations. Is it harder to beat Kasparov at chess or to pick up and physically move the chess pieces? Computers beat the world chess champion over twenty years ago, but reliably grasping and lifting objects is still an unsolved research problem. Humans are not good at evaluating what will be hard for AI and what will be easy.