Computer chess: how the ancient game revolutionised AI

The Guardian

Tue 19 May 2020 06.14 EDT

When legendary chess grandmaster Garry Kasparov found himself beaten by IBM's Deep Blue supercomputer, it was seen as a seminal moment in the evolution of artificial intelligence. It was the end of a road trodden by war heroes and student researchers alike, whose singular desire to create a program that could beat the very best in the world would shape an entire science.

Early origins

Chess lends itself well to computer programming. Where other games can depend more on gut instinct or physical skill, chess is a game of strict rules – a move is either legal or it isn't. It's a game where multiple permutations, strategies and responses to moves and gambits could all be pre-programmed.


Humans and AI: Future Best Friends

#artificialintelligence

It is not that hard to believe that just two decades ago Deep Blue, a computer, beat chess grandmaster Garry Kasparov. AI is enhancing itself and is becoming better at numerous "human" jobs -- diagnosing disease, translating languages, providing customer service -- and it's improving fast. This is raising reasonable fears among workers and upcoming students. According to The Guardian, 76% of Americans fear that their job will be lost to AI. While it's speculated that AI will take over 1.8 million human jobs by the year 2020, the technology is also expected to create 2.3 million new kinds of jobs, many of which will involve collaboration between humans and AI.


No, You Won't Work Alongside Robots

#artificialintelligence

In 1997, IBM's Deep Blue defeated the reigning world chess champion Garry Kasparov. The world was in shock. It seemed computers, thus far thought to be little more than glorified calculators, had finally intruded upon the human domain of imagination and creativity. The worry was in vain. Deep Blue had no capacity for ingenuity.


The Games That AI Won

#artificialintelligence

Some tasks that AI does are actually not impressive. Think about your camera recognizing and auto-focusing on faces in pictures. That technology has been around since 2001, and it doesn't tend to excite people. Why not? Because you can do that too: you can focus your eyes on someone's face very easily. In fact, it's so easy you don't even know how you do it.


How AI Learns to Play Games

#artificialintelligence

Over the past few years, we've seen computer programs winning games at which we believed humans were unbeatable. This belief held because these games have so many possible moves in a given position that it would be impossible for a computer program to calculate all of them and choose the best ones. However, in 1997 the world witnessed what had been considered impossible: the IBM Deep Blue supercomputer won a six-game chess match against Garry Kasparov, the world champion of the time, by 3.5 – 2.5. Such a victory would only be matched when DeepMind's AlphaGo won a five-game Go match against Lee Sedol, 18-time world champion, by a 4-1 score. The IBM Deep Blue team relied mostly on brute force and computational power as their strategy to win the matches.
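That brute-force strategy boils down to searching the game tree exhaustively to a fixed depth and scoring the leaf positions with an evaluation function. The sketch below is only an illustration of that idea, not Deep Blue's actual design: it uses the open-source python-chess library, and the piece values, search depth, and absence of pruning are simplifications.

```python
# A minimal sketch of the brute-force idea behind Deep Blue: search every line of
# play to a fixed depth and score the leaf positions with a simple material count.
# Uses the open-source python-chess library (pip install chess); the piece values,
# depth, and lack of pruning are simplifications, not Deep Blue's actual design.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Material balance from the point of view of the side to move."""
    score = 0.0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board: chess.Board, depth: int) -> float:
    """Exhaustive fixed-depth search: the brute-force core of classic engines."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)                              # make the move...
        best = max(best, -negamax(board, depth - 1))  # ...score it for the opponent
        board.pop()                                   # ...and take it back
    return best

if __name__ == "__main__":
    print(negamax(chess.Board(), depth=3))  # even depth 3 already visits thousands of lines
```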


The Moravec Paradox -

#artificialintelligence

"Focusing on your strengths is required for peak performance, but improving your weaknesses has the potential for the greatest gains. This is true for athletes, executives and entire companies." As parents, we get to see our kids growing, trying, falling and learning in the process. First steps, first words, first drawings leave us amazed. As our children become adults, they continue to learn, choose a career and become athletes, surgeons, plane pilots, journalists, teachers… and we're proud.


Weighing the Trade-Offs of Explainable AI

#artificialintelligence

In 1997, IBM supercomputer Deep Blue made a move against chess champion Garry Kasparov that left him stunned. The computer's choice to sacrifice one of its pieces seemed so inexplicable to Kasparov that he assumed it was a sign of the machine's superior intelligence. Shaken, he went on to resign his series against the computer, even though he had the upper hand. Fifteen years later, however, one of Deep Blue's designers revealed that fateful move wasn't the sign of advanced machine intelligence -- it was the result of a bug. Today, no human can beat a computer at chess, but the story still underscores just how easy it is to blindly trust AI when you don't know what's going on.


Chess grandmaster Garry Kasparov predicts AI will disrupt 96 percent of all jobs

#artificialintelligence

IBM's Deep Blue wasn't supposed to defeat chess grandmaster Garry Kasparov when the two of them had their 1997 rematch. Computer experts of the time said machines would never beat us at strategy games because human ingenuity would always triumph over brute-force analysis. After Kasparov's loss, the experts didn't miss a beat. They said chess was too easy and postulated that machines would never beat us at Go. Champion Lee Sedol's loss against DeepMind's AlphaGo proved them wrong there. Then the experts said AI would never beat us at games where strategy could be overcome by human creativity, such as poker.


AI Is Now the Undisputed Champion of Computer Chess

#artificialintelligence

It was a war of titans you likely never heard about. One year ago, two of the world's strongest and most radically different chess engines fought a pitched, 100-game battle to decide the future of computer chess. On one side was Stockfish 8. This world-champion program approaches chess like dynamite handles a boulder--with sheer force, churning through 60 million potential moves per second. Of these millions of moves, Stockfish picks what it sees as the very best one--with "best" defined by a complex, hand-tuned algorithm co-designed by computer scientists and chess grandmasters.
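That hand-tuned notion of "best" is, at its core, a weighted sum of positional features whose weights are set by human experts rather than learned from data. The sketch below, again using python-chess, is a toy illustration only: the two features and their weights are invented for this example and are vastly simpler than Stockfish's real evaluation.

```python
# Toy illustration of a hand-tuned evaluation: a weighted sum of positional
# features whose weights are chosen by human experts rather than learned from
# data. The features and weights below are invented for this sketch.
import chess

WEIGHTS = {"material": 1.0, "mobility": 0.05}   # hand-picked, illustrative weights
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> float:
    """Material balance for the side to move."""
    score = 0.0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def mobility(board: chess.Board) -> float:
    """Legal-move count difference (null-move trick; a simplification)."""
    own = board.legal_moves.count()
    board.push(chess.Move.null())               # pass the turn to count the opponent's moves
    opponent = board.legal_moves.count()
    board.pop()
    return own - opponent

def evaluate(board: chess.Board) -> float:
    """Hand-tuned score: weighted sum of the features above."""
    return WEIGHTS["material"] * material(board) + WEIGHTS["mobility"] * mobility(board)

print(evaluate(chess.Board()))                  # 0.0 in the symmetric starting position
```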


Leveraging Rationales to Improve Human Task Performance

arXiv.org Artificial Intelligence

Machine learning (ML) systems across many application areas are increasingly demonstrating performance that is beyond that of humans. In response to the proliferation of such models, the field of Explainable AI (XAI) has sought to develop techniques that enhance the transparency and interpretability of machine learning methods. In this work, we consider a question not previously explored within the XAI and ML communities: Given a computational system whose performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human? We study this question in the context of the game of Chess, for which computational game engines that surpass the performance of the average player are widely available. We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods, which we evaluate with a multi-day user study against two baselines. The results show that our approach produces rationales that lead to statistically significant improvement in human task performance, demonstrating that rationales automatically generated from an AI's internal task model can be used not only to explain what the system is doing, but also to instruct the user and ultimately improve their task performance.
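The abstract does not spell out how the Rationale-Generating Algorithm works internally, so the sketch below is only a guess at the general flavour: it derives a short textual rationale by comparing the utilities an engine assigns to candidate moves. The function and the utility values are hypothetical, not the paper's method.

```python
# Hedged sketch only: the abstract above does not describe the paper's actual
# Rationale-Generating Algorithm, so this merely illustrates the general idea of
# turning a utility-based engine's internal scores into a short textual rationale.
# The function and the utility values are hypothetical.
from typing import Dict

def generate_rationale(move_utilities: Dict[str, float]) -> str:
    """Compare the utility of the recommended move against the runner-up."""
    ranked = sorted(move_utilities.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_u), (runner_up, runner_u) = ranked[0], ranked[1]
    return (f"Recommended {best}: its estimated utility ({best_u:+.2f}) beats the "
            f"next-best option {runner_up} ({runner_u:+.2f}) by {best_u - runner_u:.2f}.")

# Made-up utilities, e.g. pawn-unit scores a chess engine might assign to candidate moves.
print(generate_rationale({"Nf3": 0.35, "e4": 0.30, "h4": -0.42}))
```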