IBM's Watson system beat two former Jeopardy! champions in televised matches on February 14-16, 2011. Details of the match appeared in the New York Times story "Computer Wins on 'Jeopardy!': Trivial, It's Not" (Feb. 17, 2011).
Yago, developed by scientists at the Max Planck Institute for Informatics in Saarbrücken and Télécom ParisTech in Paris, was one of the first knowledge bases. "If you, for example, do an internet search for the German term 'Allianz', this is merely a collection of letters for the search engine," explains Professor Gerhard Weikum, Scientific Director at the Max Planck Institute for Informatics in Saarbrücken. Today, Yago is a collaboration of the Max Planck Institute, Télécom ParisTech, where Yago co-creator Fabian Suchanek now holds a professorship, and the Max Planck spin-off Ambiverse. Last week, the researchers behind Yago received the Prominent Paper Award, which recognizes outstanding papers published in the Artificial Intelligence Journal (AIJ) that have been exceptional in their significance and impact over the past five years.
Since Watson's appearance on the game show in 2011, IBM has expanded its talents, building on the algorithms that allow it to read and derive meaning from natural language. Toronto Western, part of the University Health Network, is the first hospital in Canada to use Watson for research in Parkinson's disease, a neurological disorder. The centre has a track record of running clinical trials for off-label drug use, which means taking a drug approved for treatment of one condition and repurposing it for another. Visanji, 39, is a scientist at the hospital's Morton and Gloria Shulman Movement Disorders Centre, the country's biggest Parkinson's clinic.
In this episode of the Data Show, I spoke with David Ferrucci, founder of Elemental Cognition and senior technologist at Bridgewater Associates. As Ferrucci explained, a machine can look through patterns in language far more efficiently than a person can, and it can generalize those patterns using machine learning techniques. So machines got a lot faster, huge volumes of data became available, and machine learning made it possible to discover patterns in that language data more rapidly and effectively than ever before. Watson analyzed how words occur together in questions and passages, and it came up with an approximation: this phrase might mean that phrase. If one phrase appears in the question and another in a potential answer passage, and the two might mean the same thing, that connection can help formulate an answer.
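The idea Ferrucci describes, that two phrases used in similar contexts may mean similar things, can be illustrated with a toy distributional-similarity sketch. This is not Watson's actual algorithm; the corpus, window size, and cosine comparison below are illustrative assumptions.

```python
from collections import Counter
import math

def context_vector(term, corpus, window=2):
    """Count words co-occurring with `term` within +/- `window` positions."""
    vec = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            if w == term:
                for j in range(max(0, i - window), min(len(words), i + window + 1)):
                    if j != i:
                        vec[words[j]] += 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny made-up corpus: "president" and "minister" appear in similar contexts.
corpus = [
    "the president signed the bill into law",
    "the president vetoed the bill last week",
    "the prime minister signed the treaty into law",
    "the prime minister vetoed the treaty last month",
    "the cat sat on the mat",
]

sim_related = cosine(context_vector("president", corpus),
                     context_vector("minister", corpus))
sim_unrelated = cosine(context_vector("president", corpus),
                       context_vector("cat", corpus))
```

Terms that share contexts ("signed", "vetoed") end up with similar count vectors, so `sim_related` comes out well above `sim_unrelated`; real systems do this at web scale with far richer features.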
Produced by Baidu, Google's Chinese equivalent, Minwa is the company's landmark project and mirrors the IBM Watson model, with over 72 processors and 144 graphics processors. The system spans several key areas, including image, facial, text, and speech recognition, and Baidu hopes to build the technology into its computer operating systems and smartphone software. Not unlike Watson, Minwa's natural language processing capabilities are among the most impressive in the world, though the project was shrouded in controversy after the most recent Image Classification Challenge, in which Minwa posted a 4.58% error rate, better than its competitors from Google and Microsoft, and better than the average human rate of 5%. IBM, meanwhile, has partnered with over 300 firms from all fields, including Twitter, Wellpoint (medical insurance), and Chatterbox (children's technology), to use Watson's natural language processing capabilities for their own ends.
The number of jobs where human labor can add value is shrinking, and new jobs that require human labor are becoming more and more rare. Is it the responsibility of companies that develop or deploy automated systems to retrain the workers they replace? This isn't simply automation; it's a new life form, which raises civil rights issues and the brand-new problem of dealing with alien life forms (and they will be alien: completely different priorities from any previously known life form, and the only life form we know of that did not result from the same kind of evolutionary process we did). All of the questions around strong AI are far bigger, and carry far bigger consequences, than whether we'll have to retrain humans to do different jobs.
Written by David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV quiz show Jeopardy!. Our results strongly suggest that DeepQA is an effective and extensible architecture that can be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of question answering (QA). With QA in mind, we settled on a challenge to build a computer system, called Watson, that could compete at the human champion level in real time on Jeopardy!. Meeting the Jeopardy! Challenge requires advancing and incorporating a variety of QA technologies, including parsing, question classification, question decomposition, automatic source acquisition and evaluation, entity and relation detection, logical form generation, and knowledge representation and reasoning.
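DeepQA's core architectural idea is to generate many candidate answers, score each one with many independent evidence scorers, and merge the scores into a ranked list with confidences. The sketch below is a minimal toy version of that combine-and-rank pattern; the two scorers and their weights are invented for illustration and are not IBM's actual components (DeepQA used hundreds of scorers with learned weights).

```python
from typing import Callable, Dict, List, Tuple

# A scorer maps (question, candidate answer) to a score in [0, 1].
Scorer = Callable[[str, str], float]

def keyword_overlap(question: str, candidate: str) -> float:
    """Jaccard overlap of question and candidate vocabularies (toy evidence)."""
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0

def length_prior(question: str, candidate: str) -> float:
    """Toy prior: quiz answers tend to be short noun phrases."""
    return 1.0 / len(candidate.split())

# Hypothetical scorer registry and hand-picked weights (DeepQA learned these).
SCORERS: Dict[str, Scorer] = {"overlap": keyword_overlap, "length": length_prior}
WEIGHTS: Dict[str, float] = {"overlap": 0.8, "length": 0.2}

def rank(question: str, candidates: List[str]) -> List[Tuple[str, float]]:
    """Merge all scorers into one confidence per candidate, highest first."""
    scored = [
        (cand, round(sum(WEIGHTS[n] * fn(question, cand)
                         for n, fn in SCORERS.items()), 3))
        for cand in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The design point is extensibility: each scorer is independent, so new evidence sources can be added to `SCORERS` without touching the merge step, which is one reason the paper calls the architecture extensible.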