The Secret Farm Team for Jeopardy! Players

Slate

As she met her fellow captains and competitors, all multiweek winners on the game show (including me), she was surprised by how familiar everyone seemed to be with one another. Back in 2014, when she made her first appearance, "I didn't know a single person who had ever been on the show," Julia told me. But this time, she marveled, "everyone else seems to have known each other, either personally or by reputation, for decades." They shared years of experience on Jeopardy's secret farm team: quiz bowl. Of the 18 "All-Stars" in the tourney, all but Julia and two others had played the academic competition known as quiz bowl in high school or college.


Eric Trump got a 'Jeopardy!' question correct, but that didn't convince people he was smart

Mashable

Last night was full of surprises. Surprise number two: Eric Trump can successfully answer a Jeopardy! question. Not only does this famously intelligent person get the answer correct (brother-in-law), he also answers it in the form of a question. He goes on to add a suggestive emoji of a fist punching the American flag.


Why Watson's win doesn't make humanity obsolete -- yet

AITopics Original Links

Despite Watson's tremendous performance, the Final Jeopardy question at the end of Tuesday night's airing revealed the Achilles' heel that computer scientists have known all along: Watson doesn't really "think" anything, and it struggles with simple questions that most humans can answer without a second thought. Watson's eventual commercial incarnation will be as a tool, not a human replacement. "This gives the Watson algorithm a great deal of 'traction.'" To us viewing the show, it's impressive if it correctly knows that Franz Schubert's birth date was January 31, 1797.


The AI Behind Watson -- The Technical Article

#artificialintelligence

The Jeopardy Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After three years of intense research and development by a core team of about 20 researchers, Watson is performing at human expert levels in terms of precision, confidence, and speed at the Jeopardy quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA. The architecture and methodology developed as part of this project has highlighted the need to take a systems-level approach to research in QA, and we believe this applies to research in the broader field of AI. We have developed many different algorithms for addressing different kinds of problems in QA and plan to publish many of them in more detail in the future.
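The abstract's emphasis on combining many algorithmic techniques and reporting a confidence alongside each answer can be illustrated with a minimal sketch. This is not IBM's implementation: the scorer names, weights, and threshold below are purely hypothetical stand-ins for DeepQA's many trained evidence scorers and its learned answer-merging model.

```python
# Hypothetical sketch of DeepQA-style answer ranking: several evidence
# scorers each judge a candidate, the scores are merged into one
# confidence, and the system only "buzzes in" above a threshold.
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    evidence_scores: dict  # feature name -> score in [0, 1]

def confidence(c, weights):
    # Weighted average standing in for DeepQA's trained merger model.
    total = sum(weights.values())
    return sum(w * c.evidence_scores.get(f, 0.0) for f, w in weights.items()) / total

def decide(candidates, weights, buzz_threshold=0.5):
    # Rank all candidates by confidence; answer only if confident enough.
    ranked = sorted(candidates, key=lambda c: confidence(c, weights), reverse=True)
    best = ranked[0]
    conf = confidence(best, weights)
    return (best.answer, conf) if conf >= buzz_threshold else (None, conf)

# Illustrative feature weights and candidates (not real Watson data).
weights = {"passage_support": 0.5, "type_match": 0.3, "popularity": 0.2}
candidates = [
    Candidate("Franz Schubert", {"passage_support": 0.9, "type_match": 1.0, "popularity": 0.6}),
    Candidate("Robert Schumann", {"passage_support": 0.4, "type_match": 1.0, "popularity": 0.5}),
]
answer, conf = decide(candidates, weights)
```

The key design point the abstract highlights survives even in this toy form: precision and confidence are separate concerns, since a system that knows when not to answer can trade attempt rate for accuracy.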


Leveraging Wikipedia Characteristics for Search and Candidate Generation in Question Answering

AAAI Conferences

Most existing Question Answering (QA) systems adopt a type-and-generate approach to candidate generation that relies on a pre-defined domain ontology. This paper describes a type independent search and candidate generation paradigm for QA that leverages Wikipedia characteristics. This approach is particularly useful for adapting QA systems to domains where reliable answer type identification and type-based answer extraction are not available. We present a three-pronged search approach motivated by relations an answer-justifying title-oriented document may have with the question/answer pair. We further show how Wikipedia metadata such as anchor texts and redirects can be utilized to effectively extract candidate answers from search results without a type ontology. Our experimental results show that our strategies obtained high binary recall in both search and candidate generation on TREC questions, a domain that has mature answer type extraction technology, as well as on Jeopardy! questions, a domain without such technology. Our high-recall search and candidate generation approach has also led to high overall QA performance in Watson, our end-to-end system.
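The paper's core idea, extracting candidate answers from retrieved document titles plus Wikipedia metadata such as redirects and anchor texts rather than from a type ontology, can be sketched as follows. This is an illustration under assumed data structures, not the paper's implementation; the toy metadata dictionaries are invented for the example.

```python
# Illustrative sketch: type-independent candidate generation that expands
# each retrieved title with its redirect variants (alternate names) and
# anchor texts (how links in other articles refer to it).
def generate_candidates(retrieved_titles, redirects, anchor_texts):
    """Collect candidate answers from document titles and their
    redirect/anchor-text variants, deduplicated via a set."""
    candidates = set()
    for title in retrieved_titles:
        candidates.add(title)
        candidates.update(redirects.get(title, []))     # alternate names
        candidates.update(anchor_texts.get(title, []))  # link-text variants
    return candidates

# Toy metadata for illustration only.
redirects = {"IBM Watson": ["Watson (computer)"]}
anchor_texts = {"IBM Watson": ["Watson", "the Watson system"]}
cands = generate_candidates(["IBM Watson"], redirects, anchor_texts)
```

Because no answer type is consulted, a sketch like this carries over unchanged to domains where type detection is unreliable, which is the adaptability the abstract claims for the approach.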


Building Watson: An Overview of the DeepQA Project

AI Magazine

IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV quiz show Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After three years of intense research and development by a core team of about 20 researchers, Watson is performing at human expert levels in terms of precision, confidence, and speed at the Jeopardy! quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.