Is it still cool to memorize a lot of stuff? Is there even a reason to memorize anything? Having a lot of information in your head was maybe never cool in the sexy-cool sense, more in the geeky-cool or class-brainiac sense. But people respected the ability to rattle off the names of all the state capitals, or to recite the periodic table. It was like the ability to dunk, or to play the piano by ear--something the average person can't do.
Artificial intelligence (AI) is creeping into our everyday lives, often without our realizing it. Today, AI can be found in the digital assistants we use to check our schedules and search the internet, such as Apple's (NASDAQ:AAPL) Siri and Amazon's (NASDAQ:AMZN) Alexa; in cars that park themselves by recognizing the space around the vehicle; and in the small robots that clean our houses, such as the Roomba vacuum. Artificial intelligence is becoming a larger part of our lives all the time, and it will only grow in importance in the coming years. In the not-too-distant future, AI will influence everything from how we shop for groceries to how doctors diagnose and treat diseases. It all adds up to a fast-growing market.
As she met her fellow captains and competitors, all multiweek winners on the game show (including me), she was surprised by how familiar everyone seemed to be with each other. Back in 2014, when she made her first appearance, "I didn't know a single person who had ever been on the show," Julia told me. But this time, she marveled, "everyone else seems to have known each other, either personally or by reputation, for decades." They shared years of experience on Jeopardy!'s secret farm team: quiz bowl. Of the 18 "All-Stars" in the tourney, all but Julia and two others had played the academic competition known as quiz bowl in high school or college.
IBM announced Thursday, Jan. 9, 2014, that it is investing over $1 billion to give its Watson cloud computing system its own business division and a new home in the heart of New York City. (AP Photo/Seth Wenig, File)

Don't technology companies that promote AI as the way forward also have an obligation to retrain our workforce to deal with the coming job disruption? Artificial intelligence, strong and weak, comes with a lot of moral implications. Weak AI (what we have now: Siri, Alexa, Waze, sophisticated IVR systems, and so on) is going to take jobs away from workers. It has been doing so for years, since the very first attempts. If a programmer can predict it, and a computer can do it, eventually companies will stop paying people to do that job.
The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After 3 years of intense research and development by a core team of about 20 researchers, Watson is performing at human expert levels in terms of precision, confidence, and speed on the Jeopardy! quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA. The architecture and methodology developed as part of this project have highlighted the need to take a systems-level approach to research in QA, and we believe this applies to research in the broader field of AI. We have developed many different algorithms for addressing different kinds of problems in QA and plan to publish many of them in more detail in the future.
Major advances in Question Answering technology were needed for IBM Watson to play Jeopardy! at championship level -- the show requires rapid-fire answers to challenging natural language questions, broad general knowledge, high precision, and accurate confidence estimates. In addition, Jeopardy! features four types of decision making carrying great strategic importance: (1) Daily Double wagering; (2) Final Jeopardy wagering; (3) selecting the next square when in control of the board; (4) deciding whether to attempt to answer, i.e., "buzz in." Using sophisticated strategies for these decisions that properly account for the game state and future event probabilities can significantly boost a player's overall chances of winning, compared with simple "rule of thumb" strategies. This article presents our approach to developing Watson's game-playing strategies, comprising the development of a faithful simulation model and the use of learning and Monte Carlo methods within the simulator to optimize Watson's strategic decision making. After giving a detailed description of each of our game-strategy algorithms, we then focus in particular on validating the accuracy of the simulator's predictions and documenting performance improvements using our methods. Quantitative performance benefits are shown with respect to both simple heuristic strategies and actual human contestant performance in historical episodes. We further extend our analysis of human play to derive a number of valuable and counterintuitive examples illustrating how human contestants may improve their performance on the show.
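The core idea of simulation-based wager optimization can be illustrated with a toy Final Jeopardy model. This is a minimal sketch under stated assumptions, not Watson's actual learned strategy: it assumes a single opponent who always wagers just enough to cover our doubled score (a common "shut-out" heuristic), and fixed, independent answer-accuracy probabilities for both players. All function names and parameters here are hypothetical.

```python
import random

def simulate_final_jeopardy(my_score, opp_score, p_me, p_opp,
                            wager, trials=10000):
    """Estimate win probability for a given Final Jeopardy wager
    by Monte Carlo simulation of both players' outcomes.

    Toy assumptions: one opponent, who wagers to cover our doubled
    score; each player answers correctly with a fixed probability.
    """
    rng = random.Random(0)  # fixed seed so wager comparisons are consistent
    # Opponent's "cover" wager: enough to beat 2 * my_score, capped by funds.
    opp_wager = min(opp_score, max(0, 2 * my_score - opp_score + 1))
    wins = 0
    for _ in range(trials):
        my_final = my_score + wager if rng.random() < p_me else my_score - wager
        opp_final = (opp_score + opp_wager if rng.random() < p_opp
                     else opp_score - opp_wager)
        if my_final > opp_final:
            wins += 1
    return wins / trials

def best_wager(my_score, opp_score, p_me, p_opp, step=1000):
    """Grid-search candidate wagers, keeping the highest estimated win rate."""
    candidates = range(0, my_score + 1, step)
    return max(candidates,
               key=lambda w: simulate_final_jeopardy(my_score, opp_score,
                                                     p_me, p_opp, w))
```

For example, `best_wager(12000, 15000, 0.85, 0.70)` searches wagers for a trailing player. The abstract's approach is far richer (a faithful game simulator plus learned models over full game states), but the pattern is the same: evaluate each candidate decision by simulating many game outcomes and pick the one with the best estimated result.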
Most existing Question Answering (QA) systems adopt a type-and-generate approach to candidate generation that relies on a pre-defined domain ontology. This paper describes a type independent search and candidate generation paradigm for QA that leverages Wikipedia characteristics. This approach is particularly useful for adapting QA systems to domains where reliable answer type identification and type-based answer extraction are not available. We present a three-pronged search approach motivated by relations an answer-justifying title-oriented document may have with the question/answer pair. We further show how Wikipedia metadata such as anchor texts and redirects can be utilized to effectively extract candidate answers from search results without a type ontology. Our experimental results show that our strategies obtained high binary recall in both search and candidate generation on TREC questions, a domain that has mature answer type extraction technology, as well as on Jeopardy! questions, a domain without such technology. Our high-recall search and candidate generation approach has also led to high overall QA performance in Watson, our end-to-end system.
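The metadata-driven extraction step can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the redirect and anchor-text tables below are hypothetical stand-ins for the Wikipedia metadata the abstract describes, and the function names are my own.

```python
# Toy Wikipedia metadata: redirects canonicalize alternate titles,
# and anchor texts map surface strings to the pages they link to
# (with link counts as a crude popularity prior).
REDIRECTS = {
    "Honest Abe": "Abraham Lincoln",
    "JFK": "John F. Kennedy",
}

ANCHORS = {
    "Lincoln": {"Abraham Lincoln": 900, "Lincoln, Nebraska": 120},
}

def candidates_from_title(title):
    """Map a retrieved document title to a candidate answer,
    canonicalizing through redirects (no type ontology needed)."""
    return {REDIRECTS.get(title, title)}

def candidates_from_anchor(text):
    """Expand an anchor string to the titles it commonly links to,
    most frequently linked first."""
    targets = ANCHORS.get(text, {})
    return sorted(targets, key=targets.get, reverse=True)
```

The point of the sketch is the type independence: candidates come from document titles, redirects, and anchor statistics rather than from a predefined answer-type ontology, which is what lets the approach transfer to domains without mature type-extraction technology.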
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, John Prager, and Chris Welty (IBM T. J. Watson Research Center); Eric Nyberg and Nico Schlaefer (Carnegie Mellon University)
IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV quiz show Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise.