"Questions are asked and answered every day. Question answering (QA) technology aims to deliver the same facility online. It goes further than the more familiar search based on keywords (as in Google, Yahoo, and other search engines), in attempting to recognize what a question expresses and to respond with an actual answer. This simplifies things for users in two ways. First, questions do not often translate into a simple list of keywords. ...Second, QA takes responsibility for providing answers, rather than a searchable list of links to potentially relevant documents (web pages), highlighted by snippets of text that show how the query matched the documents."
– from Bonnie Webber & Nick Webb. Question Answering. In The Handbook of Computational Linguistics and Natural Language Processing. Alexander Clark, Chris Fox, Shalom Lappin (Eds.). Wiley, 2010.
In this course I am going to introduce you to IBM Watson Studio AutoAI. Artificial Intelligence (AI) and Machine Learning (ML) are two very hot topics nowadays, and experts claim they are going to revolutionize the world. This course is designed for those who want a shortcut into these technologies: AutoAI and AutoML are new tools that provide methods and processes to make AI and ML accessible to non-experts.
Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received significant attention throughout the history of NLP research. This primary goal has been studied under different tasks, such as Question Answering (QA) and Textual Entailment (TE). In this thesis, we investigate the NLU problem through the QA task and focus on the aspects that make it a challenge for the current state-of-the-art technology. This thesis is organized into three main parts: In the first part, we explore multiple formalisms to improve existing machine comprehension systems. We propose a formulation for abductive reasoning in natural language and show its effectiveness, especially in domains with limited training data. Additionally, to help reasoning systems cope with irrelevant or redundant information, we create a supervised approach to learn and detect the essential terms in questions. In the second part, we propose two new challenge datasets. In particular, we create two datasets of natural language questions where (i) the first one requires reasoning over multiple sentences; (ii) the second one requires temporal common sense reasoning. We hope that the two proposed datasets will motivate the field to address more complex problems. In the final part, we present the first formal framework for multi-step reasoning algorithms in the presence of a few important properties of language use, such as incompleteness and ambiguity. We apply this framework to prove fundamental limitations for reasoning algorithms. These theoretical results provide additional intuition about the existing empirical evidence in the field.
Question Answering (QA), as a research field, has primarily focused on either knowledge bases (KBs) or free text as a source of knowledge. These two sources have historically shaped the kinds of questions asked and the methods developed to answer them. In this work, we look towards a practical use-case of QA over user-instructed knowledge that uniquely combines elements of both structured QA over knowledge bases and unstructured QA over narrative, introducing the task of multi-relational QA over personal narrative. As a first step towards this goal, we make three key contributions: (i) we generate and release TextWorldsQA, a set of five diverse datasets, where each dataset contains dynamic narrative that describes entities and relations in a simulated world, paired with variably compositional questions over that knowledge, (ii) we perform a thorough evaluation and analysis of several state-of-the-art QA models and their variants at this task, and (iii) we release a lightweight Python-based framework we call TextWorlds for easily generating arbitrary additional worlds and narrative, with the goal of allowing the community to create and share a growing collection of diverse worlds as a test-bed for this task.
Wouldn't it be great if an Android app could see and understand its surroundings? Can you imagine how much better its user interface could be if it could look at its users and instantly know their ages, genders, and emotions? Well, such an app might seem futuristic, but it's totally doable today. With the IBM Watson Visual Recognition service, creating mobile apps that can accurately detect and analyze objects in images is easier than ever. In this tutorial, I'll show you how to use it to create a smart Android app that can guess a person's age and gender and identify prominent objects in a photograph.
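The service described above is exposed as a REST API. As a minimal sketch (not the tutorial's Android code), the following Python helper shows how a client might assemble a call to the service; the endpoint paths, version date, and parameter names are illustrative assumptions, so consult the service's own documentation before relying on them.

```python
# Sketch: assembling a request to the IBM Watson Visual Recognition REST API.
# NOTE: the base URL, endpoint names, version date, and parameter names here
# are assumptions for illustration; check the official API reference.

def build_request(endpoint, api_key, version="2018-03-19",
                  base="https://gateway.watsonplatform.net/visual-recognition/api/v3"):
    """Return a (url, params) pair for a Visual Recognition call.

    The caller would then POST the image bytes to `url` with these query
    parameters (e.g. via the `requests` library).
    """
    return f"{base}/{endpoint}", {"api_key": api_key, "version": version}

# "classify" would label prominent objects in a photo, while a face-detection
# endpoint (here assumed to be "detect_faces") would estimate age and gender.
url, params = build_request("detect_faces", api_key="YOUR_API_KEY")
```

In a real Android app the same call would go through the Watson SDK or an HTTP client on a background thread, but the request shape is the same.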
The alliance is the latest by IBM in a bid to harness Watson's cognitive learning capabilities to benefit millions of college students and professors. The announcement follows a separate agreement announced at the end of June between IBM and Blackboard, and the rollout of an IBM Watson-enabled app for Apple earlier this month, among other initiatives. For Pearson, the alliance represents a chance to combine its global offering of digital learning products with IBM's cognitive learning platform in an effort to give students a more immersive learning experience with their college courses. And it promises to give instructors greater insights about how well students are navigating through their courses. To accomplish that, Watson will essentially ingest and analyze all of Pearson's courseware.
We describe a course in which students train an instance of Watson and develop an application that interacts with the trained instance. Additionally, students learn technical information about the Jeopardy! version of Watson and discuss a future infused with cognitive assistants. In this paper, we provide learning outcomes and course assessment items, along with detailed course materials and advice for instructors interested in teaching such a course. The advice takes the form of best practices, a description of a successful use case, and an evaluation of our experience teaching this course.
We developed a course in which students train an instance of Watson and develop an application that interacts with the trained instance. Additionally, students learn technical information about the Jeopardy! version of Watson and they discuss a future infused with cognitive assistants. In this poster, we justify this course, characterize major assessment items and provide advice on choosing a domain.
Chaudhri, Vinay K. (SRI International) | Cheng, Britte (SRI International) | Overholtzer, Adam (SRI International) | Roschelle, Jeremy (SRI International) | Spaulding, Aaron (SRI International) | Clark, Peter (Vulcan Inc.) | Greaves, Mark (Pacific Northwest National Laboratory) | Gunning, Dave (Palo Alto Research Center)
Inquire Biology is a prototype of a new kind of intelligent textbook — one that answers students’ questions, engages their interest, and improves their understanding. Inquire Biology provides unique capabilities via a knowledge representation that captures conceptual knowledge from the textbook and uses inference procedures to answer students’ questions. Students ask questions by typing free-form natural language queries or by selecting passages of text. The system then attempts to answer the question and also generates suggested questions related to the query or selection. The questions supported by the system were chosen to be educationally useful, for example: What is the structure of X? Compare X and Y. How does X relate to Y? In user studies, students found this question-answering capability to be extremely useful while reading and while doing problem solving. In an initial controlled experiment, community college students using the Inquire Biology prototype outperformed students using either a hardcopy or conventional e-book version of the same biology textbook. While additional research is needed to fully develop Inquire Biology, the initial prototype clearly demonstrates the promise of applying knowledge representation and question-answering technology to electronic textbooks.
Gunning, David (Vulcan, Inc.) | Chaudhri, Vinay K. (SRI International) | Clark, Peter E. (Boeing Research and Technology) | Barker, Ken (University of Texas at Austin) | Chaw, Shaw-Yi (University of Texas at Austin) | Greaves, Mark (Vulcan, Inc.) | Grosof, Benjamin (Vulcan, Inc.) | Leung, Alice (Raytheon BBN Technologies Corporation) | McDonald, David D. (Raytheon BBN Technologies Corporation) | Mishra, Sunil (SRI International) | Pacheco, John (SRI International) | Porter, Bruce (University of Texas at Austin) | Spaulding, Aaron (SRI International) | Tecuci, Dan (University of Texas at Austin) | Tien, Jing (SRI International)
In the Winter 2004 issue of AI Magazine, we reported Vulcan Inc.'s first step toward creating a question-answering system called "Digital Aristotle." The goal of that first step was to assess the state of the art in applied Knowledge Representation and Reasoning (KRR) by asking AI experts to represent 70 pages from the advanced placement (AP) chemistry syllabus and to deliver knowledge-based systems capable of answering questions from that syllabus. This paper reports the next step toward realizing a Digital Aristotle: we present the design and evaluation results for a system called AURA, which enables domain experts in physics, chemistry, and biology to author a knowledge base and then allows a different set of users to ask novel questions against that knowledge base. These results represent a substantial advance over what we reported in 2004, both in the breadth of covered subjects and in the provision of sophisticated technologies in knowledge representation and reasoning, natural language processing, and question answering to domain experts and novice users.
Friedland, Noah S., Allen, Paul G., Matthews, Gavin, Witbrock, Michael, Baxter, David, Curtis, Jon, Shepard, Blake, Miraglia, Pierluigi, Angele, Jurgen, Staab, Steffen, Moench, Eddie, Oppermann, Henrik, Wenke, Dirk, Israel, David, Chaudhri, Vinay, Porter, Bruce, Barker, Ken, Fan, James, Chaw, Shaw Yi, Yeh, Peter, Tecuci, Dan, Clark, Peter
Vulcan selected three teams, each of which was to formally represent 70 pages from the advanced placement (AP) chemistry syllabus and deliver knowledge-based systems capable of answering questions on that syllabus. The evaluation quantified each system's coverage of the syllabus in terms of its ability to answer novel, previously unseen questions and to provide human-readable answer justifications. These justifications will play a critical role in building user trust in the question-answering capabilities of Digital Aristotle. This article presents the motivation and long-term goals of Project Halo, describes in detail the six-month first phase of the project -- the Halo Pilot -- its KR&R challenge, empirical evaluation, results, and failure analysis.