"Questions are asked and answered every day. Question answering (QA) technology aims to deliver the same facility online. It goes further than the more familiar search based on keywords (as in Google, Yahoo, and other search engines), in attempting to recognize what a question expresses and to respond with an actual answer. This simplifies things for users in two ways. First, questions do not often translate into a simple list of keywords. ...Second, QA takes responsibility for providing answers, rather than a searchable list of links to potentially relevant documents (web pages), highlighted by snippets of text that show how the query matched the documents."
– from Bonnie Webber & Nick Webb. Question Answering. In The Handbook of Computational Linguistics and Natural Language Processing. Alexander Clark, Chris Fox, Shalom Lappin (Eds.). Wiley, 2010.
Dr. David Ferrucci is one of the few people who have set a benchmark in the history of AI: when IBM Watson won Jeopardy!, we reached a milestone many thought impossible. I was very privileged to have Ferrucci on my podcast in early 2012, when we spent an hour on Watson's intricacies and importance. Well, it's been almost 8 years since our original conversation, and it was time to catch up with David to talk about the things that have happened in the world of AI, the things that were supposed to happen but didn't, and our present and future in relation to Artificial Intelligence. All in all, I was super excited to have Ferrucci back on my podcast and hope you enjoy our conversation as much as I did. During this 90-minute interview with David Ferrucci, we cover a variety of interesting topics, such as: his perspective on IBM Watson; AI, hype and human cognition; benchmarks on the singularity timeline; his move away from IBM to the biggest hedge fund in the world; Elemental Cognition and its goals, mission and architecture; Noam Chomsky and Marvin Minsky's skepticism of Watson; deductive, inductive and abductive learning; leading and managing from the architecture down; Black Box vs Open Box AI; CLARA – the Collaborative Learning and Reading Agent – and the best and worst applications thereof; the importance of meaning and whether AI can be the source of it; whether AI is the greatest danger humanity is facing today; why technology is a magnifying mirror; and why the world is transformed by asking questions.
Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors, and fail to generalize to test data with a significantly different question-answer (QA) distribution. To address this issue, we introduce a self-critical training objective that ensures that visual explanations of correct answers match the most influential image regions more than other competitive answer candidates. The influential regions are either determined from human visual/textual explanations or automatically from just significant words in the question and answer. We evaluate our approach on the VQA generalization task using the VQA-CP dataset, achieving a new state of the art: 49.5% using textual explanations and 48.5% using automatically determined regions. (Published at the Neural Information Processing Systems Conference.)
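The abstract's "self-critical" idea can be illustrated with a schematic hinge loss. This is a hedged sketch based only on the description above, not the authors' implementation; the function name, the sensitivity inputs, and the margin parameter are all illustrative assumptions.

```python
import numpy as np

def self_critical_loss(sens_correct, sens_rival, region_mask, margin=0.0):
    """Schematic self-critical objective (names are illustrative).

    sens_correct / sens_rival: per-region sensitivity (e.g. gradient-based
    influence) of the correct answer and of a competing answer candidate.
    region_mask: 1 for influential regions (from human visual/textual
    explanations, or derived from significant question/answer words), 0 elsewhere.

    The hinge term penalizes the model whenever the rival answer is more
    strongly tied to the influential regions than the correct answer.
    """
    infl_correct = float((sens_correct * region_mask).sum())
    infl_rival = float((sens_rival * region_mask).sum())
    return max(0.0, infl_rival - infl_correct + margin)
```

When the correct answer already dominates the annotated regions, the loss is zero; gradients flow only on examples where a rival answer attends to the evidence more strongly.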
Visual Question Answering (VQA) is the task of answering questions about an image. Some VQA models often exploit unimodal biases to provide the correct answer without using the image information. As a result, they suffer from a huge drop in performance when evaluated on data outside their training set distribution. This critical issue makes them unsuitable for real-world settings. We propose RUBi, a new learning strategy to reduce biases in any VQA model.
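The abstract does not spell out the mechanism, but a RUBi-style bias-reduction step can be sketched as a question-only branch whose sigmoid output rescales the base model's answer logits during training. This is a schematic illustration under that assumption, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rubi_fused_logits(main_logits, question_only_logits):
    """RUBi-style training-time fusion (schematic).

    A question-only branch captures the unimodal language bias; its sigmoid
    mask rescales the base VQA model's answer logits. Examples that the
    question-only branch already answers confidently yield saturated fused
    scores, so they contribute smaller gradients to the base model. At test
    time the mask is dropped and only main_logits are used.
    """
    return main_logits * sigmoid(question_only_logits)
```

The design intent is that the base model is pushed to learn from the examples its language-only shortcut cannot solve, reducing its dependence on question priors.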
Many financial institutions are rapidly developing and adopting AI models. They're using the models to achieve new competitive advantages such as being able to make faster and more successful underwriting decisions. However, AI models introduce new risks. In a previous post, I describe why AI models increase risk exposure compared to the more traditional, rule-based models that have been in use for decades. In short, if AI models have been trained on biased data, lack explainability, or perform inadequately, they can expose organizations to as much as seven-figure losses or fines.
Google introduced an open-source library for the rapid prototyping of quantum ML models! In order to understand quantum models, you need to familiarize yourself with two concepts: quantum data and hybrid quantum-classical models (current approach). Quantum Data: (which can be generated) can be used for the simulation of chemicals and quantum matter, quantum control, quantum communication networks, quantum metrology, and much more. Hybrid quantum-classical models: OK spoiler alert, these quantum models are not YET using quantum powered hardware (still too noisy), so we are left with using GPUs. So that's why they are "hybrid".
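To make the "hybrid quantum-classical" idea concrete, here is a toy illustration (deliberately not using the Google library itself): a statevector simulation of a one-qubit parameterized circuit produces an expectation value, which a classical layer then post-processes. All function names are illustrative.

```python
import numpy as np

def quantum_expectation(theta: float) -> float:
    """Simulate RY(theta)|0> and measure <Z>; analytically equals cos(theta)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # RY applied to |0>
    z = np.array([[1.0, 0.0], [0.0, -1.0]])                   # Pauli-Z observable
    return float(state @ z @ state)

def hybrid_model(theta: float, w: float, b: float) -> float:
    """Hybrid step: a classical linear layer consumes the quantum feature."""
    return w * quantum_expectation(theta) + b
```

In a real hybrid setup, the circuit parameters and the classical weights are trained jointly; here the quantum part is simulated classically, which mirrors the "still on GPUs" point above.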
IBM recently announced several new Watson technologies designed to help organizations identify, understand, and analyze some of the most challenging aspects of the English language with greater clarity and insight. These new features are considered the first commercialization of key Natural Language Processing (NLP) capabilities to come from IBM Research's Project Debater. There is a new advanced sentiment analysis feature designed to identify and analyze idioms and colloquialisms for the first time, so it can recognize phrases such as "hardly helpful" or "hot under the collar." Phrases like those have been challenging for artificial intelligence systems since they are difficult for algorithms to spot.
In previous posts we explored what analysts want to discover about their virtual assistant and some building blocks for building analytics. In this post I will demonstrate some common recipes tailored to Watson Assistant logs. First we extract raw log events and store them on the file system. This requires the API key (apikey) and URL for your skill. For a single-skill assistant you will also need the workspace ID (extractable from the "Legacy v1 Workspace URL"); for a multi-skill assistant there are other IDs you can filter on (described in the Watson Assistant list log events API).
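The extraction step above can be sketched roughly as follows. This assumes the v1 "list log events" endpoint and its `pagination.next_cursor` field; the version date is a placeholder, and `get_page` stands in for an HTTP call wrapping `requests.get` with basic auth (`"apikey"`, your API key).

```python
import json
from urllib.parse import urlencode

def logs_url(base_url, workspace_id, version="2021-06-14", page_limit=500, cursor=None):
    """Build the Watson Assistant v1 'list log events' URL for one page."""
    params = {"version": version, "page_limit": page_limit}
    if cursor:
        params["cursor"] = cursor
    return f"{base_url}/v1/workspaces/{workspace_id}/logs?{urlencode(params)}"

def fetch_all_logs(get_page, base_url, workspace_id):
    """Follow pagination cursors until every page of log events is read.

    get_page: any callable url -> parsed JSON dict, e.g. a wrapper around
    requests.get(url, auth=("apikey", API_KEY)).json().
    """
    events, cursor = [], None
    while True:
        page = get_page(logs_url(base_url, workspace_id, cursor=cursor))
        events.extend(page.get("logs", []))
        cursor = page.get("pagination", {}).get("next_cursor")
        if not cursor:
            return events

def save_logs(events, path):
    """Store the raw events on the file system for later analysis."""
    with open(path, "w") as f:
        json.dump(events, f)
```

Keeping the HTTP call behind `get_page` also makes the pagination logic easy to test offline with canned responses.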
Artificial intelligence researchers at IBM have introduced a major upgrade to the famed Watson computer, allowing it to understand idioms and colloquialisms for the first time. IBM says the update makes it the first commercial AI system capable of identifying, understanding and analysing some of the most challenging aspects of the English language. Phrases like "hardly helpful" and "hot under the collar" are tricky for algorithms to spot, meaning AI is unable to debate complex topics or have nuanced conversations with humans. "Language is a tool for expressing thought and opinion, as much as it is a tool for information," said Rob Thomas, a general manager at IBM Data and AI. "This is why we believe that advancing our ability to capture, analyse, and understand more from language with NLP will help transform how businesses utilise their intellectual capital that is codified in data."
IBM is announcing several new IBM Watson technologies designed to help organizations begin identifying, understanding, and analyzing some of the most challenging aspects of the English language with greater clarity, for greater insights. The new technologies represent the first commercialization of key Natural Language Processing (NLP) capabilities to come from IBM Research's Project Debater, the only AI system capable of debating humans on complex topics. For example, a new advanced sentiment analysis feature is designed to identify and analyze idioms and colloquialisms for the first time. Phrases like 'hardly helpful' or 'hot under the collar' have been challenging for AI systems because they are difficult for algorithms to spot. With advanced sentiment analysis, businesses can begin analyzing such language data with Watson APIs for a more holistic understanding of their operation.