"Questions are asked and answered every day. Question answering (QA) technology aims to deliver the same facility online. It goes further than the more familiar search based on keywords (as in Google, Yahoo, and other search engines), in attempting to recognize what a question expresses and to respond with an actual answer. This simplifies things for users in two ways. First, questions do not often translate into a simple list of keywords. ...Second, QA takes responsibility for providing answers, rather than a searchable list of links to potentially relevant documents (web pages), highlighted by snippets of text that show how the query matched the documents."
– from Bonnie Webber & Nick Webb. Question Answering. In The Handbook of Computational Linguistics and Natural Language Processing. Alexander Clark, Chris Fox, Shalom Lappin (Eds.). Wiley, 2010.
State Farm is moving forward with several digital initiatives as the largest personal lines P&C insurer in the U.S. by market share rides the digitalization wave shaking up the industry. The company has launched a $100 million fund, State Farm Ventures, with the goal of increasing its involvement in and adoption of insurtech. Led by innovation executive Michael Remmes, the unit will focus on "acquiring startups or strategic alliances that support our core products," says spokesperson Angie Harrier. With a major thrust of insurtech being use cases for artificial intelligence, State Farm is beginning to explore that technology as well. The insurer is running an ad campaign along with the Weather Company and IBM Watson through Halloween that uses Watson's cognitive computing technology to deliver relevant storm-preparation content to affected customers.
Las Vegas, HR Technology Conference & Expo #HRTech -- LEADx, Inc., the world's leading Conversational Learning (CL) platform for leadership enablement, today launched LEADx Coach Amanda, an executive coach virtual assistant powered by IBM Watson Assistant. "We believe every manager deserves a coach," said Kevin Kruse, LEADx founder and CEO. "Traditional leadership development, based on workshops and online tutorials, has long failed enterprises and managers alike. Executive coaches work well, but due to their cost they are ironically reserved for the leaders who have the most experience. But now, we've tapped the power of AI to democratize leadership development."
There are three modalities in the reading comprehension setting: question, answer, and context. The task of question answering or question generation aims to infer an answer or a question, given the counterpart, based on context. We present a novel two-way neural sequence transduction model that connects the three modalities, allowing it to learn the two tasks simultaneously so that they benefit each other. During training, the model receives question-context-answer triplets as input and captures the cross-modal interaction via a hierarchical attention process. Unlike previous joint learning paradigms that leverage the duality of question generation and question answering at the data level, we solve these dual tasks at the architecture level by mirroring the network structure and partially sharing components at different layers. This enables knowledge to be transferred from one task to the other, helping the model find a general representation for each modality. Evaluation on four public datasets shows that our dual-learning model outperforms its mono-learning counterpart as well as state-of-the-art joint models on both the question answering and question generation tasks.
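The component-sharing idea can be sketched in a few lines of plain numpy. This is a minimal, hypothetical illustration, not the authors' actual architecture: a single shared context projection serves both directions, and the two tasks mirror each other by attending over the same encoded context.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention: weight values by query-key similarity.
    scores = query @ keys.T / np.sqrt(keys.shape[1])
    return softmax(scores) @ values

rng = np.random.default_rng(0)
d = 8
# One shared context projection, reused by both dual tasks.
W_context = rng.normal(size=(d, d))

context = rng.normal(size=(5, d)) @ W_context   # 5 encoded context tokens
question = rng.normal(size=(3, d))              # 3 question tokens
answer = rng.normal(size=(2, d))                # 2 answer tokens

# QA direction: the question attends over the context to locate an answer.
qa_repr = attend(question, context, context)
# QG direction mirrors the structure: the answer attends over the same context.
qg_repr = attend(answer, context, context)

print(qa_repr.shape, qg_repr.shape)  # (3, 8) (2, 8)
```

Because `W_context` appears on both paths, gradients from either task would update the same parameters, which is the architecture-level transfer the abstract describes.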
Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a large-scale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.
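A TVQA-style QA pair bundles a question, multiple answer choices, the correct choice's index, the source clip, and timestamps localizing the relevant moment. The sketch below reads one such record; the field names and the example question are illustrative assumptions, not the official release schema.

```python
import json

# Hypothetical TVQA-style record; field names are illustrative only.
record = json.loads("""
{
  "qid": 0,
  "q": "What is Sheldon holding when he talks to Leonard?",
  "answers": ["A mug", "A laptop", "A comic book", "A whiteboard", "A phone"],
  "answer_idx": 1,
  "clip": "s01e01_seg02",
  "ts": [15.2, 21.8]
}
""")

def format_pair(rec):
    """Pair the question with its gold answer and the localized span length."""
    start, end = rec["ts"]
    return rec["q"], rec["answers"][rec["answer_idx"]], end - start

q, a, span = format_pair(record)
print(f"{q} -> {a} ({span:.1f}s of video)")
```

The timestamp pair is what makes the questions compositional: a system has to localize the moment before it can ground the dialogue and visual concepts.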
The component parts of a successful search engine optimization (SEO) strategy may have remained relatively constant, but their definition and purpose have changed entirely. Driven by trends like visual search and voice search, the industry's scope has expanded and evolved into something more dynamic. This delivers on a genuine consumer need. According to a report from Slyce.it, 74 percent of shoppers report that text-only search is insufficient for finding the products they want. It is unsurprising that Gartner research predicts that by 2021, early adopter brands that redesign their websites to support visual and voice search will increase digital commerce revenue by as much as 30 percent.
What if artificial intelligence can't cure cancer after all? That's the message of a big Wall Street Journal post-mortem on Watson, the IBM project that was supposed to turn IBM's computing prowess into a scalable program that could deliver state-of-the-art personalized cancer treatment protocols to millions of patients around the world. Watson in general, and its oncology application in particular, has been receiving a lot of skeptical coverage of late; STAT published a major investigation last year, reporting that Watson was nowhere near being able to live up to IBM's promises. After that article came out, the IBM hype machine started toning things down a bit. But while many of Watson's problems are medical or technical, they are also deeply financial.
Business leaders understand the advantage of using the power of artificial intelligence and machine learning to stay ahead of their competitors. However, understanding the power of AI is very different from actually implementing it successfully. For example, in 2017 Gartner estimated that Big Data projects have a success rate of only 15%. While organizational factors may be a primary reason for this poor success rate, another could be a lack of the AI and machine learning talent needed to pursue these types of projects successfully. Specifically, surveys show a shortage of advanced machine learning skills among data professionals: fewer than 20% of surveyed data professionals said they were competent in areas such as Natural Language Processing (19%), Recommendation Engines (14%), Reinforcement Learning (6%), Adversarial Learning (4%), and Neural Networks – RNNs (15%).
I've been using Jupyter Notebooks with great delight for many years now, mostly with Python, and it's validating to see that their popularity keeps growing, both in academia and in industry. I do have a pet peeve though, which is the lack of a first-class visual debugger similar to those available in other IDEs like Eclipse, IntelliJ, or Visual Studio Code. Some would rightfully point out that Jupyter already supports pdb for simple debugging, where you can manually and sequentially enter commands to do things like inspect variables, set breakpoints, etc. -- and this is probably sufficient when it comes to debugging simple analytics. To raise the bar, the PixieDust team is happy to introduce the first (to the best of our knowledge) visual Python debugger for Jupyter Notebooks. As advertised, the PixieDebugger is a visual Python debugger built as a PixieApp, and includes a source editor, local variable inspector, console output, the ability to evaluate Python expressions in the current context, breakpoint management, and a toolbar for controlling code execution.
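The manual pdb workflow mentioned above looks roughly like this in a notebook cell. The function being stepped through is a made-up example; the `(Pdb)` commands in the comments are standard pdb commands, entered one at a time at the interactive prompt.

```python
import pdb

def running_mean(values):
    """Return the running mean after each element; a small function
    worth stepping through to watch `total` and `means` evolve."""
    total = 0
    means = []
    for i, v in enumerate(values, start=1):
        total += v
        means.append(total / i)
    return means

# In a notebook cell you would start the debugger and drive it manually:
#   pdb.run("running_mean([1, 2, 3])")
#   (Pdb) b running_mean     # set a breakpoint on the function
#   (Pdb) c                  # continue until the breakpoint is hit
#   (Pdb) n                  # step to the next line
#   (Pdb) p total, means     # inspect local variables
print(running_mean([1, 2, 3]))  # [1.0, 1.5, 2.0]
```

This sequential, type-one-command-at-a-time loop is exactly what a visual debugger like the PixieDebugger replaces with point-and-click controls.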
You've got to love the idea of IBM Watson: a supercomputer using advanced AI to learn everything, faster and better than any human being could ever hope to. The hope was that it would help us solve some of our most pressing problems. One of IBM's (IBM) high-profile challenges was its desire to cure cancer. Unfortunately, that has not happened.