"Questions are asked and answered every day. Question answering (QA) technology aims to deliver the same facility online. It goes further than the more familiar search based on keywords (as in Google, Yahoo, and other search engines), in attempting to recognize what a question expresses and to respond with an actual answer. This simplifies things for users in two ways. First, questions do not often translate into a simple list of keywords. ...Second, QA takes responsibility for providing answers, rather than a searchable list of links to potentially relevant documents (web pages), highlighted by snippets of text that show how the query matched the documents."
– from Bonnie Webber & Nick Webb. Question Answering. In The Handbook of Computational Linguistics and Natural Language Processing. Alexander Clark, Chris Fox, Shalom Lappin (Eds.). Wiley, 2010.
Meghan McCain's viral clapback against one Twitter user resurfaced Friday during an episode of "Jeopardy!" McCain shared a photo of a Jeopardy question on social media that referenced a tweet she sent in March in response to a remark made by a conservative commentator criticizing "The View." "Twitter erupted when this co-host of 'The View' responded to a critic with the tweet 'you were at my wedding Denise,'" the question read. Denise McAllister, a contributor for The Federalist, a conservative website founded by McCain's husband, slammed the daytime talk show hosts in a March 25 tweet as "delusional mental midgets" that lack "emotional regulation." McCain fired back with "you were at my wedding Denise…" The six-word response sent social media users into a frenzy, and they used the clapback to inspire a series of memes. McAllister sent a response tweet to clarify that she was not directing her remark at McCain.
Last week Bloomberg reported that our Alexa fears are true: someone might be listening! Amazon came under scrutiny for having thousands of people around the world listening to Alexa commands to help improve the Alexa digital assistant powering its line of Echo speakers. According to Bloomberg, the team listens to voice recordings captured in Echo owners' homes and offices. The recordings are transcribed, annotated and then fed back into the software to eliminate gaps in Alexa's understanding of human speech and help it better respond to commands. It is widely reported that ComScore predicts that by 2020, voice will make up 50% of all searches, although critics of the trend cite trust between the consumer and voice tech as a main barrier to growth.
We present the task of Spatio-Temporal Video Question Answering, which requires intelligent systems to simultaneously retrieve relevant moments and detect referenced visual concepts (people and objects) to answer natural language questions about videos. We first augment the TVQA dataset with 310.8k bounding boxes, linking depicted objects to visual concepts in questions and answers. We name this augmented version TVQA+. We then propose Spatio-Temporal Answerer with Grounded Evidence (STAGE), a unified framework that grounds evidence in both the spatial and temporal domains to answer questions about videos. Comprehensive experiments and analyses demonstrate the effectiveness of our framework and how the rich annotations in our TVQA+ dataset can contribute to the question answering task. As a byproduct of performing this joint task, our model is able to produce more insightful intermediate results. Dataset and code are publicly available.
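To make the moment-retrieval half of this task concrete: systems that ground evidence temporally are typically scored by how well a predicted time interval overlaps the annotated one. The sketch below shows temporal intersection-over-union, a standard metric for this kind of evaluation; it is an illustrative helper, not code from the STAGE release.

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two time intervals (start, end), in seconds.

    Returns 0.0 when the intervals do not overlap or are degenerate.
    Illustrative metric for moment retrieval; not taken from the STAGE codebase.
    """
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0
```

A prediction of (0s, 10s) against a ground-truth moment of (5s, 15s) overlaps for 5 of 15 seconds, giving an IoU of 1/3; evaluations commonly count a retrieval as correct when IoU exceeds a threshold such as 0.5.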
IBM Watson Health is tapering off its Drug Discovery program, which uses "AI" software to help companies develop new pharmaceuticals, blaming poor sales. IBM spokesperson Ed Barbini told The Register: "We are not discontinuing our Watson for Drug Discovery offering, and we remain committed to its continued success for our clients currently using the technology. We are focusing our resources within Watson Health to double down on the adjacent field of clinical development where we see an even greater market need for our data and AI capabilities." In other words, it appears the product won't be sold to any new customers; however, organizations that want to continue using the system will still be supported. When we pressed Big Blue's spinners to clarify this, they tried to downplay the situation using these presumably Watson neural-network-generated words: The offering is staying on the market, and we'll work with clients who want to team with IBM in this area.
Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question. However, for multi-hop QA tasks, which require reasoning with multiple sentences, it remains unclear how best to utilize entailment models pre-trained on large-scale datasets such as SNLI, which are based on sentence pairs. We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks. Multee uses (i) a local module that helps locate important sentences, thereby avoiding distracting information, and (ii) a global module that aggregates information by effectively incorporating importance weights. Importantly, we show that both modules can use entailment functions pre-trained on large-scale NLI datasets. We evaluate performance on MultiRC and OpenBookQA, two multi-hop QA datasets. When using an entailment function pre-trained on NLI datasets, Multee outperforms QA models trained only on the target QA datasets and the OpenAI transformer models. The code is available at https://github.com/StonyBrookNLP/multee.
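The interplay of the two modules can be pictured with a toy computation: the local module assigns each sentence a relevance logit, and the global module combines per-sentence entailment scores using those (normalized) weights. This is a minimal stand-in with made-up numbers, not the actual Multee implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_entailment(sentence_scores, relevance_logits):
    """Weight each sentence's entailment score by its softmax-normalized
    relevance, then sum - a toy analogue of Multee's global module
    incorporating the local module's importance weights."""
    weights = softmax(relevance_logits)
    return sum(w * s for w, s in zip(weights, sentence_scores))
```

With uniform relevance logits this reduces to a plain average, while a sharply peaked relevance distribution lets one supporting sentence dominate and distractor sentences are effectively ignored, which is the behavior the local module is meant to provide.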
In this paper, we address the question answering challenge with the SQuAD 2.0 dataset. We design a model architecture which leverages BERT's capability of context-aware word embeddings and BiDAF's context interactive exploration mechanism. By integrating these two state-of-the-art architectures, our system tries to extract the contextual word representation at word and character levels, for better comprehension of both question and context and their correlations. We also propose our original joint posterior probability predictor module and its associated loss functions. Our best model so far obtains an F1 score of 75.842% and an EM score of 72.24% on the test PCE leaderboard.
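For readers unfamiliar with extractive QA on SQuAD, span prediction boils down to scoring candidate (start, end) pairs from two distributions over context tokens. The sketch below selects the span maximizing P(start) * P(end) subject to end >= start and a length cap; it illustrates the general idea behind joint span scoring with toy logits, not the authors' specific joint posterior predictor or loss.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def best_span(start_logits, end_logits, max_len=15):
    """Return the (start, end) token indices maximizing the joint
    probability P(start) * P(end), with end >= start and a capped
    span length - a generic baseline decoder, not the paper's module."""
    p_start = softmax(start_logits)
    p_end = softmax(end_logits)
    best, best_p = (0, 0), -1.0
    for i, ps in enumerate(p_start):
        for j in range(i, min(i + max_len, len(p_end))):
            p = ps * p_end[j]
            if p > best_p:
                best, best_p = (i, j), p
    return best, best_p
```

On SQuAD 2.0 a real system must also handle unanswerable questions, typically by comparing the best span score against a "no answer" score; that comparison is omitted here for brevity.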
Golf fans who are planning to watch the Masters this weekend will have yet more ways to check out the action. For the first time at a golf tournament, practically every one of the more than 20,000 shots from the first major of the year will be available to view on the Masters website and app within five minutes of a player striking the ball. While these videos won't be live, you'll essentially be able to watch full rounds from the likes of Tiger Woods, Rory McIlroy and Jordan Spieth without such trivial matters as watching them walk between shots. There is a caveat in that cameras might not capture shots in some instances, such as those from unusual lies, or if a group's tee shots end up in wildly different spots. The Masters attracts sports aficionados who might not typically watch golf as well as devotees, so it's a high-profile way to debut this technology after a few years of development.
As she met her fellow captains and competitors, all multiweek winners on the game show (including me), she was surprised how familiar everyone seemed to be with each other. Back in 2014, when she made her first appearance, "I didn't know a single person who had ever been on the show," Julia told me. But this time, she marveled, "everyone else seems to have known each other, either personally or by reputation, for decades." They shared years of experience on Jeopardy's secret farm team: quiz bowl. Of the 18 "All-Stars" in the tourney, all but Julia and two others had played the academic competition known as quiz bowl in high school or college.