Sun

AAAI Conferences

In question answering, answer extraction aims to pin-point the exact answer from passages. However, most previous methods perform such extraction on each passage separately, without considering clues provided in other passages. This paper presents a novel approach to extract answers by fully leveraging connections among different passages. Specifically, extraction is performed on a Passage Graph which is built by adding links upon multiple passages. Different passages are connected by linking words with the same stem. We use the factor graph as our model for answer extraction. Experimental results on multiple QA datasets demonstrate that our method significantly improves the performance of answer extraction.
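
To make the passage-graph idea concrete, here is a minimal sketch (not the authors' implementation) of how candidate words from different passages could be linked when they share a word stem, using NLTK's Porter stemmer and networkx; the factor-graph inference that the paper runs on top of this structure is omitted.

```python
# Illustrative sketch only: build a graph over passage tokens, adding
# within-passage sequence edges and cross-passage edges between words
# that share the same stem.
from nltk.stem import PorterStemmer
import networkx as nx

def build_passage_graph(passages):
    stemmer = PorterStemmer()
    graph = nx.Graph()
    stem_index = {}  # stem -> list of node ids sharing that stem

    for p_id, passage in enumerate(passages):
        prev_node = None
        for t_id, token in enumerate(passage.split()):
            node = (p_id, t_id)
            graph.add_node(node, token=token)
            # sequential edge inside the same passage
            if prev_node is not None:
                graph.add_edge(prev_node, node, kind="sequence")
            prev_node = node
            # cross-passage edge: link words with the same stem
            stem = stemmer.stem(token.lower())
            for other in stem_index.get(stem, []):
                if other[0] != p_id:
                    graph.add_edge(other, node, kind="same_stem")
            stem_index.setdefault(stem, []).append(node)
    return graph

passages = ["Obama was born in Honolulu",
            "Honolulu is the birthplace of Barack Obama"]
g = build_passage_graph(passages)
print(g.number_of_nodes(), g.number_of_edges())
```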


Microsoft creates AI that can read a document and answer questions about it as well as a person - The AI Blog

#artificialintelligence

Microsoft researchers have created technology that uses artificial intelligence to read a document and answer questions about it about as well as a human. It's a major milestone in the push to have search engines such as Bing and intelligent assistants such as Cortana interact with people and provide information in more natural ways, much...


Finding Generalizable Evidence by Learning to Convince Q&A Models

arXiv.org Artificial Intelligence

We plot the judge's probability of the target answer given a sentence against how often humans also select that target answer given the same sentence. As Figure 7 shows, evidence that the judge model finds strong tends to be evidence that humans also find strong. Combined with the previous result, we can see that learned agents are more accurate at predicting sentences that humans find to be strong evidence.

F Model Evaluation of Evidence on DREAM. Figure 8 shows how convincing various judge models find each evidence agent. Our findings on DREAM are similar to those from RACE in §4.2. All agents find evidence that convinces judge models more often than a no-evidence baseline (33%). Learned agents predicting p(i) find the most broadly convincing evidence.

Figure 8: On DREAM, how often each judge selects an agent's answer when given a single agent-chosen sentence. The black line divides learned agents (right) and search agents (left), with human evidence selection in the leftmost column.
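
As an illustration of the model-vs-human comparison described above, the following sketch bins a judge model's target-answer probabilities and plots the human selection rate per bin; the variable names and the randomly generated data are placeholders, not the paper's actual evaluation code.

```python
# Illustrative sketch with placeholder data: compare a judge model's
# probability of the target answer for each sentence with how often
# humans pick that answer when shown the same sentence.
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
# judge_probs[i]: judge's probability of the target answer given sentence i
# human_picks[i]: 1 if a human chose the target answer given sentence i, else 0
judge_probs = np.random.rand(500)                      # placeholder data
human_picks = (np.random.rand(500) < judge_probs) * 1  # placeholder data

bins = np.linspace(0.0, 1.0, 11)
bin_ids = np.digitize(judge_probs, bins) - 1
used = [b for b in range(10) if (bin_ids == b).any()]
human_rate = [human_picks[bin_ids == b].mean() for b in used]
centers = [(bins[b] + bins[b + 1]) / 2 for b in used]

plt.plot(centers, human_rate, marker="o")
plt.xlabel("Judge probability of target answer")
plt.ylabel("Human selection rate of target answer")
plt.show()
```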



Tan

AAAI Conferences

In this paper, we present a novel approach to machine reading comprehension for the MS-MARCO dataset. Unlike the SQuAD dataset that aims to answer a question with exact text spans in a passage, the MS-MARCO dataset defines the task as answering a question from multiple passages, and the words in the answer are not necessarily in the passages. We therefore develop an extraction-then-synthesis framework to synthesize answers from extraction results. Specifically, the answer extraction model is first employed to predict the most important sub-spans from the passage as evidence, and the answer synthesis model takes the evidence as additional features along with the question and passage to further elaborate the final answers. We build the answer extraction model with state-of-the-art neural networks for single passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages. The answer synthesis model is based on sequence-to-sequence neural networks with extracted evidence as features. Experiments show that our extraction-then-synthesis method outperforms state-of-the-art methods.
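
Below is a minimal PyTorch sketch of an extraction-then-synthesis pipeline in the spirit of this framework; the GRU encoders, layer sizes, and the binary evidence-feature embedding are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch: stage 1 extracts an evidence span, stage 2 feeds the
# passage plus an evidence-indicator feature into a seq2seq answer generator.
import torch
import torch.nn as nn

class EvidenceExtractor(nn.Module):
    """Predicts start/end positions of an evidence sub-span in the passage."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.start_head = nn.Linear(2 * hidden, 1)
        self.end_head = nn.Linear(2 * hidden, 1)

    def forward(self, passage_ids):
        enc, _ = self.encoder(self.embed(passage_ids))
        return self.start_head(enc).squeeze(-1), self.end_head(enc).squeeze(-1)

class AnswerSynthesizer(nn.Module):
    """Seq2seq model that rewrites the passage plus evidence markers into an answer."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # extra feature: 1 if the token lies inside the extracted evidence span
        self.evidence_embed = nn.Embedding(2, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, input_ids, evidence_mask, answer_ids):
        enc_in = self.embed(input_ids) + self.evidence_embed(evidence_mask)
        _, state = self.encoder(enc_in)
        dec_out, _ = self.decoder(self.embed(answer_ids), state)
        return self.out(dec_out)

# Toy forward pass with random token ids.
vocab = 1000
extractor, synthesizer = EvidenceExtractor(vocab), AnswerSynthesizer(vocab)
passage = torch.randint(0, vocab, (1, 40))
start, end = extractor(passage)
span = (start.argmax(-1).item(), end.argmax(-1).item())
evidence_mask = torch.zeros_like(passage)
evidence_mask[0, min(span):max(span) + 1] = 1
answer_prefix = torch.randint(0, vocab, (1, 5))
logits = synthesizer(passage, evidence_mask, answer_prefix)
print(logits.shape)  # (1, 5, vocab)
```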