Question Answering


GLTR from MIT-IBM Watson AI Lab and HarvardNLP

#artificialintelligence

Obviously, GLTR is not perfect. Its main limitation is scale: it cannot automatically detect large-scale abuse, only individual cases. Moreover, it requires at least an advanced knowledge of the language to judge whether an uncommon word makes sense at a given position. The approach is also limited in that it assumes a simple sampling scheme.
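To make the underlying signal concrete, here is a minimal sketch of GLTR's core idea, assuming GPT-2 through the Hugging Face transformers library (the released tool wraps its own models behind a web interface, so this is an approximation, not the authors' code): rank each observed token under the model's predicted distribution, since sampled text tends to be dominated by low-rank tokens while human text uses many high-rank ones.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_ranks(text):
    """Rank of each actual token under the model's next-token distribution."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.size(1) - 1):
        next_id = ids[0, pos + 1]
        # Count how many vocabulary items the model scored above the token
        # that actually occurred; rank 1 means "most expected".
        rank = int((logits[0, pos] > logits[0, pos, next_id]).sum()) + 1
        ranks.append((tokenizer.decode([next_id.item()]), rank))
    return ranks

# Mostly low ranks suggest sampled text; frequent high ranks suggest a human.
for token, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
    print(f"{token!r}: rank {rank}")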


What Do You Need to Know to Use a Search Engine? Why We Still Need to Teach Research Skills

AI Magazine

For the vast majority of queries (for example, navigation, simple fact lookup, and others), search engines do extremely well. Their ability to quickly provide answers to queries is a remarkable testament to the power of many of the fundamental methods of AI. They also highlight many of the issues that are common to sophisticated AI question-answering systems. It has become clear that people think about search engines very differently from the way they think about traditional information sources. Rapid, ready-at-hand access, depth of processing, and the way they enable people to offload some ordinary memory tasks suggest that search engines have become more of a cognitive amplifier than a simple repository or front end to the Internet.


IBM Teams Up With Ad Council for AI-Powered Program

#artificialintelligence

Randi Stipes, CMO at IBM Watson Advertising, explained that "Call for Creative" is IBM's commitment "to help the advertising industry reemerge stronger from Covid-19." Through this initiative, the tech company ultimately wants to demonstrate how artificial intelligence can drive positive change when used purposefully, helping the ad industry get back on its feet after the detrimental effects of Covid-19. IBM debuted its award-winning Advertising Accelerator with Watson earlier this year and has given access to the Ad Council, its partner on this project. The Accelerator harnesses AI to "continuously learn and predict the optimal combination of creative elements to help brands deploy more effective digital campaigns based on key signals like consumer reaction, weather and time of day," a statement from the company said. Brands that leveraged Accelerator experienced a 25% increase in performance throughout a campaign, along with a 10% lift in site visits after one week, the statement continued.


Inquire Biology: A Textbook that Answers Questions

AI Magazine

Inquire Biology is a prototype of a new kind of intelligent textbook -- one that answers students' questions, engages their interest, and improves their understanding. Inquire Biology provides unique capabilities via a knowledge representation that captures conceptual knowledge from the textbook and uses inference procedures to answer students' questions. Students ask questions by typing free-form natural language queries or by selecting passages of text. The system then attempts to answer the question and also generates suggested questions related to the query or selection. The questions supported by the system were chosen to be educationally useful, for example: What is the structure of X? How do X and Y compare? How does X relate to Y? In user studies, students found this question-answering capability extremely useful while reading and while solving problems.
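As a loose illustration of template-driven question answering over a concept graph, the sketch below handles the "structure of X" and "compare X and Y" templates over a two-concept toy knowledge base; the real system's knowledge representation and inference procedures are far richer, and every name and fact here is invented.

# Toy concept graph; facts are simplified for illustration only.
KB = {
    "mitochondrion": {
        "has-part": ["outer membrane", "inner membrane", "matrix"],
        "function": ["ATP synthesis"],
    },
    "chloroplast": {
        "has-part": ["outer membrane", "thylakoid", "stroma"],
        "function": ["photosynthesis"],
    },
}

def answer(question: str) -> str:
    q = question.lower().rstrip("?")
    if q.startswith("what is the structure of "):
        concept = q[len("what is the structure of "):]
        parts = KB.get(concept, {}).get("has-part", [])
        return f"{concept} consists of: {', '.join(parts)}" if parts else "unknown concept"
    if q.startswith("compare "):
        a, b = q[len("compare "):].split(" and ")
        shared = set(KB[a]["has-part"]) & set(KB[b]["has-part"])
        return f"shared parts of {a} and {b}: {', '.join(sorted(shared))}"
    return "unsupported question type"

print(answer("What is the structure of mitochondrion?"))
print(answer("Compare mitochondrion and chloroplast?"))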


How artificial intelligence is transforming the future of digital marketing

#artificialintelligence

Digital marketing relies on leveraging insights from the copious amounts of data created every time a customer interacts with a digital asset. In 2020, we anticipate a significant uptick in the mainstreaming of AI and machine learning use cases in digital marketing across several areas. In the past year, online search has seen several AI and machine learning developments, with Google leading the pack with exciting applications in information retrieval. For example, Google's BERT technology can process a word in the context of all the other terms in a sentence, rather than one at a time in order.
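A minimal sketch of that context-sensitivity, assuming bert-base-uncased via the Hugging Face transformers library: the same word receives a different vector depending on the rest of the sentence.

import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def embedding_of(word, sentence):
    """Contextual embedding of `word` within `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    position = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    return hidden[position]

river = embedding_of("bank", "she sat on the bank of the river")
money = embedding_of("bank", "he deposited the cash at the bank")
# Similarity well below 1.0: context, not just the word, shapes the vector.
print(torch.cosine_similarity(river, money, dim=0).item())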


How Artificial Intelligence Is Reshaping the Insurance Industry

#artificialintelligence

In 1997, IBM's Deep Blue became the first computer in the world to beat a reigning world chess champion when it defeated Russian grandmaster Garry Kasparov. More than a decade later, in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!" Intelligent machines had arrived, finally settling the speculation around artificial intelligence (AI) that began at Dartmouth College, USA, in 1956. There was no longer any disputing the transformative value of AI and the role it could play in helping businesses create customised products and engage with their clients more effectively. At bottom, AI systems perform tasks that would take humans decidedly longer to carry out.


Challenge Closed-book Science Exam: A Meta-learning Based Question Answering System

arXiv.org Artificial Intelligence

Prior work on standardized science exams requires support from a large text corpus, such as a targeted science corpus drawn from Wikipedia or SimpleWikipedia. However, retrieving knowledge from a large corpus is time-consuming, and questions embedded in complex semantic representations may interfere with retrieval. Inspired by the dual-process theory in cognitive science, we propose the MetaQA framework, in which system 1 is an intuitive meta-classifier and system 2 is a reasoning module. Specifically, our method is based on meta-learning and the large language model BERT, and can efficiently solve science problems by learning from related example questions without relying on external knowledge bases. We evaluate our method on the AI2 Reasoning Challenge (ARC), and the experimental results show that the meta-classifier yields considerable classification performance on emerging question types. The information provided by the meta-classifier significantly improves the accuracy of the reasoning module from 46.6% to 64.2%, giving it a competitive advantage over retrieval-based QA methods.
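The abstract does not spell out the architecture, so the following is only an illustrative sketch of a system-1-style few-shot question-type classifier, implemented here as nearest-prototype matching over BERT [CLS] embeddings with invented question types; the paper's actual meta-learning procedure differs.

import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def embed(text):
    """[CLS] embedding of a question."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, 0]

# A handful of labelled support questions per (invented) question type.
support = {
    "definition": ["What is photosynthesis?", "What does mitosis mean?"],
    "causal": ["Why does ice float on water?", "What causes ocean tides?"],
}
prototypes = {label: torch.stack([embed(q) for q in qs]).mean(dim=0)
              for label, qs in support.items()}

def classify(question):
    """Assign the question type whose prototype is most similar."""
    q = embed(question)
    return max(prototypes,
               key=lambda label: float(torch.cosine_similarity(q, prototypes[label], dim=0)))

print(classify("Why do leaves change color in autumn?"))  # expected: "causal"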


Event-QA: A Dataset for Event-Centric Question Answering over Knowledge Graphs

arXiv.org Artificial Intelligence

Semantic Question Answering (QA) is the key technology to facilitate intuitive user access to semantic information stored in knowledge graphs. Whereas most of the existing QA systems and datasets focus on entity-centric questions, very little is known about the performance of these systems in the context of events. As new event-centric knowledge graphs emerge, datasets for such questions gain importance. In this paper we present the Event-QA dataset for answering event-centric questions over knowledge graphs. Event-QA contains 1000 semantic queries and the corresponding English, German and Portuguese verbalisations for EventKG, a recently proposed event-centric knowledge graph with over 970 thousand events.
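To illustrate what an event-centric (rather than entity-centric) query over a knowledge graph looks like, here is a toy SPARQL query over an in-memory graph using rdflib; the predicates and entities are invented and do not follow EventKG's actual vocabulary.

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# A single toy event with a participant and a date.
g.add((EX.BattleOfHastings, EX.participant, EX.WilliamTheConqueror))
g.add((EX.BattleOfHastings, EX.year, Literal(1066)))

# Event-centric question: "Which events did William the Conqueror take part
# in, and when?" -- the answer is an event, not an entity attribute.
query = """
SELECT ?event ?year WHERE {
  ?event <http://example.org/participant> <http://example.org/WilliamTheConqueror> ;
         <http://example.org/year> ?year .
}
"""
for row in g.query(query):
    print(row.event, row.year)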


Rapidly Bootstrapping a Question Answering Dataset for COVID-19

arXiv.org Artificial Intelligence

We present CovidQA, the beginnings of a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge. To our knowledge, this is the first publicly available resource of its type, and it is intended as a stopgap measure to guide research until more substantial evaluation resources become available. While this dataset, comprising 124 question-article pairs as of the present version 0.1 release, does not have enough examples for supervised machine learning, we believe it can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19. This paper describes our methodology for constructing the dataset and presents the effectiveness of a number of baselines, including term-based techniques and various transformer-based models. The dataset is available at http://covidqa.ai/
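As an illustration of the term-based end of that baseline spectrum, the sketch below ranks toy articles against a question by TF-IDF cosine similarity using scikit-learn; the paper's baselines and data differ, and the articles here are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "The median incubation period of COVID-19 is estimated at about five days.",
    "Remdesivir shows antiviral activity against SARS-CoV-2 in vitro.",
    "Mask wearing reduces droplet transmission of respiratory viruses.",
]
question = "What is the incubation period of the coronavirus?"

# Fit the vocabulary on articles and question together, then score each
# article by cosine similarity to the question.
vectorizer = TfidfVectorizer().fit(articles + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(articles))[0]
for score, article in sorted(zip(scores, articles), reverse=True):
    print(f"{score:.3f}  {article}")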


HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data

arXiv.org Artificial Intelligence

Existing question answering datasets focus on dealing with homogeneous information, based either on text alone or on KB/table information alone. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information can lead to severe coverage problems. To fill this gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning over heterogeneous information. Each question is aligned with a structured Wikipedia table and multiple free-form corpora linked to the entities in the table. The questions are designed to aggregate both tabular and textual information, i.e., lacking either form renders the question unanswerable. We test three different models: 1) a table-only model, 2) a text-only model, and 3) a hybrid model that combines both table and textual information to build a reasoning path towards the answer. The experimental results show that the first two baselines obtain scores below 20%, while the hybrid model significantly boosts the EM score to over 50%, which demonstrates the necessity of aggregating both structured and unstructured information in HybridQA. However, the hybrid model's score is still far behind human performance, so we believe HybridQA to be an ideal and challenging benchmark for studying question answering over heterogeneous information. The dataset and code are available at https://github.com/wenhuchen/HybridQA
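A toy illustration of the two-hop, table-plus-text reasoning the dataset requires, with invented rows and passages: one hop resolves a table cell, a second hop reads the free-form text linked to that cell, and neither source alone suffices.

# Toy table and entity-linked passages; all contents are invented.
table = [
    {"country": "France", "capital": "Paris"},
    {"country": "Japan", "capital": "Tokyo"},
]
passages = {
    "Paris": "Paris hosted the Summer Olympics in 1900 and 1924.",
    "Tokyo": "Tokyo hosted the Summer Olympics in 1964 and 2021.",
}

def answer(country):
    """Question: 'When did the capital of <country> host the Olympics?'"""
    # Hop 1 (table): resolve the country to its capital cell.
    capital = next(row["capital"] for row in table if row["country"] == country)
    # Hop 2 (text): read the passage linked to that cell. Neither the table
    # nor the text alone can answer the question.
    return passages[capital]

print(answer("Japan"))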