
Jeopardy champion's 23-day winning streak ends after losing by $1

FOX News

Mattea Roach, a 23-year-old tutor from Toronto, Canada, had won $560,983 over the course of her winning streak on the game show "Jeopardy!" Heading into the final round of Friday's match, Roach was leading with $19,200 and wagered $3,001 on the Final Jeopardy question.

Artificial Intelligence Has Revolutionized Our Lives Over the Past Decade


Artificial Intelligence refers to the ability of a machine or computer to mimic human capabilities such as recognizing objects, making decisions, and solving problems. The past decade has witnessed a great rise in Artificial Intelligence, and the technology has made an impact in almost every field. The two major reasons for the rapid growth of AI in this decade are data and compute. In 2011, IBM Watson, a natural language question-answering computer, competed on Jeopardy! and defeated two former champions. Watson represented a significant leap in a machine's ability to understand context in human language.

More Play and Less Prep: Flamel.AI Automates Role-Playing Games with IBM Watson


Alex Migitko started playing tabletop role-playing games (RPGs) 15 years ago. But as life got more demanding, he couldn't commit to the time needed for preparation and play, both as a game facilitator and player. Though passionate about gaming, he ultimately stopped. These "aging out" stories are all too common. Players fall in love with gaming because it provides such depth and breadth of creativity and escape.

End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering

We present an end-to-end differentiable training method for retrieval-augmented open-domain question answering systems that combine information from multiple retrieved documents when generating answers. We model retrieval decisions as latent variables over sets of relevant documents. Since marginalizing over sets of retrieved documents is computationally hard, we approximate this using an expectation-maximization algorithm. We iteratively estimate the value of our latent variable (the set of relevant documents for a given question) and then use this estimate to update the retriever and reader parameters. We hypothesize that such end-to-end training allows training signals to flow to the reader and then to the retriever better than stage-wise training. This results in a retriever that is able to select more relevant documents for a question and a reader that is trained on more accurate documents to generate an answer. Experiments on three benchmark datasets demonstrate that our proposed method outperforms all existing approaches of comparable size by 2-3% absolute exact match points, achieving new state-of-the-art results. Our results also demonstrate the feasibility of learning to retrieve to improve answer generation without explicit supervision of retrieval decisions.
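The EM loop described in the abstract can be illustrated with a toy sketch: treat "which document is relevant" as a latent variable, estimate a posterior over documents (E-step), then nudge the retriever toward that estimate (M-step). All names, the scalar scores, and the update rule below are hypothetical simplifications standing in for the paper's learned neural retriever and reader.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def em_step(retriever_scores, reader_likelihoods, lr=0.5):
    # E-step: posterior over documents, combining the retriever's prior
    # with how well the reader can produce the answer from each document.
    prior = softmax(retriever_scores)
    joint = [p * l for p, l in zip(prior, reader_likelihoods)]
    z = sum(joint)
    posterior = [j / z for j in joint]
    # M-step: move retriever scores toward the posterior (a toy update
    # standing in for a real gradient step on retriever parameters).
    new_scores = [s + lr * (q - p)
                  for s, q, p in zip(retriever_scores, posterior, prior)]
    return new_scores, posterior

scores = [0.0, 0.0, 0.0]       # retriever starts indifferent
reader_fit = [0.1, 0.7, 0.2]   # reader answers best from document 1
for _ in range(20):
    scores, posterior = em_step(scores, reader_fit)
# After iterating, the retriever prefers the document the reader found useful.
```

The point of the sketch is the feedback loop: the reader's success signal flows back into the retriever's scores without any explicit "this document is relevant" labels, which is the same intuition as the paper's end-to-end training.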

NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned

We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing large, redundant retrieval corpora and storing the parameters of large learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.

How Artificial Intelligence Is Reshaping the Insurance Industry


In 1997, IBM's Deep Blue became the first computer in the world to beat a chess champion when it defeated Russian grandmaster Garry Kasparov. More than a decade later, in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!" Intelligent machines had arrived, finally putting to rest the chatter around artificial intelligence (AI) that began at Dartmouth College in 1956. There was no longer any disputing the transformative value of AI and the role it could play in helping businesses create customised products and engage with their clients more effectively. At bottom, AI systems perform tasks that would take humans decidedly longer to carry out.

Seeing how computers 'think' helps humans stump machines and reveals AI weaknesses


Researchers from the University of Maryland have figured out how to reliably create such questions through a human-computer collaboration, developing a dataset of more than 1,200 questions that, while easy for people to answer, stump the best computer answering systems today. The system that learns to master these questions will have a better understanding of language than any system currently in existence. The work is described in an article published in 2019 in the journal Transactions of the Association for Computational Linguistics. "Most question-answering computer systems don't explain why they answer the way they do, but our work helps us see what computers actually understand," said Jordan Boyd-Graber, associate professor of computer science at UMD and senior author of the paper. "In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do."

Learning Representations and Agents for Information Retrieval

A goal shared by artificial intelligence and information retrieval is to create an oracle, that is, a machine that can answer our questions, no matter how difficult they are. A more limited, but still instrumental, version of this oracle is a question-answering system, in which an open-ended question is given to the machine, and an answer is produced based on the knowledge it has access to. Such systems already exist and are increasingly capable of answering complicated questions. This progress can be partially attributed to the recent success of machine learning and to the efficient methods for storing and retrieving information, most notably through web search engines. One can imagine that this general-purpose question-answering system can be built as a billion-parameter neural network trained end-to-end with a large number of pairs of questions and answers. We argue, however, that although this approach has been very successful for tasks such as machine translation, storing the world's knowledge as parameters of a learning machine can be very hard. A more efficient way is to train an artificial agent on how to use an external retrieval system to collect relevant information. This agent can leverage the effort that has been put into designing and running efficient storage and retrieval systems by learning how to best utilize them to accomplish a task. ...
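The retrieve-then-read pattern the abstract argues for can be sketched minimally: an agent consults an external store rather than memorizing facts in its parameters. The corpus, overlap scorer, and "reader" below are hypothetical stand-ins; real systems use web-scale search engines and learned neural readers.

```python
# A tiny external knowledge store the agent can query (hypothetical content).
CORPUS = [
    "IBM Watson won Jeopardy in 2011 against two former champions.",
    "Deep Blue defeated Garry Kasparov at chess in 1997.",
    "Quiz bowl is an academic competition played in schools.",
]

def tokens(text):
    # Lowercase and strip trivial punctuation so words compare cleanly.
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve(question, corpus):
    # Score each document by word overlap with the question; return the best.
    q = tokens(question)
    return max(corpus, key=lambda doc: len(q & tokens(doc)))

def answer(question, corpus):
    # Toy "reader": return the retrieved passage as supporting evidence.
    return retrieve(question, corpus)

print(answer("Who won Jeopardy in 2011?", CORPUS))
# → IBM Watson won Jeopardy in 2011 against two former champions.
```

The design choice mirrors the abstract's argument: facts live in the external corpus, so updating the system's knowledge means updating documents, not retraining a billion-parameter model.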

The Secret Farm Team for Jeopardy! Players


As she met her fellow captains and competitors, all multiweek winners on the game show (including me), she was surprised by how familiar everyone seemed to be with each other. Back in 2014, when she made her first appearance, "I didn't know a single person who had ever been on the show," Julia told me. But this time, she marveled, "everyone else seems to have known each other, either personally or by reputation, for decades." They shared years of experience on Jeopardy's secret farm team: quiz bowl. Of the 18 "All-Stars" in the tourney, all but Julia and two others had played the academic competition known as quiz bowl in high school or college.

Watson – Time to Prune the ML Tree?


Summary: IBM's Watson QAM (Question Answering Machine), famous for its 2011 Jeopardy! win, was supposed to bring huge payoffs in healthcare. Instead, both IBM and its Watson Healthcare customers are rapidly paring back these projects, which have largely failed to pay off. Watson was the first big out-of-the-box commercial application of ML/AI. I'm sure I'm leaving out many other notable firsts that IBM has scored, but since it's Watson we want to talk about, we'll stop there. The remarkable thing about Watson is that in 2011 the other skills we think of as AI (image and video processing, facial recognition, text and speech processing, game play beyond chess, autonomous vehicles) were all so primitive that they were not yet close to commercial acceptance and wouldn't be for several more years.