New techniques in the field of Visual Question Answering, part 2 (Machine Learning)


Abstract: We present a new pre-training method, the Multimodal Inverse Cloze Task, for Knowledge-based Visual Question Answering about named Entities (KVQAE). KVQAE is a recently introduced task that consists of answering questions about named entities grounded in a visual context using a Knowledge Base. The interaction between the modalities is therefore paramount to retrieving information and must be captured with complex fusion models. As these models require a lot of training data, we design this pre-training task based on existing work in textual Question Answering: a sentence is treated as a pseudo-question and its context as a pseudo-relevant passage, and the task is extended by considering images near texts in multimodal documents.
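As a rough illustration of the text-only part of this idea (the function name and document are invented for this sketch, and the paper's actual method additionally pairs images with the texts), an Inverse Cloze Task pairing can be sketched like this:

```python
import random

def make_ict_pairs(document_sentences, seed=0):
    """Toy Inverse Cloze Task pairing: each sentence becomes a
    pseudo-question and the remaining sentences of the same document
    become its pseudo-relevant passage."""
    rng = random.Random(seed)
    pairs = []
    for i, sentence in enumerate(document_sentences):
        context = [s for j, s in enumerate(document_sentences) if j != i]
        pairs.append({"pseudo_question": sentence,
                      "pseudo_passage": " ".join(context)})
    rng.shuffle(pairs)  # mix pairs so training batches are not ordered
    return pairs

doc = ["Napoleon was born in Corsica.",
       "He became Emperor of the French.",
       "He died on Saint Helena."]
pairs = make_ict_pairs(doc)
```

Each pair can then train a retriever to match the pseudo-question against its pseudo-passage without any human-labeled questions.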

How to choose a Sentence Transformer from Hugging Face


As a quick recap, Domain largely describes the high-level notion of what a dataset is about. Beyond Domain, there are many Tasks used to produce vector embeddings. Unlike language models, most of which share the training task of predicting a masked-out token, embedding models are trained in a much broader set of ways. For example, Duplicate Question Detection might perform better with a model trained for that task than with one trained for Question Answering. A good rule of thumb is to look for models that have been trained within the same domain as your use case.
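Whichever model you pick, the embeddings it produces are typically compared with cosine similarity. As a minimal sketch (the toy 3-dimensional vectors below stand in for real sentence embeddings, which have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors: a query, a near-duplicate question, and an unrelated one.
query = [0.1, 0.9, 0.2]
duplicate = [0.1, 0.8, 0.3]
unrelated = [0.9, 0.1, 0.0]

# A model suited to the task should place the duplicate closer.
assert cosine_similarity(query, duplicate) > cosine_similarity(query, unrelated)
```

With the `sentence-transformers` library, the same comparison is done on the vectors returned by a model's `encode` method.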

Using Natural Language Question Answering (NLQA) Within Your Company


Searching and hunting for information is becoming a thing of the past. Until recently, employees across industries had to scroll through search engines, wait on co-worker responses, and scan company memos and files just to find the answer to a simple question. Now, specific machine learning and artificial intelligence techniques allow workers to get answers from their information proactively with the help of natural language question answering (NLQA). NLQA interprets spoken or written language to provide on-the-spot answers. NLQA builds on natural language processing (NLP) and natural language understanding (NLU), which can extract the tone and intent behind all sorts of text.

IBM Applied AI Professional Certificate


Kickstart your learning of Python with this beginner-friendly, self-paced course taught by an expert. Python is one of the most popular languages in the programming and data science world, and demand for individuals who can apply Python has never been higher. This introduction to Python course will take you from zero to programming in Python in a matter of hours--no prior programming experience necessary! You will learn about Python basics and the different data types. You will familiarize yourself with Python data structures like lists and tuples, as well as logic concepts like conditions and branching.
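To give a flavor of the concepts the course covers, here is a small sketch of lists, tuples, and branching (the example values are our own, not from the course material):

```python
# Lists are mutable sequences; tuples are immutable.
languages = ["Python", "Java", "Go"]  # a list: can grow and change
point = (3, 4)                        # a tuple: fixed once created

languages.append("Rust")              # works, because lists are mutable

# Conditions and branching: classify a value with if/elif/else.
def describe(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

print(describe(-5), describe(0), describe(7))  # negative zero positive
```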

Running IBM Watson NLP in Minikube


IBM Watson NLP (Natural Language Understanding) and Watson Speech containers can be run locally, on premises, or on Kubernetes and OpenShift clusters. Via REST and gRPC APIs, AI can easily be embedded in applications. This post describes how to run Watson NLP locally in Minikube. To set some context, check out the landing page IBM Watson NLP Library for Embed. The Watson NLP containers can be run on different container platforms; they provide REST and gRPC interfaces, can be extended with custom models, and can easily be embedded in solutions.
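As a sketch of what deploying such a container to Minikube might look like, here is a minimal, illustrative Kubernetes Deployment. The image path and port numbers below are placeholders, not documented Watson NLP values; substitute the registry, image, and ports from your IBM entitlement.

```yaml
# Illustrative only: image path and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: watson-nlp-runtime
spec:
  replicas: 1
  selector:
    matchLabels:
      app: watson-nlp-runtime
  template:
    metadata:
      labels:
        app: watson-nlp-runtime
    spec:
      containers:
        - name: watson-nlp-runtime
          image: <your-registry>/watson-nlp-runtime:latest  # placeholder
          ports:
            - containerPort: 8080  # REST (assumed port)
            - containerPort: 8085  # gRPC (assumed port)
```

Applied with `kubectl apply -f`, a Deployment like this runs the container in Minikube; a Service would then expose the REST and gRPC ports to applications.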

Week 1-- FlashCards


Hi, we are a two-student group trying to create an ML model for our AIN311 course. This is the first of many blog posts we will publish about this project; stay tuned for a new post every Sunday. As everybody knows, AI changes our lives for the better day by day. As two AI Engineering students, we thought we could kill two birds with one stone and create a project for our course that could help us study better while saving us time.

WEEK #1 Question Assistant Barlas


Today is a great day because our first blog post is being published. In this post we introduce our project. Let's start with its name: Question Assistant Barlas. We thought, why not give a human name to our question-generating machine, like Alan Turing's Christopher?

A Question-Answering Bot Powered by Wikipedia, Coupled to GPT-3


If you follow me, you've seen I'm fascinated with GPT-3, both as a tool for productivity and as a tool for information retrieval through natural questions. You've also seen that GPT-3 often provides correct answers to a question, but sometimes it does not, and it can even be misleading or confusing because its answers sound confident despite being wrong. In some cases, but not always, when it cannot find a reasonable completion (i.e. it "doesn't know" the answer) it tells you so, or it simply doesn't provide any answer. I showed you that factual accuracy can be improved by fine-tuning the model or, more easily, by few-shot learning. But it isn't easy to decide what information to use in these procedures, let alone how to apply it.
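The few-shot route boils down to assembling a prompt that grounds the model in retrieved text. A minimal sketch of such prompt construction (the function name, example data, and format are my own illustration, not the article's code, and no API call is made here):

```python
def build_fewshot_prompt(context, question, examples):
    """Assemble a few-shot QA prompt that grounds the answer in a
    retrieved passage, e.g. one fetched from Wikipedia."""
    lines = []
    for ex in examples:
        lines.append(f"Context: {ex['context']}")
        lines.append(f"Q: {ex['question']}")
        lines.append(f"A: {ex['answer']}")
        lines.append("")  # blank line between examples
    lines.append(f"Context: {context}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # the model completes from here
    return "\n".join(lines)

examples = [{"context": "The Eiffel Tower is in Paris.",
             "question": "Where is the Eiffel Tower?",
             "answer": "Paris"}]
prompt = build_fewshot_prompt(
    "Mount Fuji is the highest mountain in Japan.",
    "What is the highest mountain in Japan?",
    examples)
```

The resulting string would be sent as the prompt of a completion request, so the model answers from the supplied context rather than from memory alone.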

Question Answering Over Biological Knowledge Graph via Amazon Alexa


Structured and unstructured data and facts about drugs, genes, proteins, viruses, and their mechanisms are spread across a huge number of scientific articles. These articles are a large-scale knowledge source and can have a huge impact on disseminating knowledge about the mechanisms of certain biological processes. A knowledge graph (KG) can be constructed by integrating such facts and data, and can then be used for data integration, exploration, and federated queries. However, exploring and querying large-scale KGs is tedious for certain groups of users due to a lack of knowledge about the underlying data assets or semantic technologies. A question-answering (QA) system allows natural language questions to be answered automatically over a KG using the triples it contains.
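To make the idea of answering over triples concrete, here is a toy sketch of a pattern-based question answerer over an in-memory triple list; the triples and the single question template are invented for illustration and bear no relation to the system's actual KG or grammar:

```python
# A tiny in-memory "knowledge graph" of (subject, predicate, object) triples.
triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "inhibits", "COX-1"),
    ("ibuprofen", "treats", "inflammation"),
]

def answer(question):
    """Map questions of the form 'What does X treat?' onto
    (X, treats, ?) lookups; return [] for unrecognized questions."""
    prefix, suffix = "what does ", " treat?"
    q = question.lower()
    if q.startswith(prefix) and q.endswith(suffix):
        subject = q[len(prefix):-len(suffix)]
        return [o for s, p, o in triples if s == subject and p == "treats"]
    return []

print(answer("What does aspirin treat?"))  # ['headache']
```

A real system replaces the hard-coded template with natural language understanding and the Python list with SPARQL queries against the KG, but the core step, translating a question into a triple pattern, is the same.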