Rule-Based Reasoning


The Evolution of Tokenization – Byte Pair Encoding in NLP - KDnuggets

#artificialintelligence

NLP may have been a little late to the AI epiphany, but it is doing wonders now, with organisations like Google and OpenAI releasing state-of-the-art (SOTA) language models like BERT and GPT-2/3 respectively. GitHub Copilot and OpenAI Codex are among a few very popular applications that are in the news. As someone who has had very limited exposure to NLP, I decided to take up NLP as an area of research, and the next few blogs/videos will be me sharing what I learn after dissecting some important components of NLP. Top deep learning models like BERT, GPT-2, and GPT-3 all share the same components, but with different architectures that distinguish one model from another. In this newsletter (and notebook), we are going to focus on the basics of the first component of an NLP pipeline: tokenization.
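
The core of byte pair encoding is simple enough to sketch: start from characters and repeatedly merge the most frequent adjacent pair of symbols. Below is a minimal, illustrative Python version of the merge-learning loop; the toy vocabulary and the number of merges are assumptions for demonstration, and production tokenizers add byte-level fallback, special tokens, and far more efficient counting.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs across all words, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace each whole-symbol occurrence of the pair with its merged symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Words are pre-split into characters, with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}

for step in range(5):
    pairs = get_pair_counts(vocab)
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print(f"merge {step + 1}: {best}")
```

Running the loop, frequent fragments like "es" and "est" emerge as single tokens after just a few merges, which is exactly how BPE builds a subword vocabulary between the character and word levels.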


The AI Project Cycle

#artificialintelligence

The AI Project Cycle is the ordered sequence of steps an organization must take to harness value (monetary or otherwise) from an AI project and improve its ROI (Return on Investment). You might have seen AI Project Cycle images starting from 'Problem Scoping' and skipping 'Problem Identification', but in this article we will discuss the version that includes 'Problem Identification', which is a more accurate representation. In today's article, we will discuss the various stages of the AI Project Cycle, starting with Problem Identification, followed by Problem Scoping, Data Acquisition, Data Exploration, Data Modelling, Evaluation, and finally Deployment. You may think that the tip of the iceberg is the whole problem, but in most cases it is not. Often the problem is not obvious: it may look small, but digging deep into it, we realize that there is a lot more to it, and that what we saw at first was only the beginning.


How AI Can Lead to Better Business Management

#artificialintelligence

AI for business is an incredibly helpful tool for enterprises when used correctly. Just take a look at some numbers recently published in a Forbes Magazine article: 38% of the 235 enterprises the NBRI looked at are already using AI for a variety of tasks, and more importantly, 62% of these enterprises expect to be using AI by 2018. But here's the rub: AI is a massively broad catch-all term. Over the last few years, people have termed all sorts of machine coding techniques as 'AI'; in fact, saying that your business uses AI is kind of like saying your garden has plants. In other words, AI is an umbrella for a whole host of technologies.


What is Hybrid Natural Language Understanding?

#artificialintelligence

There is no ignoring the importance of language to the enterprise ecosystem. We find it in everything from emails to videos to business documents and beyond. However, as pervasive as language data is to the enterprise, organizations struggle to maximize its value. Not only is there an incredible amount of language data available to and contained within organizations, but an exponentially increasing volume of it as well. Organizations are listening: 42% have already adopted natural language processing (NLP) systems, while 26% plan to within the next year, according to IBM's Global AI Adoption Index 2021.


The 4 Trends That Prevail on the Gartner Hype Cycle for AI, 2021

#artificialintelligence

For the majority of organizations, continuously delivering and integrating AI solutions within enterprise applications and business workflows is a complex afterthought. On average, it takes about eight months to get an AI-based model integrated within a business workflow and for it to deliver tangible value. However, to reduce AI project failures, organizations must efficiently operationalize their AI architectures. Gartner expects that by 2025, 70% of organizations will have operationalized AI architectures due to the rapid maturity of AI orchestration initiatives. Organizations should consider model operationalization (ModelOps) for operationalizing AI solutions.


Predictive Maintenance: Machine Learning vs Rule Based Algorithms

#artificialintelligence

While basic predictive maintenance concepts are discussed in various articles, there is actually little to find when it comes to selecting the best approach for predicting an error. In this article we get you started with a short introduction to predictive maintenance and then focus on which way to go when choosing the best predictive algorithm for you: is it better to go with a machine learning model, or should you get started with a rule-based algorithm first? To understand where we are coming from and what it is all about, we need some context: predictive maintenance is basically as old as it gets and, at its foundation, nothing new. If, in the past, a mechanic servicing a machine found unusual visual or acoustic behaviour in a certain part, the machine might be shut down before breaking and the part exchanged. That is already predictive maintenance.
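
To make the choice concrete, here is a hedged sketch of both options applied to a single vibration signal. The alarm threshold, the simulated readings, and the IsolationForest settings are illustrative assumptions, not figures from the article: the rule encodes the mechanic's experience as a fixed limit, while the ML model learns what "healthy" looks like from data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=1.0, scale=0.1, size=(500, 1))  # healthy vibration readings
live = rng.normal(loc=1.6, scale=0.1, size=(10, 1))     # new, drifting readings

# Rule-based approach: a fixed threshold encoding domain experience.
THRESHOLD = 1.5  # assumed alarm level (e.g. mm/s RMS)
rule_alarms = (live > THRESHOLD).ravel()

# Machine learning approach: an anomaly detector fitted on healthy data only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
ml_alarms = model.predict(live) == -1  # -1 marks an anomaly

print("rule-based alarms:", rule_alarms.sum(), "/", len(live))
print("ML alarms:        ", ml_alarms.sum(), "/", len(live))
```

The rule is transparent and cheap to deploy, while the detector can flag deviations that no one thought to write a threshold for; that trade-off is exactly the decision the article walks through.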


Operationalizing machine learning in processes

#artificialintelligence

As organizations look to modernize and optimize processes, machine learning (ML) is an increasingly powerful tool to drive automation. Unlike basic, rule-based automation--which is typically used for standardized, predictable processes--ML can handle more complex processes and learn over time, leading to greater improvements in accuracy and efficiency. But a lot of companies are stuck in the pilot stage; they may have developed a few discrete use cases, but they struggle to apply ML more broadly or take advantage of its most advanced forms. A recent McKinsey Global Survey, for example, found that only about 15 percent of respondents have successfully scaled automation across multiple parts of the business. And only 36 percent of respondents said that ML algorithms had been deployed beyond the pilot stage.


Every time I fire a conversational designer, the performance of the dialog system goes down

arXiv.org Artificial Intelligence

Incorporating explicit domain knowledge into neural-based task-oriented dialogue systems is an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of conversational designers' explicit domain knowledge affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN), where explicit knowledge is coded in semi-logical rules. Using CLINN, we evaluated semi-logical rules produced by a team of differently skilled conversational designers. We experimented with the Restaurant topic of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples in conversational systems. In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system.
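
The abstract does not spell out the rule syntax, so the following is a purely hypothetical illustration of the underlying idea: a designer-written rule that constrains the system's next action before any neural policy is consulted. The `designer_rule` function, the state keys, and the action format are all invented for this sketch.

```python
def designer_rule(state):
    """Hypothetical designer rule: IF the user wants a restaurant AND no price
    range is known THEN ask for the price range; otherwise defer to the model."""
    if state.get("intent") == "find_restaurant" and "pricerange" not in state:
        return {"action": "request", "slot": "pricerange"}
    return None  # no rule fires; the neural dialogue policy decides

state = {"intent": "find_restaurant", "food": "italian"}
print(designer_rule(state))  # {'action': 'request', 'slot': 'pricerange'}
```

The paper's point is that even a handful of such hand-written constraints can substitute for many annotated dialogues when training data is scarce.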


The FP Growth algorithm

#artificialintelligence

In this article, you will discover the FP Growth algorithm. It is one of the state-of-the-art algorithms for frequent itemset mining (the core step of association rule mining) and basket analysis. Let's start with an introduction to frequent itemset mining and basket analysis. Basket analysis is the study of shopping baskets: which products end up in a transaction together. This applies to online or offline shopping alike, as long as you can obtain data that tracks the products in each transaction.
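
As a quick taste of what the algorithm produces, here is a minimal sketch using the FP-Growth implementation from the mlxtend library on toy basket data; the transactions and the `min_support` value are made up for illustration.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Toy transaction data: one list of products per shopping basket.
transactions = [
    ["bread", "milk"],
    ["bread", "diapers", "beer"],
    ["milk", "diapers", "beer"],
    ["bread", "milk", "diapers"],
    ["bread", "milk", "beer"],
]

# One-hot encode the baskets into a boolean DataFrame (items as columns).
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# Find all itemsets appearing in at least 40% of the baskets.
print(fpgrowth(onehot, min_support=0.4, use_colnames=True))
```

The output lists each frequent itemset with its support, which downstream association-rule steps can turn into "customers who buy X also buy Y" rules.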


RuleBert: Teaching Soft Rules to Pre-trained Language Models

arXiv.org Artificial Intelligence

While pre-trained language models (PLMs) are the go-to solution for many natural language processing problems, they are still very limited in their ability to capture and use common-sense knowledge. In fact, even when information is available in the form of approximate (soft) logical rules, it is not clear how to transfer it to a PLM in order to improve its performance on deductive reasoning tasks. Here, we aim to bridge this gap by teaching PLMs how to reason with soft Horn rules. We introduce a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis. We release the first dataset for this task, and we propose a revised loss function that enables the PLM to learn to predict precise probabilities. Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen at training time. Moreover, we demonstrate that the logical notions expressed by the rules transfer to the fine-tuned model, yielding state-of-the-art results on external datasets.
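
To make the task concrete, here is a hypothetical illustration of what "facts plus a soft rule, predict a probability" might look like when textualized for a PLM. The rule wording, the confidence value, and the input layout are assumptions for this sketch, not the paper's actual dataset encoding.

```python
# A soft Horn rule: the conclusion holds with some confidence, not certainty.
rule = {"text": "If A is the spouse of B, then B is the spouse of A.",
        "confidence": 0.9}
facts = ["Alice is the spouse of Bob."]
hypothesis = "Bob is the spouse of Alice."

# Textualize facts and rule into one input sequence for the classifier.
context = " ".join(facts) + " " + rule["text"]
print(f"premise:    {context}")
print(f"hypothesis: {hypothesis}")
print(f"target probability: {rule['confidence']}")  # the soft rule's confidence
```

Training the model to output the rule's confidence, rather than a hard true/false label, is what lets it handle the "soft" part of soft Horn rules.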