Picking an explainability technique

#artificialintelligence

ML Model Explainability (sometimes referred to as Model Interpretability or ML Model Transparency) is a fundamental pillar of AI Quality. It is impossible to trust a machine learning model without understanding how and why it makes its decisions, and whether these decisions are justified. Peering into ML models is absolutely necessary before deploying them in the wild, where a poorly understood model can not only fail to achieve its objective, but also cause negative business or social impacts, or encounter regulatory trouble. Explainability is also an important backbone to other trustworthy ML pillars like fairness and stability. Yet "explainability" is often a broad and sometimes confusing concept.


Neuro-Symbolic Forward Reasoning

#artificialintelligence

Reasoning is an essential part of human intelligence and thus has been a long-standing goal in artificial intelligence research. With the recent success of deep learning, incorporating reasoning into deep learning systems, i.e., neuro-symbolic AI, has become a major field of interest. We propose the Neuro-Symbolic Forward Reasoner (NSFR), a new approach for reasoning tasks that takes advantage of differentiable forward-chaining using first-order logic. The key idea is to combine differentiable forward-chaining reasoning with object-centric (deep) learning. Differentiable forward-chaining reasoning computes logical entailments smoothly, i.e., it deduces new facts from given facts and rules in a differentiable manner.
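To build intuition for what NSFR makes differentiable, the classical (non-differentiable) version of forward chaining can be sketched as below: rules are applied to a set of known facts until no new facts can be derived. The atoms and rules here are illustrative toy examples, not taken from the paper, and the discrete set operations are exactly what NSFR replaces with smooth, differentiable computations.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly apply rules until a fixed point.
    Each rule is a (body, head) pair: if every atom in the body is known,
    the head is derived as a new fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(atom in derived for atom in body):
                derived.add(head)
                changed = True
    return derived

# Toy knowledge base (illustrative atoms, not from the paper).
rules = [
    (("bird(tweety)",), "can_fly(tweety)"),
    (("can_fly(tweety)",), "can_travel(tweety)"),
]
facts = {"bird(tweety)"}
derived = forward_chain(facts, rules)
```

In the differentiable setting, each fact carries a continuous truth value instead of being simply present or absent, so gradients can flow through the deduction steps.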


The Evolution of Tokenization – Byte Pair Encoding in NLP - KDnuggets

#artificialintelligence

NLP may have been a little late to the AI epiphany, but it is doing wonders, with organisations like Google and OpenAI releasing state-of-the-art (SOTA) language models like BERT and GPT-2/3 respectively. GitHub Copilot and OpenAI Codex are among a few very popular applications that are in the news. As someone who has very limited exposure to NLP, I decided to take up NLP as an area of research, and the next few blogs/videos will be me sharing what I learn after dissecting some important components of NLP. Top deep learning models like BERT, GPT-2, or GPT-3 all share the same components but with different architectures that distinguish one model from another. In this newsletter (and notebook), we are going to focus on the basics of the first component of an NLP pipeline, which is tokenization.
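The Byte Pair Encoding idea named in the title can be sketched in a few lines: repeatedly find the most frequent pair of adjacent symbols in the corpus and merge it into a new symbol. The toy corpus below is illustrative, and this is a minimal sketch of the training step only, not the tokenizer used by BERT or GPT-2 in practice.

```python
from collections import Counter

def byte_pair_encoding(words, num_merges):
    """Learn BPE merge rules from a toy corpus."""
    # Represent each word as a tuple of characters plus an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Replace every occurrence of the best pair with the merged symbol.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = byte_pair_encoding(["low", "low", "lower", "newest", "newest"], 3)
```

After three merges on this corpus, frequent character sequences such as "lo" and "low" have become single vocabulary symbols, which is exactly how BPE keeps common words whole while still being able to split rare ones.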


The AI Project Cycle

#artificialintelligence

The AI Project Cycle is the order of steps an organization must take to harness value (monetary or otherwise) from an AI project and maximize its ROI (Return on Investment). You might have seen AI Project Cycle diagrams starting from 'Problem Scoping' and ignoring 'Problem Identification', but in this article we will discuss the version that includes 'Problem Identification', which is a more accurate representation. In today's article, we will discuss the various stages of the AI Project Cycle, starting with Problem Identification, followed by Problem Scoping, Data Acquisition, Data Exploration, Data Modelling, Evaluation and finally Deployment. You may think that the tip of the iceberg is the whole problem, but in most cases it is not. Many problems are not obvious: a problem may look small, but digging deeper into it, we will realize that there is far more to it than what first appears.


How AI Can Lead to Better Business Management

#artificialintelligence

AI for business is an incredibly helpful tool for enterprises when used correctly. Just take a look at some numbers recently published in a Forbes Magazine article: 38% of the 235 enterprises the NBRI looked at are already using AI for a variety of tasks; and more importantly, 62% of these enterprises expect to be using AI by 2018. But here's the rub: AI is a massively broad catch-all term. Over the last few years, people have labelled all sorts of machine coding techniques as 'AI'; in fact, saying that your business uses AI is kind of like saying your garden has plants. In other words, AI is an umbrella for a whole host of technologies.


What is Hybrid Natural Language Understanding?

#artificialintelligence

Language data is found in everything from emails to videos to business documents and beyond. However, as pervasive as language data is to the enterprise, organizations struggle to maximize its value. Not only is there an incredible amount of language data available to and contained within organizations, but an exponentially increasing volume of it, as well. There is no ignoring the importance of language to the enterprise ecosystem. Organizations are listening: 42% have already adopted natural language processing (NLP) systems, while 26% plan to within the next year, according to IBM's Global AI Adoption Index 2021.


Predictive Maintenance: Machine Learning vs Rule Based Algorithms

#artificialintelligence

While basic predictive maintenance concepts are discussed in various articles, there is actually little to find when it comes to selecting the best approach for predicting an error. In this article we get you started with a short introduction to predictive maintenance and then focus on which way to go when choosing the best predictive algorithm for you: is it better to go with a machine learning model, or should you get started with a rule-based algorithm first? To understand where we are coming from and what it is all about, we need some context. Predictive maintenance is basically as old as it gets and, in its foundation, nothing new. If, in the past, a mechanic servicing a machine found unusual visual or acoustic behaviour in a certain part, the machine might be shut down before breaking and the part exchanged. That is already predictive maintenance.
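The two approaches the article compares can be illustrated side by side. The sketch below is a minimal, assumed example: the sensor names and thresholds are made up for illustration, a hand-set threshold stands in for the rule-based approach, and a simple statistical outlier test stands in for the data-driven direction that a full machine learning model would take much further.

```python
from statistics import mean, stdev

def rule_based_alert(temperature, vibration, temp_limit=90.0, vib_limit=4.0):
    """Rule-based approach: flag the part when any reading crosses a
    hand-set threshold (limits here are illustrative, not from any spec)."""
    return temperature > temp_limit or vibration > vib_limit

def statistical_alert(history, latest, n_sigmas=3.0):
    """A first step toward the learned approach: flag readings more than
    n_sigmas standard deviations from the mean of past measurements,
    so the 'normal' range is estimated from data rather than hand-set."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > n_sigmas * sigma
```

The rule-based version is transparent and easy to start with; the statistical version adapts to each machine's own history, which is the property a full ML model generalizes across many sensors and failure modes.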


Operationalizing machine learning in processes

#artificialintelligence

As organizations look to modernize and optimize processes, machine learning (ML) is an increasingly powerful tool to drive automation. Unlike basic, rule-based automation--which is typically used for standardized, predictable processes--ML can handle more complex processes and learn over time, leading to greater improvements in accuracy and efficiency. But a lot of companies are stuck in the pilot stage; they may have developed a few discrete use cases, but they struggle to apply ML more broadly or take advantage of its most advanced forms. A recent McKinsey Global Survey, for example, found that only about 15 percent of respondents have successfully scaled automation across multiple parts of the business. And only 36 percent of respondents said that ML algorithms had been deployed beyond the pilot stage.


Every time I fire a conversational designer, the performance of the dialog system goes down

arXiv.org Artificial Intelligence

Incorporating explicit domain knowledge into neural-based task-oriented dialogue systems is an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of explicit domain knowledge from conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN), where explicit knowledge is coded in semi-logical rules. By using CLINN, we evaluated semi-logical rules produced by a team of differently skilled conversational designers. We experimented with the Restaurant topic of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples in conversational systems. In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system.


The FP Growth algorithm

#artificialintelligence

In this article, you will discover the FP Growth algorithm. It is one of the state-of-the-art algorithms for frequent itemset mining, the core step of Association Rule Mining, and for basket analysis. Let's start with an introduction to Frequent Itemset Mining and Basket Analysis. Basket Analysis is the study of shopping baskets: which products end up together in the same transaction. This can be online or offline shopping, as long as you can obtain data that tracks the products for each transaction.
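What "frequent itemset mining" computes can be shown with a deliberately naive sketch on toy basket data (the products below are made up for illustration). This brute-force version enumerates every item combination and keeps the ones meeting a minimum support count; FP Growth produces the same frequent itemsets far more efficiently by compressing the transactions into an FP-tree instead of enumerating combinations, but the naive version makes the target output easy to see.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent itemset mining: count every item combination
    across all baskets and keep those appearing at least min_support times."""
    counts = Counter()
    for basket in transactions:
        items = sorted(set(basket))  # sort so equal itemsets count together
        for size in range(1, len(items) + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {itemset: c for itemset, c in counts.items() if c >= min_support}

# Toy baskets (illustrative products).
baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
]
frequent = frequent_itemsets(baskets, min_support=2)
```

Here {bread, milk} and {bread, butter} each appear in two baskets and survive the support threshold, while {butter, milk} appears only once and is pruned; association rules such as "bread → milk" are then derived from these frequent itemsets.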