What is inductive bias?

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. In the realm of machine learning and artificial intelligence, there are many biases, such as selection bias, overgeneralization bias, and sampling bias.


Induction, Inductive Biases, and Infusing Knowledge into Learned Representations

#artificialintelligence

Note: This post is a modified excerpt from the introduction to my PhD thesis. Our goal in building machine learning systems is, with rare exceptions, to create algorithms whose utility extends beyond the dataset on which they are trained. In other words, we desire intelligent systems that are capable of generalizing to future data. The process of leveraging observations to draw inferences about the unobserved is the principle of induction. Terminological note: in a non-technical setting, the term inductive – denoting the inference of general laws from particular instances – is typically contrasted with the adjective deductive, which denotes the inference of particular instances from general laws. This broad definition of induction may be used in machine learning to describe, for example, the model fitting process as the inductive step and the deployment on new data as the deductive step. By the same token, some AI methods, such as automated theorem provers, are described as deductive.
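In code, the fit/predict split of most learning libraries mirrors this terminology. The following minimal Python sketch (using scikit-learn and a synthetic dataset, both chosen here purely for illustration and not drawn from the thesis) labels the two steps explicitly.

```python
# Minimal sketch: induction vs. deduction in a standard ML workflow.
# Assumptions: scikit-learn is available; the dataset and model choice
# are illustrative only, not taken from the thesis excerpt above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# Inductive step: infer general parameters (a "law") from particular observations.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deductive step: apply the learned general rule to particular new instances.
predictions = model.predict(X_new)
print(predictions[:10])
```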


What "no free lunch" really means in machine learning

#artificialintelligence

You don't have to cook or spend any of your hard-earned money. The truth is, unless you count special talks and lectures in graduate school that promise free pizza, there is no free lunch in machine learning. The "no free lunch" (NFL) theorem for supervised machine learning essentially implies that no single machine learning algorithm is universally the best-performing one across all problems. This is a concept I explored in my previous article about the limitations of XGBoost, an algorithm that has gained immense popularity over the last five years due to its performance in academic studies and machine learning competitions. The goal of this article is to take this often misunderstood theorem and explain it so that you can appreciate the theory behind it and understand the practical implications it has for your work as a machine learning practitioner or data scientist.
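To make the practical reading concrete, here is a rough Python sketch (the datasets and models are arbitrary illustrative choices, not taken from the article or from any NFL proof) that compares two learners across two different data-generating processes; which learner ranks first can flip from one problem to the other.

```python
# Illustrative sketch of the "no free lunch" intuition: no single learner
# dominates across problems. Datasets and models are arbitrary choices
# for demonstration, not taken from the article or from the NFL theorem itself.
from sklearn.datasets import make_classification, make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

datasets = {
    "linear-ish": make_classification(n_samples=400, n_features=5,
                                      n_informative=2, n_redundant=3,
                                      random_state=0),
    "non-linear (moons)": make_moons(n_samples=400, noise=0.3, random_state=0),
}
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for data_name, (X, y) in datasets.items():
    for model_name, model in models.items():
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{data_name:>20} | {model_name:>20} | accuracy = {score:.3f}")
```

This only illustrates the practical takeaway; the formal theorem makes the stronger statement that performance averaged over all possible problems is the same for every algorithm.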


conformalClassification: A Conformal Prediction R Package for Classification

arXiv.org Machine Learning

The conformalClassification package implements Transductive Conformal Prediction (TCP) and Inductive Conformal Prediction (ICP) for classification problems. Conformal Prediction (CP) is a framework that complements the predictions of machine learning algorithms with reliable measures of confidence. TCP gives results with higher validity than ICP; however, ICP is computationally faster than TCP. The conformalClassification package is built upon the random forest method, where the votes of the random forest for each class are taken as the conformity scores for each data point. Although the main aim of the package is to generate CP p-values for classification problems, it also implements various diagnostic measures such as deviation from validity, error rate, efficiency, observed fuzziness, and calibration plots. In future releases, we plan to extend the package to use other machine learning algorithms (e.g., support vector machines) for model fitting.
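The package itself is written in R, but the inductive variant it implements can be sketched briefly in Python. The block below is a simplified illustration of the ICP recipe the abstract describes (random-forest class votes as conformity scores, calibration-set p-values); it is not the conformalClassification API, and the particular nonconformity score and data split are assumptions made for this example.

```python
# Rough Python sketch of Inductive Conformal Prediction (ICP) for classification.
# Not the conformalClassification R API; the nonconformity score below
# (one minus the forest's vote fraction for a class) is a simplifying assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Proper training set: fit the underlying model once (this is what makes ICP fast).
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_fit, y_fit)

# Calibration set: nonconformity = 1 - vote fraction for the true class.
cal_votes = forest.predict_proba(X_cal)
cal_scores = 1.0 - cal_votes[np.arange(len(y_cal)), y_cal]

# Test p-values: for each candidate label, the (smoothed) fraction of calibration
# scores at least as large as the test point's score under that label.
test_scores = 1.0 - forest.predict_proba(X_test)          # shape (n_test, n_classes)
counts = (cal_scores[None, None, :] >= test_scores[:, :, None]).sum(axis=2)
p_values = (counts + 1) / (len(cal_scores) + 1)

# A 90% prediction set contains every label whose p-value exceeds 0.1.
prediction_sets = [np.where(p > 0.1)[0] for p in p_values]
print(p_values[:3].round(3), prediction_sets[:3])
```

Splitting off a calibration set is exactly what makes ICP cheaper than TCP: the forest is fit once, whereas the transductive variant refits for each test point and candidate label.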


Abduction and Argumentation for Explainable Machine Learning: A Position Survey

arXiv.org Artificial Intelligence

This paper presents Abduction and Argumentation as two principled forms of reasoning and fleshes out the fundamental role that they can play within Machine Learning. It reviews the state-of-the-art work of the past few decades linking these two forms of reasoning with machine learning, and from this it elaborates on how the explanation-generating role of Abduction and Argumentation makes them naturally fitting mechanisms for the development of Explainable Machine Learning and AI systems. Abduction contributes towards this goal by facilitating learning through the transformation, preparation, and homogenization of data. Argumentation, as a conservative extension of classical deductive reasoning, offers a flexible prediction and coverage mechanism for learning -- an associated target language for learned knowledge -- that explicitly acknowledges the need to deal, in the context of learning, with uncertain, incomplete, and inconsistent data that are incompatible with any classically represented logical theory.