
5 Things You Don't Know About PyCaret - KDnuggets

#artificialintelligence

PyCaret is an open source machine learning library in Python for training and deploying supervised and unsupervised machine learning models in a low-code environment. It is known for its ease of use and efficiency. Compared with other open source machine learning libraries, PyCaret is a low-code alternative that can replace hundreds of lines of code with just a few. If you haven't used PyCaret before or would like to learn more, a good place to start is here. "After talking to many data scientists who use PyCaret on a daily basis, I have shortlisted 5 features of PyCaret that are lesser known but extremely powerful."


Learner's World

#artificialintelligence

In continuation of my previous posts on performance measures for classifiers, here I explain the concept of a single-score measure, namely the F-score. In my previous posts, I discussed four fundamental numbers, namely true positives, true negatives, false positives and false negatives, and eight basic ratios, namely sensitivity (or recall, or true positive rate) and specificity (or true negative rate), the false positive rate (or type-I error) and false negative rate (or type-II error), the positive predictive value (or precision) and negative predictive value, and the false discovery rate (or q-value) and false omission rate. I also discussed the accuracy paradox, the relationships between these basic ratios, and their trade-offs in evaluating the performance of a classifier, with examples. I'll be using the same confusion matrix for reference. Precision & Recall: First, let's briefly revisit 'Precision (PPV) & Recall (sensitivity)'.
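The ratios above follow directly from the four fundamental counts. A minimal sketch, using hypothetical confusion-matrix counts chosen only for illustration:

```python
# Hypothetical confusion-matrix counts (for illustration only).
tp, fp, fn, tn = 90, 10, 30, 870

precision = tp / (tp + fp)            # PPV: how many flagged positives are real
recall = tp / (tp + fn)               # sensitivity / TPR: how many reals we caught
specificity = tn / (tn + fp)          # TNR
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

With these counts, precision is 0.9, recall is 0.75, and the F1 score lands between them at roughly 0.818.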


Facial Recognition in Python

#artificialintelligence

One of the most important concepts in facial analysis using images is the region of interest (ROI): we must define the specific part of the image where we will filter or perform some operation. For example, if we need to filter the license plate of a car, our ROI is only the license plate. The street, the body of the car and anything else present in the image play only a supporting part in this operation. In our example, we will use the opencv library, which already has support for partitioning an image and helping us identify our ROI. In our project we will use a ready-made classifier known as the Haar cascade classifier. This particular classifier works only with grayscale images.
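The ROI idea is just array slicing. A minimal sketch without OpenCV, using a hypothetical 6x6 row-major pixel grid; in OpenCV/NumPy the same crop is written `roi = img[y1:y2, x1:x2]`:

```python
# A hypothetical 6x6 "image" as a row-major grid of pixel values.
image = [[row * 10 + col for col in range(6)] for row in range(6)]

def crop_roi(img, y1, y2, x1, x2):
    """Return the region of interest: rows y1..y2-1, columns x1..x2-1."""
    return [row[x1:x2] for row in img[y1:y2]]

roi = crop_roi(image, 2, 4, 1, 3)   # a 2x2 region of the image
print(roi)  # → [[21, 22], [31, 32]]
```

Everything outside the crop (the "street" around the license plate) is simply ignored by later operations.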


Introduction to Word Embedding

#artificialintelligence

Humans have always excelled at understanding language. It is easy for humans to understand the relationships between words, but for computers this task is not so simple. For example, we humans understand that word pairs like king and queen, man and woman, tiger and tigress have a certain type of relationship between them, but how can a computer figure this out? Word embeddings are a form of word representation that bridges the human understanding of language and that of a machine. They are learned representations of text in an n-dimensional space, where words that have the same meaning have similar representations.
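The king/queen relationship can be sketched with toy vectors. The 3-d embeddings below are hypothetical values invented for illustration (real embeddings are learned and have hundreds of dimensions), but they show how vector arithmetic captures the analogy:

```python
import math

# Hypothetical 3-d embeddings: coordinates loosely encode royalty, gender, age.
emb = {
    "king":  [0.9, 0.7, 0.5],
    "queen": [0.9, -0.7, 0.5],
    "man":   [0.1, 0.7, 0.4],
    "woman": [0.1, -0.7, 0.4],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# The classic analogy: king - man + woman lands near queen.
analogy = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
print(cosine(analogy, emb["queen"]))   # close to 1.0
```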


Supporting the Math Behind Support Vector Machines!

#artificialintelligence

Support Vector Machine (SVM) is a powerful classifier that works with both linear and non-linear data. In an n-dimensional space, the dimension of the separating hyperplane is (n-1). The goal of SVM is to find the optimal hyperplane that best separates the data, so that the distance from the nearest points to the hyperplane is maximized. To keep it simple, picture a road that separates the cars, buildings and pedestrians on its left and right while making the lane as wide as possible. The cars and buildings closest to the road are the support vectors.
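The geometry above can be sketched directly. A minimal example with a hypothetical linear hyperplane w·x + b = 0 in 2-d (so the "hyperplane" is a line of dimension n-1 = 1); the weights and points are invented for illustration:

```python
import math

# Hypothetical hyperplane parameters: the line x + y - 3 = 0.
w = [1.0, 1.0]
b = -3.0

def signed_distance(x):
    """Distance from point x to the hyperplane; the sign gives the side."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return (dot + b) / math.hypot(*w)

# The support vectors are the points nearest the hyperplane;
# half the margin is their distance to it.
points = [[1.0, 1.0], [2.0, 2.0], [3.0, 1.0]]
half_margin = min(abs(signed_distance(p)) for p in points)
print(half_margin)
```

Training an SVM amounts to choosing w and b so that this half-margin is as large as possible while the classes stay on opposite sides.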


Multi-Label Classification

#artificialintelligence

Classification has been a go-to approach for many problems for years now. However, as problems become more and more specific, a simple classification model can't be the solution for all of them. Rather than assigning one label/class to an instance, it is often more appropriate to assign a subset of labels. This is exactly how multi-label classification differs from multi-class classification. For example, classifying whether an audio file is music or not is a classification problem, while identifying all the genres present in a piece of fusion music is a multi-label classification problem.
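The difference shows up in how the target is encoded. A minimal sketch using a hypothetical genre set: multi-class targets pick exactly one label, while multi-label targets are a 0/1 indicator vector over the whole label set:

```python
# Hypothetical label set for the fusion-music example.
genres = ["jazz", "rock", "classical", "electronic"]

def to_indicator(labels):
    """Encode a subset of genres as a 0/1 indicator vector."""
    return [1 if g in labels else 0 for g in genres]

# Multi-class: exactly one label per instance.
multiclass_target = to_indicator({"rock"})             # [0, 1, 0, 0]

# Multi-label: any subset of labels per instance.
fusion_track = to_indicator({"jazz", "electronic"})    # [1, 0, 0, 1]
print(fusion_track)
```

In the simplest "binary relevance" approach, one binary classifier is then trained per position of this vector.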


Measuring the performance of a Classification problem

#artificialintelligence

It is often convenient to combine precision and recall into a single metric called the F1 score, in particular, if you need a simple way to compare two classifiers. The F1 score is the harmonic mean of precision and recall. The F1 score favors classifiers that have similar precision and recall. This is not always what you want: in some contexts, you mostly care about precision, and in other contexts, you really care about the recall. For example, if you trained a classifier to detect videos that are safe for kids, you would probably prefer a classifier that rejects many good videos (low recall) but keeps only safe ones (high precision), rather than a classifier that has a much higher recall but lets a few really bad videos show up in your product (in such cases, you may even want to add a human pipeline to check the classifier's video selection). On the other hand, suppose you train a classifier to detect shoplifters on surveillance images: it is probably fine if your classifier has only 30% precision as long as it has 99% recall (sure, the security guards will get a few false alerts, but almost all shoplifters will get caught).
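The claim that F1 favors balanced classifiers is easy to check numerically. A sketch with two hypothetical classifiers that have the same arithmetic-mean precision/recall but different balance:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

balanced = f1(0.70, 0.70)   # arithmetic mean of P and R is 0.70
lopsided = f1(0.95, 0.45)   # arithmetic mean is also 0.70
print(balanced, lopsided)   # the balanced classifier scores higher
```

The harmonic mean punishes imbalance: the lopsided classifier's F1 drops to about 0.61 even though its average of precision and recall is identical. That is exactly why F1 alone is the wrong metric for the kids-video and shoplifter examples, where one of the two ratios matters much more than the other.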


Categorizing Products at Scale

#artificialintelligence

With over 1M business owners now on Shopify, there are billions of products being created and sold across the platform. Just like those business owners, the products that they sell are extremely diverse! Even when selling similar products, they tend to describe products very differently. One may describe their sock product as a "woolen long sock," whereas another may have a similar sock product described as a "blue striped long sock." How can we identify similar products, and why is that even useful?
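One simple baseline for spotting similar products (a sketch of the general idea, not Shopify's actual method) is Jaccard similarity over lowercased title tokens, using the two sock titles from the example above:

```python
def jaccard(a, b):
    """Jaccard similarity of two titles' token sets: |A ∩ B| / |A ∪ B|."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

score = jaccard("woolen long sock", "blue striped long sock")
print(score)  # shared tokens {long, sock} out of 5 distinct tokens → 0.4
```

Real systems go far beyond token overlap (embeddings, taxonomies), but even this crude score groups the two sock listings above any unrelated product.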


Deep Learning in Simple Words

#artificialintelligence

There are two main steps in the conventional machine learning or ML pipeline: feature extraction and classification. The goal of feature extraction is to represent data in a numerical space, also called feature space. The goal of classification is to determine the group that each data point belongs to. If we can simply design a classifier to separate data into classes within the feature space, it means that feature extraction and classification work as needed. However, the story is not always as simple as this.
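The two-step pipeline can be sketched end to end. The features and the decision rule below are hypothetical toy choices, picked only to show the shape of the pipeline:

```python
def extract_features(text):
    """Step 1: map raw data to a point in a 2-d feature space
    (word count, average word length)."""
    words = text.split()
    return [len(words), sum(len(w) for w in words) / max(len(words), 1)]

def classify(features):
    """Step 2: separate classes in the feature space.
    A trivially simple rule: split on word count."""
    return "long" if features[0] > 3 else "short"

print(classify(extract_features("deep learning in simple words")))  # → long
```

When a separating rule this simple works in the feature space, the pipeline is doing its job; when it doesn't, that is where richer features, or deep models that learn the features themselves, come in.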


Is Deep Learning Necessary For Simple Classification Tasks

#artificialintelligence

Deep learning (DL) models are known for capturing nonlinearities in data that traditional estimators such as logistic regression cannot. However, there is still doubt about the increased use of computationally intensive DL for simple classification tasks. To find out whether DL really outperforms shallow models significantly, researchers from the University of Pennsylvania experimented with three ML pipelines involving traditional methods, AutoML and DL, in a paper titled 'Is Deep Learning Necessary For Simple Classification Tasks'. The UPenn researchers noted that a support-vector machine (SVM) model might predict susceptibility to a certain complex genetic disease more accurately than a gradient boosting model trained on the same dataset. Moreover, choosing different hyperparameters within that SVM model can vary performance.