Accuracy


Confusing metrics around the Confusion Matrix

#artificialintelligence

"If you can't measure it, you can't possibly improve it" . In the field of Machine Learning and Data Science, especially with statistical classification, a "Confusion Matrix" is often used to derive a bunch of metrics that can be examined to either improve the performance of a classifier model or to compare the performance of multiple models. Instead of starting from the mathematical formulae for the metrics, we will try to intuitively derive the formulae based on basic concepts. It is probably called "confusion" because it depicts how much confused the classifier was while doing its predictions -- some classes were correctly classified and some were not. The most important concept to understand before exploring any metric from the confusion matrix is the true meaning of the "positive" and the "negative" class in the context of the problem given to the classifier. The Positive class is the existence what we are trying to detect or predict.


Giuliano Liguori on LinkedIn: #BigData #Analytics #DataScience

#artificialintelligence

The variable you want to predict is called the dependent variable. The variable you are using to predict the other variable's value is called the independent variable. K-NN is a non-parametric algorithm, which means it makes no assumptions about the underlying data. It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs its computation only at classification time. The Naive Bayes classification algorithm is a probabilistic classifier.
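
To make the contrast concrete, here is a hedged sketch (toy data invented): scikit-learn's KNeighborsClassifier does little more than store the training set at fit time and defers the real work to prediction time, while GaussianNB fits class-conditional probability estimates up front:

from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: two independent variables, one binary dependent variable
X = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
y = [0, 0, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)  # lazy: fit mostly stores X and y
nb = GaussianNB().fit(X, y)                          # eager: fit estimates per-class Gaussians

print(knn.predict([[0.1, 0.2]]))        # neighbor distances computed here, at query time
print(nb.predict_proba([[0.1, 0.2]]))   # probabilistic output from the Bayes model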


Building Transparency Into AI Projects - AI Summary

#artificialintelligence

That means communicating why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it's monitored and updated, and the conditions under which it may be retired. There are four specific effects of building in transparency: 1) it decreases the risk of error and misuse, 2) it distributes responsibility, 3) it enables internal and external oversight, and 4) it expresses respect for people. In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To "prove" it was human, the company trained the AI to insert "umms" and "ahhs" into its request: for instance, "When would I like the reservation? ..." If the product team doesn't explain how to properly handle the outputs of the model, introducing AI can be counterproductive in high-stakes situations. In designing one such model -- one that flags x-rays for cancerous tumors -- the data scientists reasonably thought that erroneously marking an x-ray as negative when, in fact, it does show a cancerous tumor can have very dangerous consequences, so they set a low tolerance for false negatives and, thus, a high tolerance for false positives. Had they been properly informed -- had the design decision been made transparent to the end users -- the radiologists might have thought: I really don't see anything here, and I know the AI is overly sensitive, so I'm going to move on. By being transparent from start to finish, an organization can distribute genuine accountability among everyone involved, because each person is given the knowledge they need to make responsible decisions. Consider, for instance, a financial advisor who hides the existence of some investment opportunities and emphasizes the potential upsides of others because he gets a larger commission when he sells the latter. The more general point is that AI can undermine people's autonomy -- their ability to see the options available to them and to choose among them without undue influence or manipulation.
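
To make that design decision concrete, here is a hedged sketch (scores, labels, and thresholds all invented) of how lowering the decision threshold trades false negatives for false positives:

import numpy as np

# Invented predicted probabilities that each x-ray shows a tumor, with ground truth
scores = np.array([0.08, 0.15, 0.35, 0.25, 0.72, 0.90])
labels = np.array([0, 0, 1, 0, 1, 1])

for threshold in (0.5, 0.2):  # 0.2 mimics a "low tolerance for false negatives"
    preds = (scores >= threshold).astype(int)
    fn = int(((preds == 0) & (labels == 1)).sum())  # missed tumors
    fp = int(((preds == 1) & (labels == 0)).sum())  # false alarms
    print(f"threshold={threshold}: false negatives={fn}, false positives={fp}")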


Machine Learning on a Large Scale

#artificialintelligence

The ROC curve is also used to compute the area-under-the-ROC-curve metric. The ROC curve of a perfect model will approach the top-left corner, whilst a random model will approach the diagonal (where the true positive rate equals the false positive rate). The area under the ROC curve ranges between 0 and 1 and can be computed via a BinaryClassificationEvaluator object. The result is impressive, despite the attempt to hamper the model quality. The area under the ROC curve for the training set can be obtained from the model summary: lr_model.summary.areaUnderROC. The BinaryClassificationEvaluator object can also be used to compute the area under the PR curve.
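
As a hedged sketch of that evaluation step (lr_model and test_df are assumed names for a fitted PySpark LogisticRegressionModel and a held-out DataFrame with a "label" column), both areas can be computed like this:

from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Training-set AUROC straight from the fitted model's summary
print(lr_model.summary.areaUnderROC)

# Held-out AUROC and AUPR via an evaluator
predictions = lr_model.transform(test_df)
roc_eval = BinaryClassificationEvaluator(labelCol="label", metricName="areaUnderROC")
pr_eval = BinaryClassificationEvaluator(labelCol="label", metricName="areaUnderPR")
print(roc_eval.evaluate(predictions))
print(pr_eval.evaluate(predictions))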


Python for Machine Learning: A Tutorial

#artificialintelligence

Python has become the most popular data science and machine learning programming language. But in order to obtain effective data and results, it's important that you have a basic understanding of how it works with machine learning. In this introductory tutorial, you'll learn the basics of Python for machine learning, including different model types and the steps to take to ensure you obtain quality data, using a sample machine learning problem. In addition, you'll get to know some of the most popular libraries and tools for machine learning. Machine learning (ML) is a form of artificial intelligence (AI) that teaches computers to make predictions and recommendations and solve problems based on data. Its problem-solving capabilities make it a useful tool in industries such as financial services, healthcare, marketing and sales, and education, among others. There are three main types of machine learning: supervised, unsupervised, and reinforcement. In supervised learning, the computer is given a set of training data that includes both the input data (the features we predict from) and the output data (the values we want to predict).
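
As a minimal supervised-learning sketch in that spirit (numbers invented), the model is trained on paired inputs and outputs, then asked to predict the output for an unseen input:

from sklearn.linear_model import LinearRegression

# Training data: input features paired with known output labels
X_train = [[1], [2], [3], [4]]   # e.g., years of experience
y_train = [30, 40, 50, 60]       # e.g., salary in thousands (invented)

model = LinearRegression().fit(X_train, y_train)
print(model.predict([[5]]))      # predicted output for a new input, here about 70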


From Modeling to Scoring: Correcting Predicted Class Probabilities in Imbalanced Datasets

#artificialintelligence

Model evaluation is an important part of a data science project, and it's exactly this part that quantifies how good your model is, how much it has improved from the previous version, how much better it is than your colleague's model, and how much room for improvement there still is. It is not unusual in machine learning applications to deal with imbalanced datasets, such as fraud detection, computer network intrusion, medical diagnostics, and many more. Data imbalance refers to an unequal distribution of classes within a dataset, namely that there are far fewer events in one class than in the others. If, for example, we have a credit card fraud detection dataset, most of the transactions are not fraudulent and very few can be classed as fraudulent. This underrepresented class is called the minority class and, by convention, the positive class.
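
One common way to perform the correction the title refers to is the standard prior-correction formula for undersampling; whether the article uses exactly this approach is an assumption on my part. If only a fraction beta of the majority (negative) class was kept during training, a predicted probability p_s can be mapped back to the original class distribution:

def correct_probability(p_s: float, beta: float) -> float:
    # Map a probability predicted on an undersampled training set back to
    # the original distribution; beta is the fraction of negatives kept.
    # Standard prior-correction formula (an assumption, not confirmed as
    # the article's exact method).
    return beta * p_s / (beta * p_s - p_s + 1)

# Example: model trained with only 10% of the non-fraud rows kept
print(correct_probability(0.9, 0.1))  # a raw score of 0.9 deflates to about 0.47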


The Mystery of ADASYN is Revealed

#artificialintelligence

This research assumes that you are familiar with class imbalance and the ADASYN algorithm. We strongly encourage our readers to review the conference article that launched ADASYN (just type that into Google Scholar or see the References section of this document), and then read any number of articles in Towards Data Science that discuss class imbalance and ADASYN. That is because this is neither a guide nor an overview; it is a voyage into uncharted waters with startling discoveries. The answers are 1) surprising, 2) fascinating, and 3) extraordinary, in that order. All models in this research were built using the RandomForest and LogisticRegression algorithms in the scikit-learn library, to gain information about tree and linear structures, respectively. All predictive models were 10-fold cross-validated with stratified sampling, using stratify=y in train_test_split and cv=10 in GridSearchCV.
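
A hedged sketch of that setup (dataset and hyperparameter grid invented, since the article does not list them here), combining ADASYN from imbalanced-learn with a stratified split and 10-fold GridSearchCV:

from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Invented imbalanced dataset standing in for the article's data
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=42)

# Stratified split, as in the article: stratify=y
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Oversample the minority class with ADASYN before fitting
X_res, y_res = ADASYN(random_state=42).fit_resample(X_train, y_train)

# 10-fold cross-validated grid search, as in the article: cv=10
grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid={"n_estimators": [100, 200]}, cv=10)
grid.fit(X_res, y_res)
print(grid.best_params_, grid.score(X_test, y_test))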


Python: Confusion Matrix

#artificialintelligence

A confusion matrix is a supervised machine learning evaluation tool that provides more insight into the overall effectiveness of a machine learning classifier. Unlike a simple accuracy metric, which is calculated by dividing the number of correctly predicted records by the total number of records, a confusion matrix returns four distinct counts (true positives, false positives, true negatives, and false negatives) for you to work with. While I am not saying accuracy is always misleading, there are times, especially when working with imbalanced data, when accuracy can be all but useless. Let's consider credit card fraud. It is not uncommon, given a list of credit card transactions, for fraud events to make up as little as 1 in 10,000 records.
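
A quick hedged illustration of that point (data invented to match the 1-in-10,000 ratio): a classifier that never predicts fraud scores 99.99% accuracy yet catches nothing, which the confusion matrix exposes immediately:

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Invented labels: exactly 1 fraud among 10,000 transactions
y_true = np.zeros(10_000, dtype=int)
y_true[0] = 1

y_pred = np.zeros(10_000, dtype=int)  # a "model" that always predicts non-fraud

print(accuracy_score(y_true, y_pred))    # 0.9999 -- looks excellent
print(confusion_matrix(y_true, y_pred))  # [[9999 0] [1 0]]: the one fraud is a false negative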


Artificial Intelligence Colonoscopy System Shows Promise

#artificialintelligence

Laird Harrison writes about science, health and culture. His work has appeared in national magazines, in newspapers, on public radio and on websites. He is at work on a novel about alternate realities in physics. Harrison teaches writing at the Writers Grotto.


Drowning in Data

#artificialintelligence

In 1945, the volume of human knowledge doubled every 25 years. Now, that number is 12 hours [1]. Our rapidly increasing collective computational power, vast amounts of data, and our ability to assimilate them have seeded unprecedentedly fertile ground for innovation. Healthtech companies are sprouting from data-ridden soil at exponential rates. Cell-free DNA companies, once a rarity, are becoming ubiquitous. The genomics landscape, once dominated by a few, is being inundated by a slew of competitors. Grandiose claims of being able to diagnose 50 different cancers from a single blood sample, or to use AI to best dermatologists, radiologists, pathologists, etc., are being made at alarming rates. Accordingly, it's imperative to know how to assess these claims as fact or fiction, particularly when such claimants may employ "statistical misdirection". In this addition to "The Insider's Guide to Translational Medicine", we disarm perpetrators of statistical warfare of their greatest ...