From Modeling to Scoring: Correcting Predicted Class Probabilities in Imbalanced Datasets

#artificialintelligence

Model evaluation is an important part of a data science project, and it is exactly this part that quantifies how good your model is, how much it has improved from the previous version, how much better it is than your colleague's model, and how much room for improvement there still is. It is not unusual for machine learning applications to deal with imbalanced data, as in fraud detection, computer network intrusion, medical diagnostics, and many more. Data imbalance refers to an unequal distribution of classes within a dataset, namely that there are far fewer examples in one class than in the others. In a credit card fraud detection dataset, for example, most of the transactions are not fraudulent and very few can be classed as fraud. This underrepresented class is called the minority class and, by convention, the positive class.
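The correction referred to in the title is not spelled out in this excerpt, but a common choice when the majority class has been undersampled is the prior-correction formula p = βp_s / (βp_s − p_s + 1), where p_s is the probability predicted by the model trained on the resampled data and β is the fraction of negatives kept. A minimal sketch, assuming that formula:

```python
def correct_probability(p_s: float, beta: float) -> float:
    """Map a probability predicted on undersampled data back to the
    original class distribution.

    p_s:  positive-class probability from the model trained on the
          undersampled data.
    beta: fraction of majority-class (negative) examples kept during
          undersampling, e.g. 0.1 if 90% were discarded.
    """
    # Prior-correction formula (assumed here, not quoted from the article).
    return beta * p_s / (beta * p_s - p_s + 1.0)

# A model trained after keeping 10% of negatives predicts 0.9; under the
# true class ratio the corrected probability is much lower.
print(correct_probability(0.9, beta=0.1))  # ~0.47
```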


Why the high accuracy in classification is not always correct?

#artificialintelligence

Classification accuracy is a statistic that describes a classification model's performance by dividing the number of correct predictions by the total number of predictions. It is simple to compute and to interpret, making it the most commonly used statistic for assessing classifier models. But accuracy is not the best metric for evaluating a model in every scenario. In this article, we discuss the reasons not to trust the accuracy score unconditionally.
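To make the pitfall concrete, here is a minimal sketch (not from the article): on a dataset with 1% positives, a classifier that always predicts the majority class scores 99% accuracy while detecting nothing, and balanced accuracy exposes the failure.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# 1,000 samples, 1% positive class (e.g. fraud).
y_true = np.zeros(1000, dtype=int)
y_true[:10] = 1

# A "classifier" that always predicts the majority class.
y_pred = np.zeros(1000, dtype=int)

print(accuracy_score(y_true, y_pred))           # 0.99 -- looks great
print(balanced_accuracy_score(y_true, y_pred))  # 0.50 -- no better than chance
```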


Predicting Rain with Machine Learning

#artificialintelligence

Let's read in all the data we have. Since all the region data share a primary key, date, we can combine them with concat() in pandas and set the keys to the region names. I don't want the regions in the index, so we can reset the index and then rename some columns to get the data into the right shape. Let's first visualize our target class. It appears we have an imbalanced class on our hands, as the N label dominates the rest of the classes.
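A minimal sketch of the concat-and-reshape step described above; the file names and region names are placeholders, not taken from the article:

```python
import pandas as pd

# Hypothetical per-region CSVs, each sharing a 'date' column.
regions = {
    "north": pd.read_csv("north.csv"),
    "south": pd.read_csv("south.csv"),
}

# Stack the frames, labelling each block with its region name.
df = pd.concat(regions.values(), keys=regions.keys())

# Move the region label out of the index into an ordinary column.
df = df.reset_index(level=0).rename(columns={"level_0": "region"})
```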


A technique for making quantum computing more resilient to noise, which boosts performance

#artificialintelligence

Quantum computing continues to advance at a rapid pace, but one challenge that holds the field back is mitigating the noise that plagues quantum machines. This noise leads to much higher error rates compared to classical computers. It is often caused by imperfect control signals, interference from the environment, and unwanted interactions between qubits, the building blocks of a quantum computer. Performing computations on a quantum computer involves a "quantum circuit," a series of operations called quantum gates. These quantum gates, which are mapped to individual qubits, change the quantum states of those qubits, and it is these state changes that carry out the calculations needed to solve a problem.
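The article includes no code, but as a hedged illustration of what "a series of operations called quantum gates" looks like in practice, here is a minimal circuit sketch using Qiskit (the framework is my assumption; the article names none):

```python
from qiskit import QuantumCircuit

# A two-qubit circuit: each gate below is mapped onto specific qubits
# and changes their quantum state.
qc = QuantumCircuit(2)
qc.h(0)       # Hadamard gate: put qubit 0 into superposition
qc.cx(0, 1)   # CNOT gate: entangle qubit 0 (control) with qubit 1 (target)
qc.measure_all()

print(qc.draw())
```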


Evaluating classification models with Kolmogorov-Smirnov (KS) test

#artificialintelligence

In most binary classification problems we use the ROC curve and ROC AUC score as measurements of how well the model separates the predictions of the two different classes. I explain this mechanism in another article, but the intuition is easy: if the model gives lower probability scores to the negative class and higher scores to the positive class, we can say that this is a good model. Now here's the catch: we can also use the KS-2samp test to do that! The KS statistic for two samples is simply the greatest distance between their two empirical CDFs, so if we measure the distance between the positive and negative class distributions, we have another metric to evaluate classifiers. There is a benefit to this approach: a useful model's ROC AUC score ranges from 0.5 (random) to 1.0, while the KS statistic ranges from 0.0 to 1.0.
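A minimal sketch of the idea using scipy's two-sample KS test on predicted scores; the score distributions here are synthetic stand-ins, not data from the article:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic predicted probabilities: a reasonably good model scores
# negatives low and positives high.
scores_neg = rng.beta(2, 5, size=1000)
scores_pos = rng.beta(5, 2, size=100)

# KS statistic = maximum distance between the two empirical CDFs;
# values closer to 1.0 mean better class separation.
stat, p_value = ks_2samp(scores_neg, scores_pos)
print(f"KS statistic: {stat:.3f}")
```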


Confusion Matrix without Confusion

#artificialintelligence

As we know, the output of a classification problem takes one of two target values: 0 or 1, Yes or No, Positive or Negative, and so on, and our model tries to classify each data point accordingly. In the confusion matrix, the columns represent the True Class, the true or real label of each data point. The rows represent the Predicted Class, the prediction results derived from our model for the specific use case. True Positive (TP) is simply the count of data points where the predicted value is Positive and the true value is Positive too. True Negative (TN) is simply the count of data points where the predicted value is Negative and the true value is Negative too.
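A minimal sketch of computing these counts with scikit-learn (the article does not specify a library, so this is an assumption); note that sklearn's convention is rows = true class and columns = predicted class, the transpose of the layout described above:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# sklearn convention: rows are the true class, columns the predicted
# class, so cm[1][1] is TP and cm[0][0] is TN.
cm = confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = cm.ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")  # TP=3 TN=3 FP=1 FN=1
```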


A new perspective on classification: optimally allocating limited resources to uncertain tasks

arXiv.org Machine Learning

A central problem in business concerns the optimal allocation of limited resources to a set of available tasks, where the payoff of these tasks is inherently uncertain. In credit card fraud detection, for instance, a bank can only assign a small subset of transactions to their fraud investigations team. Typically, such problems are solved using a classification framework, where the focus is on predicting task outcomes given a set of characteristics. Resources are then allocated to the tasks that are predicted to be the most likely to succeed. However, we argue that using classification to address task uncertainty is inherently suboptimal as it does not take into account the available capacity. Therefore, we first frame the problem as a type of assignment problem. Then, we present a novel solution using learning to rank by directly optimizing the assignment's expected profit given limited, stochastic capacity. This is achieved by optimizing a specific instance of the net discounted cumulative gain, a commonly used class of metrics in learning to rank. Empirically, we demonstrate that our new method achieves higher expected profit and expected precision compared to a classification approach for a wide variety of application areas and data sets. This illustrates the benefit of an integrated approach and of explicitly considering the available resources when learning a predictive model.
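As a toy illustration (not the paper's method) of why capacity matters: with room for only k investigations, ranking transactions by predicted probability and taking the top k respects the constraint by construction, whereas thresholding a classifier's output can flag far more cases than the team can handle.

```python
import numpy as np

rng = np.random.default_rng(1)
p_fraud = rng.random(1000)   # hypothetical predicted fraud probabilities
capacity = 20                # the team can investigate only 20 transactions

# Classification view: flag everything above a threshold; the number of
# flagged cases bears no relation to the available capacity.
flagged = np.where(p_fraud > 0.5)[0]
print(len(flagged))          # ~500 flags for 20 investigators

# Ranking view: allocate the limited capacity to the top-k scored tasks.
top_k = np.argsort(p_fraud)[::-1][:capacity]
print(top_k)
```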


Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers

arXiv.org Artificial Intelligence

As AI-based systems increasingly impact many areas of our lives, auditing these systems for fairness is an increasingly high-stakes problem. Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment. Counterfactual fairness describes an individualized notion of fairness but is even more challenging to evaluate after deployment. We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers. For every prediction made by the deployed model, prediction sensitivity helps answer the question: would this prediction have been different if this individual had belonged to a different demographic group? Prediction sensitivity can leverage correlations between protected status and other features and does not require protected status information at prediction time. Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.


Latent gaze information in highly dynamic decision-tasks

arXiv.org Artificial Intelligence

Digitization is penetrating more and more areas of life. Tasks are increasingly completed digitally and are therefore fulfilled not only faster and more efficiently but also more purposefully and successfully. The rapid developments in the field of artificial intelligence in recent years have played a major role in this, as they have produced many helpful approaches to build on. At the same time, the eyes, their movements, and the meaning of these movements are being progressively researched. The combination of these developments has led to exciting approaches. In this dissertation, I present some of these approaches, which I worked on during my Ph.D. First, I provide insight into the development of models that use artificial intelligence to connect eye movements with visual expertise. This is demonstrated for two domains, or rather two groups of people: athletes in decision-making actions and surgeons in arthroscopic procedures. The resulting models can be considered digital diagnostic models for automatic expertise recognition. Furthermore, I show approaches that investigate the transferability of eye movement patterns to different expertise domains and, subsequently, important aspects of techniques for generalization. Finally, I address the temporal detection of confusion based on eye movement data. The results suggest the use of the resulting model as a clock signal for possible digital assistance options in the training of young professionals. An interesting aspect of my research is that I was able to draw on very valuable data from DFB youth elite athletes as well as from long-standing experts in arthroscopy. In particular, the work with the DFB data attracted the interest of radio and print media, namely DeutschlandFunk Nova and SWR DasDing. All resulting articles presented here have been published in internationally renowned journals or at conferences.


Everything You Need to Know to Build an Amazing Binary Classifier

#artificialintelligence

There are two general types of supervised machine learning approaches in their simplest form. First, you can have a regression problem, where you're trying to predict a continuous variable, such as the temperature or a stock price. The second is a classification problem, where you want to predict a categorical variable such as pass/fail or spam/ham. Classification further divides into binary classification, which we'll cover here, with only two outcomes, and multi-class classification, with more than two. We want to take several steps to prepare our data for machine learning, as sketched below.
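A minimal end-to-end sketch of a binary classifier with scikit-learn; the logistic regression model and synthetic data are my assumptions, since this excerpt names neither:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic two-class dataset standing in for e.g. spam/ham.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a simple binary classifier and evaluate on the held-out split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```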