Credit


The EU is considering a ban on AI for mass surveillance and social credit scores

#artificialintelligence

The European Union is considering banning the use of artificial intelligence for a number of purposes, including mass surveillance and social credit scores. This is according to a leaked proposal circulating online, first reported by Politico, ahead of an official announcement expected next week. If the draft proposal is adopted, the EU would take a strong stance on certain applications of AI, setting it apart from the US and China. Some use cases would be policed in a manner similar to the EU's regulation of digital privacy under the GDPR. Member states, for example, would be required to set up assessment boards to test and validate high-risk AI systems.


When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity (SSIR)

#artificialintelligence

In 2019, Genevieve (co-author of this article) and her husband applied for the same credit card. Despite having a slightly better credit score and the same income, expenses, and debt as her husband, the credit card company set her credit limit at almost half of his. This experience echoes one that made headlines later that year: a husband and wife compared their Apple Card spending limits and found that the husband's credit line was 20 times greater. Customer service employees were unable to explain why the algorithm deemed the wife significantly less creditworthy. Many institutions make decisions based on artificial intelligence (AI) systems built with machine learning (ML), in which a series of algorithms learns from massive amounts of data to find patterns and make predictions.
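
As a minimal sketch of the mechanism (synthetic data and scikit-learn, my assumptions rather than anything from the article), the snippet below shows how a model that never sees gender can still reproduce a gender gap through a correlated proxy feature when the historical approvals it learns from already encode one.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    gender = rng.integers(0, 2, n)             # never shown to the model
    income = rng.normal(60_000, 15_000, n)
    proxy = gender + rng.normal(0.0, 0.5, n)   # hypothetical spending feature correlated with gender
    # Historical approvals encoded a gap, so the training label inherits it.
    approved = (income / 60_000 - 0.8 * gender + rng.normal(0.0, 0.3, n)) > 0

    X = np.column_stack([income / 60_000, proxy])   # rescale income for stable fitting
    model = LogisticRegression().fit(X, approved)
    probs = model.predict_proba(X)[:, 1]
    print("mean approval prob, group 0:", probs[gender == 0].mean())
    print("mean approval prob, group 1:", probs[gender == 1].mean())

Dropping the protected attribute is not enough; the proxy carries the signal, which is how disparities like those described above can arise without anyone programming them in.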



Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

arXiv.org Artificial Intelligence

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs, wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global, and contextual levels; (2) is designed to work with users with varying levels of background knowledge of the underlying causal model; and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on three real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
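
The paper's counterfactuals are defined over an explicit causal model, so the following is only a toy illustration of the kind of contrastive, interventional question LEWIS answers with nothing but input-output access to a black box (the black box and column choices below are hypothetical):

    import numpy as np

    def contrastive_effect(black_box, X, col, a, a_prime):
        # Naive estimate of P(decision=1 | do(A=a')) - P(decision=1 | do(A=a)),
        # obtained by overwriting column `col` for every row.
        X_a, X_ap = X.copy(), X.copy()
        X_a[:, col], X_ap[:, col] = a, a_prime
        return black_box(X_ap).mean() - black_box(X_a).mean()

    # Hypothetical black box: approve when a fixed linear score is positive.
    black_box = lambda X: (X @ np.array([0.5, -1.0, 0.2]) > 0).astype(float)
    X = np.random.default_rng(1).normal(size=(5_000, 3))
    print(contrastive_effect(black_box, X, col=1, a=0.0, a_prime=1.0))

The naive overwrite assumes the intervened variable has no downstream effects on other inputs; handling those dependencies correctly is exactly what LEWIS's underlying causal model is for.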


AI models need to be 'interpretable' rather than just 'explainable'

#artificialintelligence

Last November, Apple ran into trouble after customers pointed out on Twitter that its credit card service was discriminating against women. David Heinemeier Hansson, the creator of Ruby on Rails, called Apple Card a sexist program. "Apple's black box algorithm thinks I deserve 20x the credit limit [my wife] does," he tweeted. The success of deep learning in the past decade has increased interest in the field of artificial intelligence. But the rising popularity of AI has also highlighted some of the field's key problems, including the "black box problem": the challenge of making sense of how complex machine learning algorithms make decisions.
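
One way to see the distinction in code (the setup below is my assumption, not the article's): an interpretable model such as a sparse linear classifier needs no post-hoc explainer, because its weights are its decision rule.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1_000, 3))   # hypothetical inputs: income, debt, utilization
    y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.5, 1_000)) > 0

    model = LogisticRegression().fit(X, y)
    for name, w in zip(["income", "debt", "utilization"], model.coef_[0]):
        print(f"{name:>11}: {w:+.2f}")   # each weight can be read and audited directly

A deep network offers no such readout; post-hoc explainers only approximate one after the fact, which is the gap between "explainable" and "interpretable" that the article draws.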


Fairness in Credit Scoring: Assessment, Implementation and Profit Implications

arXiv.org Machine Learning

The rise of algorithmic decision-making has spawned much research on fair machine learning (ML). Financial institutions use ML for building risk scorecards that support a range of credit-related decisions. Yet, the literature on fair ML in credit scoring is scarce. The paper makes two contributions. First, we provide a systematic overview of algorithmic options for incorporating fairness goals in the ML model development pipeline. In this scope, we also consolidate the space of statistical fairness criteria and examine their adequacy for credit scoring. Second, we perform an empirical study of different fairness processors in a profit-oriented credit scoring setup using seven real-world data sets. The empirical results substantiate the evaluation of fairness measures, identify more and less suitable options to implement fair credit scoring, and clarify the profit-fairness trade-off in lending decisions. Specifically, we find that multiple fairness criteria can be approximately satisfied at once and identify separation as a proper criterion for measuring the fairness of a scorecard. We also find fair in-processors to deliver a good balance between profit and fairness. More generally, we show that algorithmic discrimination can be reduced to a reasonable level at a relatively low cost.
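
As a sketch of the separation criterion the paper singles out (my formulation, on synthetic data rather than the paper's seven data sets): for binary lending decisions, separation amounts to equal true-positive and false-positive rates across protected groups, so the gaps below are a direct, if crude, fairness read-out for a scorecard.

    import numpy as np

    def separation_gaps(y_true, y_pred, group):
        # Largest between-group gap in true-positive and false-positive rates.
        gaps = {}
        for outcome, name in [(1, "TPR"), (0, "FPR")]:
            mask = (y_true == outcome)
            rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
            gaps[name + " gap"] = max(rates) - min(rates)
        return gaps

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 10_000)   # 1 = repaid, in this toy encoding
    group = rng.integers(0, 2, 10_000)
    # A deliberately biased scorecard: approves group 1 more often at either outcome.
    y_pred = (rng.random(10_000) < 0.4 + 0.1 * group).astype(int)
    print(separation_gaps(y_true, y_pred, group))

Gaps near zero mean the scorecard's errors fall evenly across groups, which is what satisfying separation requires.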


Predicting Credit Card Defaults with Machine Learning

#artificialintelligence

Sometimes the best model is the simplest. The model with minimal manipulation yielded the highest recall score, 0.95. After feature selection and hyperparameter tuning, recall decreased to 0.79. Overfitting means the model is strong at predicting the data on which it was trained but weak at generalizing to unseen data. Here, the validation score is similar to the test score, so we know the model performs similarly on completely unseen data.
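
A minimal sketch of that check on synthetic data (the post's actual dataset and models are not shown): compare recall on the train, validation, and test splits; a large train-to-validation gap signals overfitting, while similar validation and test scores suggest the model behaves the same on data it has never seen.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6_000, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, 6_000)) > 0   # stand-in for "defaulted"

    # 60/20/20 split into train, validation, and test sets.
    X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    for name, Xs, ys in [("train", X_tr, y_tr), ("validation", X_val, y_val), ("test", X_test, y_test)]:
        print(f"{name:>10} recall: {recall_score(ys, model.predict(Xs)):.3f}")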


How Does AI Impact Personal Loan Decisions?

#artificialintelligence

Artificial Intelligence (AI)-driven lending practices are gaining visibility and credibility. AI tools built on machine learning can analyze more data and return more accurate answers to loan requests. Lenders using new AI systems can evaluate bank account balances alongside purchase history, social media habits, and utility payments to determine a person's creditworthiness. Those without established credit can benefit greatly from AI lenders. New startup lenders are using AI to approve personal loans for people with a short or nonexistent credit history who have a reliable income and high earning potential.
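
Purely as an illustration (every feature below is hypothetical, not taken from the article), a thin-file applicant can still be scored when the model is trained on alternative signals such as bank balances and utility-payment history:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 5_000
    alt_data = np.column_stack([
        rng.normal(3_000, 800, n),      # average bank balance
        rng.integers(0, 25, n),         # on-time utility payments in the last 24 months
        rng.normal(55_000, 12_000, n),  # verified income
    ])
    repaid = (alt_data[:, 1] / 24 + alt_data[:, 2] / 55_000 + rng.normal(0.0, 0.4, n)) > 1.0

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(alt_data, repaid)
    applicant = np.array([[2_500, 22, 48_000]])   # no bureau history at all
    print("estimated repayment probability:", model.predict_proba(applicant)[0, 1])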


Council Post: In AI (Can) We Trust?

#artificialintelligence

Artificial intelligence (AI) is the best thing to happen to our lives. It helps us read our emails, complete our sentences, get directions, shop online, get dining and entertainment recommendations, and even connect with old friends or make new ones on social media. AI is not only becoming skilled at many human jobs; it is also making decisions for us. The question is whether these decisions can be trusted. To elaborate: does AI-aided recruitment select the right candidates, or reject them? Is the Tinder match made in heaven or by the algorithm?


How to Learn when Data Reacts to Your Model: Performative Gradient Descent

arXiv.org Machine Learning

Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank that uses the number of open credit lines to determine a customer's risk of default may induce customers to open more credit lines to improve their chances of being approved. Because of the interactions between the model and the data distribution, finding the optimal model parameters is challenging. Work in this area has focused on finding stable points, which can be far from optimal. Here we introduce performative gradient descent (PerfGD), the first algorithm that provably converges to the performatively optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.
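
As a toy sketch of the idea rather than the paper's algorithm: in a one-dimensional setting where the data mean shifts linearly with the deployed parameter, the performative gradient picks up a second term through the distribution's dependence on the model, and that dependence can be estimated with a finite difference across consecutive deployments. (In this linear toy the stable and optimal points happen to coincide; the sketch only shows the two-term gradient mechanics.)

    import numpy as np

    mu0, eps = 1.0, 0.5      # hypothetical strength of the data's reaction to the model
    rng = np.random.default_rng(0)

    def deploy(theta, n=100_000):
        # Data reacts to the deployed model: z ~ N(mu0 + eps * theta, 1).
        return rng.normal(mu0 + eps * theta, 1.0, n)

    # Performative loss: L(theta) = E_{z ~ D(theta)}[(theta - z)^2].
    theta_prev, theta = 0.0, 0.5
    mu_prev = deploy(theta_prev).mean()
    for _ in range(30):
        mu = deploy(theta).mean()
        dmu = (mu - mu_prev) / (theta - theta_prev)   # finite-difference sensitivity of D to theta
        grad = 2.0 * (theta - mu) * (1.0 - dmu)       # chain rule through the shifting distribution
        theta_prev, mu_prev = theta, mu
        theta -= 0.3 * grad
    print(f"estimated optimum: {theta:.2f}  (true optimum: {mu0 / (1 - eps):.2f})")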