Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges

arXiv.org Machine Learning

Machine learning algorithms are now frequently used in sensitive contexts that substantially affect the course of human lives, such as credit lending or criminal justice. This is driven by the idea that 'objective' machines base their decisions solely on facts and remain unaffected by human cognitive biases, discriminatory tendencies, or emotions. Yet there is overwhelming evidence that algorithms can inherit or even perpetuate human biases in their decision making when they are trained on data that contains biased human decisions. This has led to a call for fairness-aware machine learning. However, fairness is a complex concept, and this complexity is reflected in the attempts to formalize fairness for algorithmic decision making. Statistical formalizations of fairness lead to a long list of criteria that are each flawed (or even harmful) in different contexts. Moreover, inherent tradeoffs between these criteria make it impossible to unify them in one general framework. Thus, fairness constraints in algorithms have to be specific to the domains to which the algorithms are applied. Future research on algorithmic decision-making systems should be aware of data and developer biases and add a focus on transparency to facilitate regular fairness audits.
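
To make two of these statistical criteria concrete, here is a minimal sketch (not from the paper; the data and bias level are synthetic) computing the demographic parity gap and the equal opportunity gap with plain NumPy:

```python
# Two common statistical fairness criteria on synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)   # true outcomes
# Predictions deliberately skewed in favor of group 1:
y_pred = (rng.random(1000) < 0.4 + 0.1 * group).astype(int)

def demographic_parity_gap(y_pred, group):
    # |P(yhat=1 | g=0) - P(yhat=1 | g=1)|: equal positive rates across groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # |TPR_0 - TPR_1|: equal true-positive rates across groups.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.3f}")
```

Which gap matters depends on the domain; as the abstract notes, the criteria trade off against each other and generally cannot all be satisfied at once.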


Fairness in algorithmic decision-making

#artificialintelligence

Algorithmic or automated decision systems use data and statistical analyses to classify people for the purpose of assessing their eligibility for a benefit or penalty. Such systems have traditionally been used for credit decisions and are now widely used for employment screening, insurance eligibility, and marketing. They are also used in the public sector, including for the delivery of government services and in criminal justice sentencing and probation decisions. Most of these automated decision systems rely on traditional statistical techniques like regression analysis. Recently, though, these systems have incorporated machine learning to improve their accuracy and fairness. These advanced statistical techniques seek to find patterns in data without requiring the analyst to specify in advance which factors to use. They will often find new, unexpected connections that might not be obvious to the analyst or follow from a common-sense or theoretical understanding of the subject matter. As a result, they can help to discover new factors that improve the accuracy of eligibility predictions and the decisions based on them.
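
As an illustration of that last point (my own sketch, not from the article; the feature names and data are hypothetical), a learned model can surface a predictive interaction that the analyst never specified:

```python
# A gradient-boosted classifier discovering an unspecified interaction
# in synthetic eligibility data. Feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))  # columns: income, debt, tenure, age (all synthetic)
# Eligibility depends partly on a tenure*age interaction nobody specified:
y = ((X[:, 0] - X[:, 1] + 0.5 * X[:, 2] * X[:, 3]) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
for name, imp in zip(["income", "debt", "tenure", "age"], model.feature_importances_):
    print(f"{name:>7}: {imp:.2f}")  # tenure and age gain weight via their interaction
```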


Bias, Fairness, and Accountability with AI and ML Algorithms

arXiv.org Machine Learning

The advent of AI and ML algorithms has led to opportunities as well as challenges. In this paper, we provide an overview of bias and fairness issues that arise with the use of ML algorithms. We describe the types and sources of data bias and discuss the nature of algorithmic unfairness. This is followed by a review of fairness metrics in the literature, a discussion of their limitations, and a description of de-biasing (or mitigation) techniques in the model life cycle.
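
As one concrete example of the pre-processing stage of such mitigation, here is a minimal sketch of reweighing (Kamiran & Calders, 2012), which weights instances so that the protected attribute and the label look statistically independent; the implementation and variable names are mine, not the paper's:

```python
# Reweighing: weight w(g, y) = P(g) * P(y) / P(g, y) per instance.
import numpy as np

def reweighing_weights(group, y):
    group, y = np.asarray(group), np.asarray(y)
    n = len(y)
    w = np.empty(n)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            if mask.any():
                # expected count under independence / observed count
                w[mask] = ((group == g).mean() * (y == label).mean() * n) / mask.sum()
    return w

# The weights can be passed as sample_weight to most classifiers, e.g.:
# LogisticRegression().fit(X, y, sample_weight=reweighing_weights(group, y))
```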


Racial Bias and Gender Bias Examples in AI systems

#artificialintelligence

I have been thinking of interactive ways of getting my postgraduate thesis, "Racial Bias, Gender Bias, AI: New Ways to Approach Human Computer Interaction", out to everyone. Life has been super busy, so I have decided to share snippets of the thesis for now. For this research paper, the researcher has identified a number of areas of concern regarding systems powered by AI being deployed in situations that affect the lives of humans. These examples will be used to further highlight this area of concern. Suggestions have been made that decision-support systems powered by AI can be used to augment human judgement and reduce both conscious and unconscious biases (Anderson & Anderson, 2007).


Programming Fairness in Algorithms

#artificialintelligence

Being good is easy; what is difficult is being just. We need to defend the interests of those whom we've never met and never will. Note: This article is intended for a general audience, to try to elucidate the complicated nature of unfairness in machine learning algorithms. As such, I have tried to explain concepts in an accessible way with minimal use of mathematics, in the hope that everyone can get something out of reading this. Supervised machine learning algorithms are inherently discriminatory. They are discriminatory in the sense that they use information embedded in the features of data to separate instances into distinct categories; indeed, this is their designated purpose in life. This is reflected in the name for these algorithms, which are often referred to as discriminative algorithms (splitting data into categories), in contrast to generative algorithms (generating data from a given category). When we use supervised machine learning, this "discrimination" is used as an aid to help us categorize our data into distinct categories within the data distribution, as illustrated below. Whilst this occurs whenever we apply discriminative algorithms, such as support vector machines or forms of parametric regression (e.g., logistic regression), the discrimination itself carries no moral weight. For example, using last week's weather data to try and predict the weather tomorrow has no moral valence attached to it.
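
To ground the discriminative/generative distinction (my illustration, not the author's code), the sketch below fits a discriminative classifier, which learns the decision boundary P(y|x) directly, and a generative one, which models P(x|y)P(y), on the same toy data:

```python
# Discriminative vs. generative classification on synthetic two-cluster data.
import numpy as np
from sklearn.linear_model import LogisticRegression  # discriminative: learns P(y|x)
from sklearn.naive_bayes import GaussianNB           # generative: models P(x|y)P(y)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.repeat([0, 1], 200)

for clf in (LogisticRegression(), GaussianNB()):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```

Both simply split morally neutral feature space here; unfairness only enters when the features encode sensitive human attributes.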