

Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle

Deck, Luca, Schomäcker, Astrid, Speith, Timo, Schöffer, Jakob, Kästner, Lena, Kühl, Niklas

arXiv.org Artificial Intelligence

The widespread use of artificial intelligence (AI) systems across various domains is increasingly surfacing issues related to algorithmic fairness, especially in high-stakes scenarios. Thus, critical considerations of how fairness in AI systems might be improved -- and what measures are available to aid this process -- are overdue. Many researchers and policymakers see explainable AI (XAI) as a promising way to increase fairness in AI systems. However, there is a wide variety of XAI methods and fairness conceptions expressing different desiderata, and the precise connections between XAI and fairness remain largely nebulous. Moreover, different measures to increase algorithmic fairness might be applicable at different points throughout an AI system's lifecycle. Yet, there is currently no coherent mapping of fairness desiderata along the AI lifecycle. In this paper, we distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them. We hope to provide orientation for practical applications and to inspire XAI research specifically focused on these fairness desiderata.
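To make the XAI-fairness connection concrete, here is a minimal sketch (not from the paper; all data and column names are synthetic assumptions) in which a simple global explanation -- logistic-regression coefficients -- reveals whether a model leans on a protected attribute:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    protected = rng.integers(0, 2, n)                # binarized group membership
    income = rng.normal(50 + 10 * protected, 5, n)   # correlated with the group
    X = np.column_stack([protected, income])
    y = (income + rng.normal(0, 5, n) > 55).astype(int)  # outcome driven by income alone

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # The coefficients serve as a crude global explanation: a large weight on
    # the protected column signals that predictions may encode group membership.
    for name, coef in zip(["protected", "income"], model.coef_[0]):
        print(f"{name}: {coef:+.3f}")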


Justitia ex Machina: The Case for Automating Morals

#artificialintelligence

This piece was a finalist for the inaugural Gradient Prize. Machine learning is a powerful set of techniques for automatically learning models from data; it has recently been the driving force behind several impressive technological leaps such as self-driving cars, robust speech recognition, and, arguably, better-than-human image recognition. We rely on these machine learning models daily; they influence our lives in ways we did not expect, and they are only going to become even more ubiquitous. Consider four example machine learning models: 1) detecting cats in images, 2) deciding which ads to show you online, 3) predicting which areas will suffer crime, and 4) predicting how likely a criminal is to re-offend. The first two seem harmless enough.


Biases in AI Systems

Communications of the ACM

This article provides an organization of various kinds of biases that can occur in the AI pipeline starting from dataset creation and problem formulation to data analysis and evaluation. It highlights the challenges associated with the design of bias-mitigation strategies, and it outlines some best practices suggested by researchers. Finally, a set of guidelines is presented that could aid ML developers in identifying potential sources of bias, as well as avoiding the introduction of unwanted biases. The work is meant to serve as an educational resource for ML developers in handling and addressing issues related to bias in AI systems.
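As one illustration of the kind of check such guidelines suggest, the following sketch (the column names "group" and "label" and the toy data are assumptions, not from the article) audits a dataset for two common bias sources: unequal group representation and unequal base rates.

    import pandas as pd

    df = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "a", "b", "a"],
        "label": [1, 0, 1, 0, 0, 1, 1, 1],
    })

    audit = df.groupby("group")["label"].agg(
        count="size",          # representation: rows per group
        positive_rate="mean",  # base rate: share of positive labels per group
    )
    print(audit)  # large gaps in either column flag a potential source of bias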


Biases in AI Systems - ACM Queue

#artificialintelligence

It is important to understand the structural dependencies among various features in the dataset. Often, it helps to draw a structural diagram illustrating various features of interest and their interdependencies. This can then help in identifying the sources of bias [20].
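A minimal sketch of such a structural diagram, rendered as a directed graph with networkx (the features and edges below are illustrative assumptions, not from the article):

    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("race", "neighborhood"),          # residential segregation
        ("neighborhood", "zip_code"),
        ("zip_code", "credit_score"),      # geography feeds the score
        ("credit_score", "loan_decision"),
    ])

    # Any path from a protected attribute into the decision exposes a proxy route.
    for path in nx.all_simple_paths(g, "race", "loan_decision"):
        print(" -> ".join(path))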


The pitfalls of a 'retrofit human' in AI systems

#artificialintelligence

Stanislav Petrov is not a famous name in computer science, like Ada Lovelace or Grace Hopper, but his story serves as a critical lesson for developers of AI systems. Petrov, who passed away on May 19, 2017, was a lieutenant colonel in the Soviet Union's Air Defense Forces. On September 26, 1983, an alarm announced that the U.S. had launched five nuclear-armed intercontinental ballistic missiles (ICBMs). His job, as the human in the loop of this detection system, was to escalate to leadership so that Soviet missiles could be launched in retaliation, ensuring mutually assured destruction. As the sirens blared, he took a moment to pause and think critically: why would the U.S. send only five missiles?


Cognitive Bias in Machine Learning – The Data Lab – Medium

#artificialintelligence

Companies from a wide range of industries use machine learning to do everyday business. From consumer marketing and workforce management to healthcare treatment decisions and public safety and policing, whether you realize it or not, your life is increasingly affected by the outcomes of machine learning algorithms. Machine learning algorithms make decisions like who gets a bonus or a job interview, whether your credit card limit (or interest rate) is raised, and who gets into a clinical trial. Machine learning algorithms even help decide who gets parole and who languishes in prison. The result is that people's lives and livelihoods are affected by the decisions made by machines.


The Danger of Bias in an AI Tech-Based Society

#artificialintelligence

Currently, algorithms are used to make life-altering financial and legal decisions like who gets a job, what medical treatment people receive, and who is granted parole. In theory, this should lead to fairer decision making. In reality, AI tech can be just as biased as the humans who create it. We are living in the age of the algorithm. More and more, we are handing decision making over to mathematical models.


Don't Be Hit by the Analytics Backlash - International Institute for Analytics

#artificialintelligence

One of my typical activities in May is to teach the Analytics Academy, a short program offered to Harvard Ph.D. students and graduates from the School of Arts and Sciences. The session is offered through the Office of Career Services and explores how Ph.D.s from various fields can secure jobs and prosper outside of academia in analytics, big data, and artificial intelligence. Since that same office provided me with some very useful business orientation when I was seeking a (partially, as it turns out) nonacademic career, I am always happy to return the favor. I've been doing this program for almost a decade, and the students have always been very enthusiastic and positive about analytics. But this year was different.


SXSW 2018: Protect AI, robots, cars (and us) from bias

Robohub

As Mark Hamill humorously shared behind-the-scenes stories of "Star Wars: The Last Jedi" with a packed SXSW audience, two floors below on the exhibit floor Universal Robots recreated General Grievous' famed lightsaber battles. The battling machines were steps away from a twelve-foot dancing Kuka robot and an automated coffee dispensary. Somehow the famed interactive festival known for its late-night drinking, dancing, and concerts had a very mechanical feel this year. Everywhere, debates ensued between utopian tech visionaries and dystopia-fearing humanists. Even my panel on "Investing In The Autonomy Economy" took a very social turn when discussing the opportunities of using robots for the growing aging population.


We need to shine more light on algorithms so they can help reduce bias, not perpetuate it

#artificialintelligence

It was a striking story. "Machine Bias," the headline read, and the teaser proclaimed: "There's software used across the country to predict future criminals. And it's biased against blacks." ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS. It is being used to forecast which criminals are most likely to reoffend.
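The disparity ProPublica reported centered on error rates rather than overall accuracy. A minimal sketch of that kind of check (toy arrays, not ProPublica's data or code) compares false positive rates across groups:

    import numpy as np

    group = np.array(["black", "white", "black", "white", "black", "white"])
    predicted_high_risk = np.array([1, 0, 1, 1, 1, 0])
    reoffended = np.array([0, 0, 1, 1, 0, 0])

    for g in np.unique(group):
        mask = (group == g) & (reoffended == 0)  # people who did not reoffend...
        fpr = predicted_high_risk[mask].mean()   # ...yet were flagged high risk
        print(f"{g}: false positive rate = {fpr:.2f}")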