Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Deck, Luca, Schomäcker, Astrid, Speith, Timo, Schöffer, Jakob, Kästner, Lena, Kühl, Niklas
The widespread use of artificial intelligence (AI) systems across various domains is increasingly surfacing issues related to algorithmic fairness, especially in high-stakes scenarios. Thus, critical considerations of how fairness in AI systems might be improved -- and what measures are available to aid this process -- are overdue. Many researchers and policymakers see explainable AI (XAI) as a promising way to increase fairness in AI systems. However, there is a wide variety of XAI methods and fairness conceptions expressing different desiderata, and the precise connections between XAI and fairness remain largely nebulous. Moreover, different measures to increase algorithmic fairness might be applicable at different points throughout an AI system's lifecycle. Yet, there currently is no coherent mapping of fairness desiderata along the AI lifecycle. In this paper, we distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them. We hope to provide orientation for practical applications and to inspire XAI research specifically focused on these fairness desiderata.
- Europe > Germany > Bavaria > Upper Franconia > Bayreuth (0.04)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
- North America > United States > Hawaii (0.04)
- Research Report (1.00)
- Overview (1.00)
- Law (1.00)
- Health & Medicine (0.93)
- Information Technology > Security & Privacy (0.67)
- Government (0.66)
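One of the fairness desiderata the paper discusses, demographic parity, can be made concrete with a short check. The sketch below is illustrative only: the function name, the binary decisions, and the group labels are all hypothetical, not taken from the paper.

```python
# Minimal sketch of a demographic-parity check on hypothetical
# model outputs: compare the positive-decision rate across groups.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

preds = [1, 0, 1, 1, 0, 0, 1, 0]                    # binary decisions from a model
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # protected-attribute labels
gap = demographic_parity_gap(preds, groups)          # 0.75 vs 0.25 -> gap of 0.5
```

A gap near zero means both groups receive positive decisions at similar rates; how large a gap is acceptable is exactly the kind of normative question the paper's desiderata try to organize.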
Justitia ex Machina: The Case for Automating Morals
This piece was a finalist for the inaugural Gradient Prize. Machine learning is a powerful technique for automatically learning models from data; it has recently been the driving force behind several impressive technological leaps such as self-driving cars, robust speech recognition, and, arguably, better-than-human image recognition. We rely on these machine learning models daily; they influence our lives in ways we did not expect, and they are only going to become even more ubiquitous. Consider a few example machine learning models: 1) detecting cats in images, 2) deciding which ads to show you online, 3) predicting which areas will suffer crime, and 4) predicting how likely a criminal is to re-offend. The first two seem harmless enough.
- Health & Medicine (0.69)
- Transportation (0.56)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.47)
- Law (0.47)
Biases in AI Systems
This article organizes the various kinds of bias that can occur in the AI pipeline, from dataset creation and problem formulation to data analysis and evaluation. It highlights the challenges associated with the design of bias-mitigation strategies, and it outlines some best practices suggested by researchers. Finally, a set of guidelines is presented that could aid ML developers in identifying potential sources of bias, as well as avoiding the introduction of unwanted biases. The work is meant to serve as an educational resource for ML developers in handling and addressing issues related to bias in AI systems.
- Health & Medicine (1.00)
- Information Technology (0.94)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.94)
- Information Technology > Communications > Social Media (0.69)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.47)
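One of the earliest pipeline stages the article covers, dataset creation, admits a very simple bias check: compare the positive-label rate across groups before any model is trained. The sketch below is a generic illustration of that idea, with hypothetical labels and group assignments, not code from the article.

```python
# Sketch: per-group positive-label rates as a quick screen for
# sampling or labeling bias in a training dataset.
from collections import Counter

def label_rates_by_group(labels, groups):
    """Return the positive-label rate for each group in the dataset."""
    totals, positives = Counter(), Counter()
    for y, g in zip(labels, groups):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

labels = [1, 1, 1, 0, 1, 0, 0, 0]                   # hypothetical ground-truth labels
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # hypothetical group membership
rates = label_rates_by_group(labels, groups)         # {"a": 0.75, "b": 0.25}
```

A large disparity here does not prove bias (base rates can genuinely differ), but it flags exactly the kind of dataset-stage issue the article's guidelines ask developers to investigate before modeling.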
The pitfalls of a 'retrofit human' in AI systems
Stanislav Petrov is not a famous name in the computer science space, like Ada Lovelace or Grace Hopper, but his story serves as a critical lesson for developers of AI systems. Petrov, who passed away on May 19, 2017, was a lieutenant colonel in the Soviet Union's Air Defense Forces. On September 26, 1983, an alarm announced that the U.S. had launched five nuclear-armed intercontinental ballistic missiles (ICBMs). His job, as the human in the loop of this technical detection system, was to escalate to leadership to launch Soviet missiles in retaliation, ensuring mutually assured destruction. As the sirens blared, he took a moment to pause and think critically. Why would the U.S. send only five missiles?
Cognitive Bias in Machine Learning – The Data Lab – Medium
Companies from a wide range of industries use machine learning to do everyday business. From consumer marketing and workforce management to healthcare treatment decision solutions and public safety and policing solutions, whether you realize it or not, your life is increasingly affected by the outcomes of machine learning algorithms. Machine learning algorithms make decisions like who gets a bonus, who gets a job interview, and whether or not your credit card limit (or interest rate) is raised, and who gets into a clinical trial. Machine learning algorithms even help make decisions about who gets parole and who languishes in prison. The result is that people's lives and livelihoods are affected by the decisions made by machines.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Simulation of Human Behavior (0.43)
- Information Technology > Artificial Intelligence > Natural Language > Question Answering (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Memory-Based Learning > Case Based Reasoning (0.40)
The Danger of Bias in an AI Tech-Based Society
Currently, algorithms are used to make life-altering financial and legal decisions like who gets a job, what medical treatment people receive, and who gets granted parole. In theory, this should lead to fairer decision making. In reality, AI tech can be just as biased as the humans who create it. We are living in the age of the algorithm. More and more, we are handing decision making over to mathematical models.
- Europe > France (0.15)
- North America > United States > Wisconsin (0.05)
- North America > United States > Virginia (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law > Criminal Law (0.72)
- Information Technology > Security & Privacy (0.70)
Don't Be Hit by the Analytics Backlash - International Institute for Analytics
One of my typical activities in May is to teach the Analytics Academy, a short program offered to Harvard Ph.D. students and graduates from the School of Arts and Sciences. The session is offered through the Office of Career Services and explores how Ph.D.s from various fields can secure jobs and prosper outside of academia in the field of analytics, big data, and artificial intelligence. Since that same office provided me with some very useful business orientation when I was seeking a (partially, as it turns out) nonacademic career, I am always happy to return the favor. I've been doing this program for almost a decade, and the students have always been very enthusiastic and positive about analytics. But this year was different.
- Law (1.00)
- Information Technology > Security & Privacy (0.30)
SXSW 2018: Protect AI, robots, cars (and us) from bias
As Mark Hamill humorously shared the behind-the-scenes of "Star Wars: The Last Jedi" with a packed SXSW audience, two floors below on the exhibit floor Universal Robots recreated General Grievous' famed lightsaber battles. The battling machines were steps away from a twelve-foot dancing Kuka robot and an automated coffee dispensary. Somehow the famed interactive festival known for its late-night drinking, dancing, and concerts had a very mechanical feel this year. Everywhere, debates ensued between utopian tech visionaries and dystopia-fearing humanists. Even my panel on "Investing In The Autonomy Economy" took a very social turn when discussing the opportunities of utilizing robots for the growing aging population.
- Media > Film (0.90)
- Leisure & Entertainment (0.90)
- Government > Regional Government > North America Government > United States Government (0.50)
We need to shine more light on algorithms so they can help reduce bias, not perpetuate it
It was a striking story. "Machine Bias," the headline read, and the teaser proclaimed: "There's software used across the country to predict future criminals. And it's biased against blacks." ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS. It is being used to forecast which criminals are most likely to reoffend.
- Media > News (0.36)
- Law > Criminal Law (0.31)
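ProPublica's "Machine Bias" analysis centered on error-rate disparities: among defendants who did not reoffend, how often were members of each group labeled high risk? That false-positive-rate comparison can be sketched as below. The data here is purely hypothetical, chosen only to illustrate the metric, and is not ProPublica's COMPAS dataset or methodology.

```python
# Sketch: false positive rate per group, the disparity at the heart
# of the "Machine Bias" COMPAS debate. All data is made up.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (non-reoffenders) flagged as high risk."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives

# Hypothetical outcomes (1 = reoffended) and risk flags (1 = high risk)
y_true_a = [0, 0, 0, 0, 1]; y_pred_a = [1, 1, 0, 0, 1]
y_true_b = [0, 0, 0, 0, 1]; y_pred_b = [1, 0, 0, 0, 1]

fpr_a = false_positive_rate(y_true_a, y_pred_a)  # 2 of 4 non-reoffenders flagged
fpr_b = false_positive_rate(y_true_b, y_pred_b)  # 1 of 4 non-reoffenders flagged
```

Note that a tool can be calibrated (equal precision across groups) while still showing unequal false positive rates like this; that tension between fairness criteria is what made the COMPAS controversy so durable.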