Ocasio-Cortez is right, algorithms are biased -- but we can make them fairer

#artificialintelligence

Rep. Alexandria Ocasio-Cortez (D-N.Y.) recently began sounding the alarm about the potential pitfalls of using algorithms to automate human decision-making. She pointed out a fundamental problem with artificial intelligence (AI): "Algorithms are still made by human beings... if you don't fix the bias, then you are just automating the bias." She has continued to raise the issue on social media. Ocasio-Cortez isn't the only person questioning whether machines offer a foolproof way to improve decision-making by removing human error and bias. Algorithms are increasingly deployed to inform important decisions on everything from loans and insurance premiums to job and immigration applications.


We need to shine more light on algorithms so they can help reduce bias, not perpetuate it

#artificialintelligence

It was a striking story. "Machine Bias," the headline read, and the teaser proclaimed: "There's software used across the country to predict future criminals. And it's biased against blacks." ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS, which is used to forecast which criminals are most likely to reoffend.
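ProPublica's finding rested on comparing error rates across groups: a score can look accurate overall while making more false "high risk" calls for one group than another. A check of that kind can be sketched as follows (this is an illustrative toy, not ProPublica's actual code or data; the function name and the numbers are invented for the example):

```python
# Illustrative sketch: comparing false positive rates of a binary risk
# score across two groups -- the kind of disparity the "Machine Bias"
# analysis reported for COMPAS. All data below is hypothetical.

def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend (outcome 0) but were
    nevertheless flagged as high risk (prediction 1)."""
    flags_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flags_for_non_reoffenders:
        return 0.0
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Toy data: 1 = flagged high risk (predictions) / did reoffend (outcomes).
group_a_pred = [1, 1, 0, 1, 0, 0]
group_a_true = [0, 1, 0, 0, 0, 1]
group_b_pred = [0, 1, 0, 0, 0, 1]
group_b_true = [0, 1, 0, 0, 0, 1]

fpr_a = false_positive_rate(group_a_pred, group_a_true)
fpr_b = false_positive_rate(group_b_pred, group_b_true)
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
# Here group A's non-reoffenders are flagged far more often than group B's.
```

Even with identical overall accuracy, a gap like this means the cost of the model's mistakes falls unevenly on one group.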


Inspecting Algorithms for Bias

MIT Technology Review



Teaching an AI to be less biased doesn't have to make it less accurate

New Scientist

Making an artificial intelligence less biased makes it less accurate, according to conventional wisdom, but that may not be true. A new way of testing AIs could help us build algorithms that are both fairer and more effective. The data sets we gather from society are infused with historical prejudice and AIs trained on them absorb this bias. This is worrying, as the technology is creeping into areas like job recruitment and the criminal justice system.
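Testing on both axes at once is what makes the trade-off visible: score a model on accuracy and on a fairness measure, then search for models that do well on both. A minimal sketch, using one common fairness measure (the demographic parity gap, i.e. the difference in positive-prediction rates between groups) and invented toy data — the article does not specify which metric the researchers used:

```python
# Illustrative sketch (assumed metric, hypothetical data): evaluating a
# classifier jointly on accuracy and on the demographic parity gap.
# Reporting both numbers lets us compare candidate models for ones that
# are fairer without being less accurate.

def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def parity_gap(preds, groups):
    """|P(pred = 1 | group A) - P(pred = 1 | group B)|."""
    def positive_rate(g):
        in_group = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(in_group) / len(in_group)
    return abs(positive_rate("A") - positive_rate("B"))

# Toy recruitment data: label 1 = qualified, group = applicant group.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds  = [1, 0, 1, 1, 1, 1, 0, 0]

print(f"accuracy:   {accuracy(preds, labels):.2f}")
print(f"parity gap: {parity_gap(preds, groups):.2f}")
```

A model trained on historically prejudiced data will tend to show a large gap here even when its accuracy looks good, which is exactly why measuring fairness separately from accuracy matters.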


Artificial intelligence to enhance Australian judiciary system

#artificialintelligence

Sentences handed down by artificial intelligence would be fairer, more efficient, more transparent and more accurate than those of sitting judges, according to Swinburne researchers.