Facebook is trying to make AI fairer by paying people to give it data

#artificialintelligence

Artificial intelligence systems are often criticized for built-in biases. Commercial facial-recognition software, for instance, often performs worse when classifying women and people of color. In an effort to help make AI fairer in a variety of ways, Facebook (FB) is rolling out a new data set for AI researchers that features a diverse group of paid actors who were explicitly asked to provide their own ages and genders. Facebook hopes researchers will use the open-source data set, which it announced Thursday, to judge whether AI systems work well for people of different ages, genders, and skin tones, and under different lighting conditions. Facebook has also released the data set internally; the company said in a blog post that it is "encouraging" its own teams to use it.


Can you make AI fairer than a judge? Play our courtroom algorithm game

#artificialintelligence

But increasingly, algorithms have begun to arbitrate fairness for us. They decide who sees housing ads, who gets hired or fired, and even who gets sent to jail. Consequently, the people who create them--software engineers--are being asked to articulate what it means to be fair in their code. This is why regulators around the world are now grappling with a question: How can you mathematically quantify fairness?
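One common way regulators and researchers answer that question (an illustrative sketch, not a formula from either article) is demographic parity: comparing the rate of favorable decisions an algorithm hands out across demographic groups. The function names and data below are hypothetical.

```python
# Illustrative sketch of "demographic parity", one common mathematical
# notion of fairness: an algorithm is fair under this definition if it
# grants favorable outcomes at equal rates across groups.
# All names and data here are hypothetical examples, not from the article.

def positive_rate(decisions):
    """Fraction of decisions that are favorable (1 = favorable, 0 = not)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in favorable-decision rates between two groups.

    A gap of 0.0 means both groups receive favorable outcomes at the
    same rate; larger gaps indicate a disparity under this definition.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical binary decisions (e.g., 1 = granted bail) for two groups:
group_a = [1, 1, 1, 0]  # 75% favorable
group_b = [1, 0, 0, 0]  # 25% favorable
print(demographic_parity_gap(group_a, group_b))  # → 0.5
```

Note that demographic parity is only one of several competing definitions; others, such as equalizing false-positive rates across groups, can be mathematically incompatible with it, which is part of why quantifying fairness remains contested.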