Finding the Fairness in AI

#artificialintelligence 

As artificial intelligence (AI) becomes more widely used to make decisions that affect our lives, ensuring those decisions are fair is a growing concern. Algorithms can incorporate bias from several sources, from the people involved at different stages of their development to modelling choices that introduce or amplify unfairness. A machine learning system used by Amazon to pre-screen job applicants, for example, was found to be biased against women, while an AI system used to analyze brain scans failed to perform equally well across people of different races.

"Fairness in AI is about ensuring that AI models don't discriminate when they're making decisions, particularly with respect to protected attributes like race, gender, or country of origin," says Nikola Konstantinov, a post-doctoral fellow at the ETH AI Center of ETH Zürich, in Switzerland. Researchers typically measure the fairness of machine learning systems with mathematical tools built around a specific definition of fairness.
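One common such definition (not specific to Konstantinov's work) is demographic parity, which asks whether a model grants positive outcomes at similar rates across groups defined by a protected attribute. The sketch below, in Python with made-up data, shows how this could be measured; the function name and example numbers are purely illustrative.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between two groups.

    y_pred: array of 0/1 model decisions
    group:  array of 0/1 protected-attribute labels
    A value near 0 means both groups receive positive decisions
    at similar rates under this definition of fairness.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical screening decisions, for illustration only:
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]

# Group 0 is approved 75% of the time, group 1 only 25%,
# so the demographic parity difference is 0.5.
print(demographic_parity_difference(decisions, groups))
```

Other definitions (such as equalized odds, which also conditions on the true outcome) can disagree with demographic parity on the same data, which is why the choice of fairness definition matters.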
