Study suggests that AI model selection might introduce bias

#artificialintelligence 

The past several years have made it clear that AI and machine learning are not a panacea when it comes to fair outcomes. Applying algorithmic solutions to social problems can magnify biases against marginalized peoples, and undersampling those populations tends to produce worse predictive accuracy for them. But bias in AI doesn't arise from the datasets alone. Problem formulation, the way researchers fit tasks to AI techniques, can also contribute.
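
To make the undersampling point concrete, here is a minimal sketch (not from the study): it builds synthetic data with two groups whose feature-label relationships differ, trains a single model on a training set where one group is heavily undersampled, and compares per-group test accuracy. The group names, sample sizes, feature dimensions, and distributions are all illustrative assumptions.

```python
# Illustrative sketch: undersampling one group in training can reduce
# predictive accuracy for that group. Not from the article; all values assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic binary labels; 'shift' changes how much feature 3 matters,
    so the two groups have different feature-label relationships."""
    X = rng.normal(size=(n, 5))
    logits = X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + shift * X[:, 3]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training set: group A is well represented, group B is undersampled.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=3.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on equally sized held-out sets from each group; the undersampled
# group typically scores noticeably lower.
for name, shift in [("group A (well sampled)", 0.0), ("group B (undersampled)", 3.0)]:
    X_test, y_test = make_group(2000, shift=shift)
    print(name, "accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```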
