AI models can be racist even if they're trained on fair data

#artificialintelligence 

AI algorithms can still carry racial bias even when they're trained on data that represents different ethnic groups more evenly, according to new research. An international team of researchers analyzed how accurately algorithms could predict cognitive and health measures, such as memory, mood, and even grip strength, from brain fMRI scans. Medical datasets are often skewed: they aren't collected from a diverse enough sample, and certain groups of the population are left out or misrepresented. It's not surprising, for example, if predictive models for skin cancer are less effective on darker skin tones than lighter ones. Biased datasets are often blamed for biased models, but the new findings suggest that balancing the data alone doesn't eliminate the problem.
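A minimal sketch of how such a bias can be detected, and why balanced data alone isn't enough. All data here is synthetic and the mechanism (noisier outcome measurements for one group, e.g. from instruments or protocols validated mainly on the other group) is an illustrative assumption, not the mechanism identified in the study. Even with equal sample sizes per group, evaluating the pooled model separately per group reveals unequal accuracy:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_per_group = 500   # balanced: identical sample size for both groups
n_features = 20     # stand-in for fMRI-derived features

# Both groups share the same true feature-outcome relationship,
# but group B's outcome (e.g. a memory score) is measured with more
# noise -- a hypothetical proxy for protocols less validated for B.
w = rng.normal(size=n_features)
X_a = rng.normal(size=(n_per_group, n_features))
X_b = rng.normal(size=(n_per_group, n_features))
y_a = X_a @ w + rng.normal(scale=0.5, size=n_per_group)
y_b = X_b @ w + rng.normal(scale=1.5, size=n_per_group)

# Train one pooled model on the perfectly balanced dataset.
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
model = Ridge(alpha=1.0).fit(X, y)

# Key step: report accuracy per group, not just overall.
mae_a = mean_absolute_error(y_a, model.predict(X_a))
mae_b = mean_absolute_error(y_b, model.predict(X_b))
print(f"group A MAE: {mae_a:.2f}, group B MAE: {mae_b:.2f}")
```

The point of the sketch is the evaluation step: an aggregate error score would hide the gap, while per-group reporting exposes it even though both groups are equally represented in training.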
