AI Algorithms Are Biased Against Skin With Yellow Hues
After evidence surfaced in 2018 that leading face-analysis algorithms were less accurate for people with darker skin, companies including Google and Meta adopted measures of skin tone to test the effectiveness of their AI software. New research from Sony suggests that those tests are blind to a crucial aspect of the diversity of human skin color.

By expressing skin tone only as a sliding scale from lightest to darkest, or white to black, today's common measures ignore the contribution of yellow and red hues to the range of human skin, according to Sony researchers. They found that generative AI systems, image-cropping algorithms, and photo-analysis tools all struggle with yellower skin in particular. The same weakness could extend to a variety of technologies whose accuracy is known to be affected by skin color, such as AI software for face recognition, body tracking, and deepfake detection, or gadgets like heart-rate monitors and motion detectors.

"If products are just being evaluated in this very one-dimensional way, there's plenty of biases that will go undetected and unmitigated," says Alice Xiang, lead research scientist and global head of AI Ethics at Sony.
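The distinction the researchers draw can be made concrete with standard colorimetry. A one-dimensional scale captures only perceptual lightness (L* in the CIELAB color space), while the red–yellow character of a color lives in a separate quantity, the hue angle. The sketch below (a minimal pure-Python illustration, not code from Sony's study; the example RGB value is arbitrary) converts an sRGB color to CIELAB and reports both dimensions:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB color to CIELAB (D65 white point)."""
    # Undo the sRGB gamma curve to get linear channel values.
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> CIE XYZ using the standard sRGB/D65 matrix.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab (D65 reference white: 0.95047, 1.0, 1.08883).
    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    lightness = 116 * fy - 16          # L*: the one-dimensional scale
    a_star = 500 * (fx - fy)           # a*: green (-) to red (+)
    b_star = 200 * (fy - fz)           # b*: blue (-) to yellow (+)
    return lightness, a_star, b_star

def hue_angle(a_star, b_star):
    """Hue angle h* in degrees: ~0 is reddish, ~90 is yellowish."""
    return math.degrees(math.atan2(b_star, a_star)) % 360

# Arbitrary warm, medium skin-like color for illustration.
L, a_star, b_star = srgb_to_lab(224, 172, 105)
h = hue_angle(a_star, b_star)
```

Two colors with identical L* can still have very different hue angles, which is exactly the variation a lightness-only scale collapses.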
Oct-3-2023, 11:00:00 GMT
- Genre:
- Research Report > New Finding (0.53)
- Technology:
- Information Technology > Artificial Intelligence
- Issues > Social & Ethical Issues (0.59)
- Machine Learning > Neural Networks (0.57)
- Vision (0.57)