Partner Content
Three years ago, a Georgia Tech study uncovered a major flaw in self-driving vehicles: their artificial intelligence–based object detection models find it much harder to see darker-skinned pedestrians. The researchers tested how accurately the models detected pedestrians of different skin tones. But no matter what variables they changed (how large the person appeared in the image, whether they were partially blocked from view, what time of day it was), the imbalance remained, raising fears that in real-world applications, racialized people could be at higher risk of being hit by a self-driving car.

It's just one of far too many examples of how AI can be biased and, as a result, harm already-marginalized groups. "Think of something like melanoma detection," says Shingai Manjengwa, director of technical education at the Vector Institute for Artificial Intelligence.