Artificial intelligence can determine racial identity from medical images
The study was published in The Lancet. The researchers found that their results raised the possibility that AI models trained on medical images could encode racial bias. AI is used in medicine to diagnose illness with human-like reasoning and intelligence, so the prospect of such a system carrying bias is concerning to researchers. They recognize the trade-off in building AI that comes this close to human intelligence: it can transform health care, but it can also exhibit unintended bias absorbed through its training.
Robots turn racist and sexist with flawed AI, study finds: Neural networks built from biased Internet data teach robots to enact toxic stereotypes
The work, led by researchers at Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, is believed to be the first to show that robots loaded with an accepted and widely used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT).

"The robot has learned toxic stereotypes through these flawed neural network models," said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student in Johns Hopkins' Computational Interaction and Robotics Laboratory. "We're at risk of creating a generation of racist and sexist robots, but people and organizations have decided it's OK to create these products without addressing the issues."

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet.
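To make concrete how biased training data can surface as measurable stereotyped associations, here is a minimal, hypothetical sketch of a WEAT-style embedding-association probe, a standard technique in bias auditing (not the study's own code). All vectors are hand-made 3-dimensional stand-ins; in a real audit they would be embeddings extracted from the model under test.

```python
# Toy WEAT-style association test: a neutral word's embedding is
# compared against two attribute sets ("pleasant" vs "unpleasant").
# A model trained on unbiased data should score near zero for
# neutral words; skewed web data can push the score off zero.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(word_vec, attr_set_a, attr_set_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    mean_a = sum(cosine(word_vec, v) for v in attr_set_a) / len(attr_set_a)
    mean_b = sum(cosine(word_vec, v) for v in attr_set_b) / len(attr_set_b)
    return mean_a - mean_b

# Hypothetical attribute-set embeddings (illustrative values only).
pleasant   = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
unpleasant = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]

# Two role words that SHOULD be neutral; here their (made-up)
# embeddings mimic what a model might learn from a skewed corpus.
doctor   = [0.85, 0.15, 0.05]
criminal = [0.15, 0.85, 0.05]

print(association(doctor, pleasant, unpleasant))    # positive: leans "pleasant"
print(association(criminal, pleasant, unpleasant))  # negative: leans "unpleasant"
```

A nonzero score for a word that should be neutral is the kind of signal bias audits look for; the same idea extends to the image-and-text models the robots in the study relied on.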