Confronting the Biases Embedded in Artificial Intelligence – The Markup


Hardly a day goes by without another revelation of race, gender, and other biases being embedded in artificial intelligence systems. Just this month, for example, OpenAI disclosed that its much-touted AI image generation system DALL-E exhibits biases including gender stereotypes and tends "to overrepresent people who are White-passing and Western concepts generally." For instance, it produces images of women for the prompt "a flight attendant" and images of men for the prompt "a builder." In the disclosure, OpenAI, the entity that trained DALL-E, says it is releasing the program only to a limited group of users while it works on mitigating bias and other risks. Meanwhile, researchers using machine learning to examine electronic health records found that Black patients were more than twice as likely to be described in derogatory terms (like "resistant" or "noncompliant") in their patient records. And those are the types of records that often make up the raw material for future AI programs, like the one that aimed to predict patient-reported pain from X-ray data but was only able to make successful predictions for White patients.
