Managing the risks of inevitably biased visual artificial intelligence systems

Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is reflected not only in language patterns but also in the image datasets used to train computer vision models. As a result, widely used computer vision models such as iGPT and DALL-E 2 generate new explicit and implicit characterizations and stereotypes that perpetuate existing biases about social groups, which in turn shape human cognition. Such computer vision models are used in downstream applications for security, surveillance, job candidate assessment, border control, and information retrieval.
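Bias measurements of this kind typically build on embedding association tests, which compare how strongly two sets of target embeddings associate with two sets of attribute embeddings via cosine similarity. The sketch below illustrates the general technique with made-up 2-D vectors; the function names and toy data are illustrative assumptions, not the actual embeddings or code used in the research.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # How much more strongly w associates with attribute set A than B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Standardized differential association of target sets X and Y
    # with attribute sets A and B (positive: X leans toward A).
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy, hypothetical embeddings: X points toward A, Y points toward B,
# so the test should report a positive association.
X = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
Y = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]
A = [np.array([1.0, 0.05])]
B = [np.array([0.05, 1.0])]

print(effect_size(X, Y, A, B))  # positive value: X is associated with A
```

In a real audit, the toy vectors would be replaced by embeddings extracted from a model such as iGPT for images of the social groups and attributes under study.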
