Tool helps clear biases from computer vision
Researchers at Princeton University have developed a tool that flags potential biases in sets of images used to train artificial intelligence (AI) systems. The work is part of a larger effort to remedy and prevent the biases that have crept into AI systems influencing everything from credit services to courtroom sentencing programs.

Although the sources of bias in AI systems are varied, one major cause is the stereotypical imagery contained in the large collections of photos scraped from online sources that engineers use to develop computer vision, the branch of AI that allows computers to recognize people, objects and actions. Because computer vision models are built on these data sets, images that reflect societal stereotypes and biases can unintentionally influence the models trained on them.

To help stem this problem at its source, researchers in the Princeton Visual AI Lab have developed an open-source tool that automatically uncovers potential biases in visual data sets.
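The article does not describe the tool's internals, but one common form of data set audit, offered here purely as an illustrative assumption rather than a description of the Princeton tool, is to measure how object labels co-occur with a contextual attribute and flag labels whose distribution is dominated by a single attribute value. In the Python sketch below, the annotation schema, the hypothetical "scene" attribute, and the thresholds are all invented for illustration:

```python
# Minimal, hypothetical sketch of a data set bias audit: flag object labels
# whose co-occurrence with an annotated attribute is dominated by a single
# value. The annotation schema, "scene" attribute, and thresholds are
# assumptions for illustration, not the Princeton tool's actual interface.
from collections import Counter, defaultdict

def flag_skewed_labels(annotations, attribute, threshold=0.7, min_count=3):
    """Return labels whose most frequent attribute value accounts for more
    than `threshold` of that label's occurrences (given enough examples)."""
    per_label = defaultdict(Counter)
    for ann in annotations:
        for label in ann["labels"]:
            per_label[label][ann[attribute]] += 1

    flagged = {}
    for label, counts in per_label.items():
        total = sum(counts.values())
        if total < min_count:  # skip labels with too few examples to judge
            continue
        value, count = counts.most_common(1)[0]
        share = count / total
        if share > threshold:
            flagged[label] = (value, round(share, 2), total)
    return flagged

# Toy annotations: "ball" appears almost exclusively in outdoor scenes,
# a skew an engineer auditing the data set might want to investigate.
annotations = [
    {"labels": ["ball", "person"], "scene": "outdoor"},
    {"labels": ["ball"], "scene": "outdoor"},
    {"labels": ["ball", "net"], "scene": "outdoor"},
    {"labels": ["ball"], "scene": "indoor"},
    {"labels": ["laptop"], "scene": "indoor"},
    {"labels": ["laptop", "person"], "scene": "outdoor"},
]
print(flag_skewed_labels(annotations, "scene"))
# {'ball': ('outdoor', 0.75, 4)}
```

Real audit tools go well beyond frequency counts of this kind, for example by examining where images were taken or the contexts in which people appear, but surfacing skewed label statistics is a typical starting point for this sort of analysis.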
October 4, 2020