Researchers find evidence of bias in facial expression data sets
Researchers claim that the data sets often used to train AI systems to detect facial expressions like happiness, anger, and surprise are biased against certain demographic groups. In a preprint study published on Arxiv.org, they argue that machine learning algorithms become biased in part because the training samples they are given optimize their objectives toward majority groups. Unless explicitly modified, such algorithms perform worse for minority groups -- i.e., people represented by fewer samples. In domains like facial expression classification, it is difficult to compensate for this skew because training sets rarely contain information about attributes like race, gender, and age.
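The mechanism the researchers describe can be illustrated with a toy sketch (all names and data here are hypothetical, not from the study): when one group dominates the training set, a model that maximizes overall accuracy fits the majority group's patterns and degrades on the minority group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a single feature x predicts a label y (e.g. "smiling"),
# but the feature distribution is shifted for the minority group.
def make_group(n, shift):
    y = rng.integers(0, 2, size=n)
    x = y + shift + rng.normal(0, 0.4, size=n)
    return x, y

x_maj, y_maj = make_group(900, shift=0.0)  # majority: 90% of the data
x_min, y_min = make_group(100, shift=0.7)  # minority: 10%, shifted

x = np.concatenate([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

# Choose the single threshold that maximizes *overall* accuracy.
# That objective is dominated by the majority group's 900 samples.
thresholds = np.linspace(x.min(), x.max(), 200)
accs = [((x > t).astype(int) == y).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(accs))]

acc_maj = ((x_maj > t_best).astype(int) == y_maj).mean()
acc_min = ((x_min > t_best).astype(int) == y_min).mean()
print(f"majority accuracy: {acc_maj:.2f}, minority accuracy: {acc_min:.2f}")
```

The globally "best" threshold sits near the majority group's decision boundary, so the minority group's shifted examples are systematically misclassified, which is the skew the paper says is hard to correct when group labels are absent from the training set.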
Jul-24-2020, 17:10:06 GMT